TCPCP Indent Start
Alright. Welcome to the Cycling Performance Club Podcast.
Damian: The podcast where scientists, pro-cyclists, and cutting-edge coaches discuss topics in training, performance, science, and all things cycling.
Cyrus: The show is co-hosted by me, Cyrus Monk who is a professional cyclist and cycling coach
Jason: me, Dr. Jason Boynton who is a sports scientist and cycling coach,
Damian: And then there’s me, Damian Ruse, a professional cycling coach…
Jason: Today on the podcast, we have Dr. Teun van Erp, performance scientist with INEOS Grenadiers.
Damian: And we’re talking about how World Tour riders actually train. Well, that’s the overarching theme, and we do get into the details of this, but we’re breaking this topic into two episodes - the first one is on training load measures and the second gets further into the details of how World Tour riders actually train.
In part 1, Jason, Cyrus and Teun take aim at training load in training and racing. One of the measures discussed is TSS - you know, the Training Stress Score from TrainingPeaks. Teun is better placed than probably anyone in performance cycling to talk about TSS; he’s spent a lot of time looking at TSS to understand its limitations in varying situations and across different intensities. And like he says about sports science in general - not just TSS - working with any measure is about choosing something to work with: knowing its limitations and working with them as best as possible to keep moving forward.
TCPCP Indent End
Intro Indent Start
Damian: If you’ve listened to the show long enough, you’ll know we spend a lot of time examining assumptions around cycling performance. That we've taken it upon ourselves to find and relay the truth from as close to the source as possible. This way we know we’re bringing the best in cycling performance knowledge to the greater cycling community- because with better knowledge, comes a better sport.
Sometimes this message gets lost and it might seem like we are critical just for the sake of it - or we compare existing solutions with ideal, perfect ones—which are often unrealistic. But we are well aware of the challenges in transferring scientific findings into real world performance - and we are well aware of the Nirvana fallacy, the idea that when considering alternatives to what is already being done, it's important to compare real-world alternatives and not an imagined perfect solution that doesn't exist.
Teun was a bit of a kick in the pants in this regard. He is the antithesis of this: an experienced researcher with a very pragmatic stance on using what is available by understanding the limitations and communicating them. So instead of putting time into developing a new training load measure - which is another challenge altogether - he has put a lot of time into understanding the relationship between what’s out there and real world performance, at the top of the sport. He understands that even at the top level of cycling there are many challenges to translating best practice into real world solutions and performance.
In this episode we come across a lot of moments where, as Voltaire wrote in 1772, “Le mieux est l’ennemi du bien” - which translates literally to “the best is the enemy of the good”, but is often rendered as “perfect is the enemy of the good”. So I wanted to send you off into this episode with this in mind. Because even if something isn’t a perfect solution, if the alternative is having or doing nothing, then the hope is we are still learning and improving cycling and cycling performance by using an imperfect solution.
Jason: (01:13) So a little bit of background on how you came across the podcast, or how we know you: after we finished our interview with Dajo. (06:22) So you guys say it “Dah-yo”? Oh, yeah. So I’ve been mispronouncing his name.
Teun van Erp: (06:31) Yeah, but it's difficult, I think, for non-Dutch guys.
Jason: (01:13) I asked him, who would you like to see on the podcast? And or do you know anyone that would like to come on the podcast? And I knew his answer. Pretty much. I didn't even have to really ask him. I just knew I knew who he was gonna say. I knew he's gonna say, you should have Tuen come on. I knew he was gonna say, but it was when he said that I was pretty excited about it.
Damian: Why is Jason excited? Well, let’s do a quick round-up of Teun’s history in cycling performance. We’re talking about a sports scientist with 15 published articles on professional cycling from 2010-2021 and 9 years of experience with a World Tour cycling team - Team DSM and its previous incarnations.
And let’s take a moment to talk about the long road to getting a place in sports science at the World Tour level.
We have the degrees - BSc Physiotherapy, MSc in Human Movement Science and a PhD. But here’s where it gets really interesting.
It started with unpaid internships: 12 months of 2 days per week at the Dutch Olympic Committee and then 12 months for free with the Skil-Shimano cycling team, landing a full-time job at Team Argos-Shimano at the end of that period. This turned into 9 years as an embedded scientist within the same team, before shifting to a postdoctoral research position for 2 years, before his current position at INEOS came up - and here’s some inside information about how difficult and competitive applying for this type of position was.
Advertised in early 2021, the position drew nearly 100 candidates. This was whittled down to 60 candidates after the initial criteria of a PhD and 5 years of experience working in the field, followed by the review of an administrative panel and then a technical professional panel. In the end Teun was successful - and honestly, you can see why. Going from having to “prove the added value of a sports scientist in an elite sport” and working for free, to a barrier of entry this high 10 years later - a good thing for the sport.
So after all of that, you can understand why Jason was…
Jason: (01:13) So happy to have this conversation now.
Jason: (13:35) Um, so anyways, the purpose of this episode is that we're going to try to formulate a better picture of how professional cyclists, particularly those at the World Tour level, actually train and race. And in terms of people to talk to, you're probably high up there in the tier of researchers that have an idea of how this level of cyclist is actually performing in races and how they are training.
So first of all, I'll ask you the question: why would we do research on pro cyclists like this? Is it important?
Teun van Erp: (14:29) Why? Yeah, I think it's a fair question. I think it's important, especially since some of the data I published is known by the World Tour teams, but they are the only guys who know it. So I think it's important to publish for the coaches and the riders below that - but that's more in general, to share the knowledge. And I think it will bring us further, right? I mean, I did a paper one year ago about fatigue resistance, and it's really good to see that one year later it comes back in podcasts and people talk about it, and it makes an extra step in how we can use the data. When I started in 2011 the whole power data thing was pretty new, and we had to get involved in these kinds of things. And now fatigue resistance is a pretty hot topic, so I know other people are working on that - I only looked at kilojoules, but probably the intensity is way more important than the endurance. So we can improve cycling.
And yeah, I have a lot of reasons why we should look at it. When I started working on it, for example, all those training load measures - we are now figuring out that they are all not really meant for cycling. So we should look into that as well to see if we can improve. And I think when we start sharing data and ideas, other people will use that data and those ideas to make new ideas, which are better or more advanced. So I think that's the reason: just to get the whole cycling community on a higher level.
Jason: (16:25) Yeah, for sure. I particularly like having that data out there. You know, I'm working with a guy that's aspiring to be a pro, so I can put that paper in front of him and say, look, these are at least targets of where you should expect to train in order to hit the level of performance that you want to be at. So it's just nice to have that.
Teun van Erp: (16:56) Yeah. And there is not so much out there. But I think in general amateurs and upcoming cyclists feel they always have to train hard. You see in your paper, for example, that they do 60% in zone one, or what I call zone one. So it's also for those coaches working at a little bit lower level: you could share this data from professionals - they are not always going full gas, and they are taking recovery days. So it can help you guys convince that rider that it is okay to train easy or to take recovery days, because in the old cycling days they never took recovery days and they did crazy amounts of distance. And yeah, it's not really the case anymore.
Jason: (17:52) Yeah. So one of the things that we should make apparent to the listeners is one of the reasons why it's taken so long for a lot of these papers to come out. I think the earliest papers looking at this level of riders might have been early 2000s, late 90s - I mean, Lucia
Teun van Erp: (18:16) Lucia, yeah,
Jason: (18:18) His is the name that comes to mind when looking at this cohort. And it's actually really difficult to do any kind of science on this cohort. So a lot of times the research that comes out of it is either a retrospective study, where you just look at the data that's already there - which is what most of your research is, if I'm not mistaken; a lot of the stuff we're going to talk about today is retrospective, like, you had the data and went back and looked at it. Or you have very non-invasive prospective studies, kind of like some of the stuff that Dajo did during his PhD. Or you have to go down a few categories - the more invasive the study is, the farther down in rider category you have to go.
And a little bit of a discussion around this is that what a professional cyclist is, is defined: they do a certain amount of training, they have a certain amount of races to get to, and they obviously have to make an income off of it. But it's almost like Schrödinger's cat: when you make the observation, you potentially change what you're observing. And that's what we have to be very aware of when it comes to researching professional cyclists, because if you want to do, say, a muscle biopsy study on a World Tour cyclist - how things are looking on their taper into the Tour de France - you are now changing that definition of the professional cyclist, because now you're really messing with things.
So these are things that we will probably never get to know, unless maybe we find some case study or something like that - one-offs with people. And so this is where the retrospective studies come in. I think your dataset spans over four years, with Sunweb, if I'm not mistaken (DSM).
Teun van Erp: (20:42) Yeah, my PhD took four years, and then after that I added another four years or something like that. Now I've got more, because I was working in a team collecting data. But for my PhD I decided, okay, I will use this data.
Jason: So, yeah, one thing I would like is to have a conversation about the difference between retrospective and prospective studies for the listeners, because I think a number of the people listening to this podcast are probably keen to read papers, and they'll hopefully go and look up your research and have a look at it.
And I think if you're outside of the sports science field, you might look at the findings from a prospective study and a retrospective study and see them as equal. But there are limiters to each one of those types of studies, and they will tell you potentially different things. And I think we'll get into the specifics of that when we get into the individual research papers. But there are some caveats when you're looking at retrospective studies, because that's all you can really do - there are going to be limits. You probably weren't able to set up testing sessions and things like that, so you have to define zones retrospectively, right? So what method are you going to use? You can't necessarily use a weekly threshold test to define zones in that situation; a lot of your stuff is the best 20-minute power for the season, and that would be how you determine FTP - FTP, let alone versus critical power, right? So this is true through all research: as soon as you start going into retrospective analysis, it really limits what you can look at. And then you have to be careful about how certain you are about the results, I think.
Teun van Erp 23:02
Yeah, yeah, no, you're right. But a prospective study would have been better?
Damian: I’m jumping in here to further explain the differences between retrospective and prospective studies, as it helps us non-sports-scientists know what we can use from each type. So to back right up, here’s a simple explanation of the two: when looking at options for either testing hypotheses or asking specific questions, researchers choose between prospective and retrospective studies. This choice is often dictated by a few main factors, like budget and the availability of athletes. In the case of Teun’s research…
Teun van Erp: …it's almost impossible to do with, like, professional athletes, because we can't divide them into groups and then say, okay, group A, you're going to do a certain training and group B, you do control training. Nope, it doesn't work like that.
Damian: As Teun said, in his case a prospective study would involve designing an intervention for professional cyclists - asking them, for example, to commit to the researcher’s training prescriptions for a set period of time. You can see the issue with that.
Teun van Erp: So the retrospective, yeah, I think it's the only way, especially with these big data sets. In my case it would have been slightly better if I had had a performance test, like a step test in the lab, to define the zones. But still, I think it gives a really good idea about the training and racing and the demands, although the zones will maybe be slightly off. But yeah, in this case it's the only way to get these kinds of large data sets and to do this kind of research, although prospective would have been better, I think.
Damian: I will say that retrospective studies have the advantage of being quicker to turn around - there’s no designing studies, recruiting participants, or collecting baseline data before the research subjects undergo a specific intervention. This might be why we are seeing a lot of these of late: they are easier to churn out, and they certainly have their place next to prospective studies - but in my opinion, it’s the prospective studies we really need to move performance science forward.
Jason: Yeah, and it gets into this: on one side, you want to realise the limitations of the research. But on the other side, it's not a good idea to get into what we would call the Nirvana fallacy. And I hear the Nirvana fallacy about exercise physiology and sports science research a lot, usually from people outside the field - engineers or something like that - who are used to very stringent first principles. And you're just not going to get that from our field.
Teun van Erp: No.
Jason: Basically, it's like I've said before about just analysing athlete data: shining some kind of dim light on the subject, where you at least know the limitations of the analysis, is better than nothing. Right?
Teun van Erp: Yeah
Jason: This is the argument I get into with Paolo when we spar about using a training load model in TrainingPeaks versus not using it at all. I'm like, well, I have a pretty good idea what the limitations of the model are - I've used it for like a decade now. It's probably like HRV, you know? Yeah.
Teun van Erp 25:38
Yeah. But I think a big problem is that we who did a PhD, who are writing papers, we understand the limitations. But I think a lot of people that work in the field don't understand the limitations, and I think that is where it goes wrong. I did a presentation a month ago about training loads in a team, and, for example, everybody is using TSS, and they don't even know the limitations. And then things go wrong.
Jason: Yeah, we'll definitely talk about those.
Teun van Erp 26:14
There are definitely a lot of limitations in TSS. When I wrote that presentation I was trying some things and, like, what do you call it in English? Like a mess, or…
Jason: a dumpster fire?
Teun van Erp: Like, no, you open a can of worms.
Jason: Go down a rabbit hole. Yeah.
Teun van Erp 26:34
Yeah. But still, it's better than not using any load measure.
Jason: Yeah, exactly. I'm kind of in the middle with TSS. The next topic I want to discuss is training load, but just to prelude that: in terms of TSS, on one side you might have someone like Steven Seiler - I remember a tweet of his saying something like anyone that uses TSS doesn't understand it, or something like that. He just really smashed it, right?
Damian: The actual tweet from Steven Seiler is this…quote:
With apologies, the TSS just makes me LAUGH as a physiologist, CRY as a «coach» for athletes who somehow believe this made up metric is something they should follow religiously, and YAWN as an athlete who knows better and does not give the number 2 seconds worth of attention.
Teun van Erp 27:20
But Paolo is, I think, the same. He's also against normalised power and TSS. Yeah.
Damian: And a little further down the same thread as the last tweet is Paolo Menaspà, quote:
…Every single time I review a paper using TSS I ask to cite a validation study. Nothing. Also, consider TSS uses NP (equally never validated). BTW, I'm going to start using Seiler (2020) as a reference.
Teun van Erp 27:33
I understand where they come from, because it's just a made-up formula, a made-up number that says: if I do one hour at FTP, I have 100 points. So I understand where they come from. But I feel that all the other load measures also have their disadvantages. Maybe they are more backed by science, but if you look closely at them, they also have their limitations.
Jason: Although, on the other side - the opposite of Seiler - you'll see a TrainingPeaks article written by some random coach that'll just be like, TSS is the best, and here's why. And you're like, what? No, it's not that either.
Jason: It's somewhere in the middle: it's not perfect, it's not great; it has some strengths and other measures have other strengths, and the best thing is to figure out which is best. And honestly, day to day I talk a lot about training load, and I look at it like, man, this is so ripe for the picking for somebody to come along and do something better. But you know, there's the path dependency of the PMC - like, how hard is it to come up with a training load measure that has an ATL that's exponentially weighted, based on heart rate? How hard is it?
Teun van Erp 29:08
We are talking a little bit about it now with Dajo - and I still have a PhD student - we are talking a little bit about it. But then if you come up with something new, we have to validate it, and validate it in a good way - not validated the way the other load measures are "validated", because they are not validated for cycling. Only the study that Dajo did kind of validated load measures. And yeah, so it's still pretty difficult - not difficult, but you still have the feeling that you're making something up. Yeah.
Damian: While you wait for that one, check out our last show. We chat with Dr. Elisabetta Borgia - sports psychologist for Trek-Segafredo and coordinator of mental support for the Italian Cycling Federation. In this episode we discuss the importance of a cyclist’s emotions when pursuing peak performance in the sport. Full-time sport psychologists are one of the newest additions to the pro cycling team performance staff roster, so we were very excited to hear about her role and experiences working with these athletes.
We also take a look at the details of how a specific type of therapy emerging in this space- dialectical behavioral therapy (DBT) can apply to the performance of athletes of all levels.
That episode is up now wherever you got this one.
Jason: Let's jump into the first of two studies here to discuss the training load research that you've done. One is the relationship between various training load measures in elite cyclists during training, road races and time trials. And the other one is the influence of exercise intensity on the association between kilojoules spent and various training loads in professional cycling. So that first one you did with Carl Foster and Jos, yeah? You want to tell me about it?
Teun van Erp 36:54
Yeah. The first one is pretty easy, I think, because I took the measures - kilojoules, session RPE, Lucia's TRIMP, and TSS - and I selected all the files of training, races and time trials for all 21 individual riders, and then I looked at all the correlations.
Damian: Did you catch that? Teun fires off those measures from the first study pretty quickly. So if you missed it, here they are again:
This study investigated the relationship between mechanical energy spent (in kilojoules), session rating of perceived exertion, Lucia training impulse (LuTRIMP), and training stress score (TSS) in training, races, and time trials (TT).
Most of these are fairly common - the one that you might not have heard of is Lucia’s TRIMP, TRIMP being short for ‘training impulse’. Lucia’s TRIMP is one of the variations of TRIMP. It is a measure of internal load that uses a summated score based on the duration spent in each of 3 HR zones: low (<VT1), moderate (VT1-VT2) and high (>VT2), arbitrarily weighted for the intensity of the HR zone as 1, 2 and 3, respectively.
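For the technically inclined, that summated-zone idea can be sketched in a few lines. The function name and the time-in-zone inputs are our own illustration; in the actual studies the zones are defined from each rider's ventilatory thresholds.

```python
def lucia_trimp(minutes_z1, minutes_z2, minutes_z3):
    """Summated-zone internal load: minutes below VT1, between
    VT1 and VT2, and above VT2, weighted 1, 2 and 3 respectively."""
    return 1 * minutes_z1 + 2 * minutes_z2 + 3 * minutes_z3

# A 3-hour ride: 120 easy minutes, 50 moderate, 10 hard.
print(lucia_trimp(120, 50, 10))  # 120 + 100 + 30 = 250
```

Note how crude the weighting is - an hour just below VT2 counts exactly twice an hour of easy spinning, no more. That arbitrariness is part of what the conversation below is about.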
And how did these measures correlate with training and racing?
Teun van Erp 36:54
…they correlate really well in training with each other. So it means you're kind of measuring the same thing. And then in races, as you see, the correlations are weaker with the internal load measures, such as session RPE and Lucia's TRIMP, and it's probably because of fatigue. In training you can control hydration well, and if it's really warm you can go in the morning - but in races you can't do those kinds of things. So that's why you get a little bit more variation, and why - as Dajo also wrote in one of his papers - it could be useful to combine load measures and look at how they relate in races.
But the most interesting thing is what you saw with TSS - with TSS and all the load measures, the slope of the relationship was different in training compared to races, which I kind of already knew when I wrote it down. Because I was trying to use the combination of session RPE and TSS as a fatigue/fitness tool in cycling. The idea was: TSS will stay the same, and session RPE will be higher when the rider is more fatigued. So you do two rides of 100 TSS; one where you're fresh, you give it a 12, and one where you're fatigued, you give it a 15, because you're more tired. And then the relation between the two will tell me if a rider is in really good shape or really fatigued. But then I saw that in races the relationship is always lower, so they will get more TSS for the same amount of session RPE. And then I dove more into it as I wrote the second paper, which shows that, because of the intensity effect, the intensity factor in TSS is crap.
So for the listeners who don't know: the TSS formula looks really nice, but it's basically intensity factor squared, multiplied by the amount of hours, multiplied by 100. So it means when you ride at an intensity factor of 0.5, you square it to get 0.25, multiplied by the amount of hours, multiplied by 100. And then I noticed that if you do the same amount of kilojoules at low intensity or at high intensity, you will collect about 50% more TSS points at the high intensity - but externally you did the same work. And that's what the second paper is about. With all the other load measures, the relationship is the same in training and races. So if you burn 3700 kilojoules - I use that as an example - you collect the same amount of session RPE and the same amount of Lucia's TRIMP. But somehow you get 50% more TSS points.
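Teun's description of the formula can be sketched in two lines. This is the commonly published TrainingPeaks definition (TSS = IF² × hours × 100); the variable names are ours.

```python
def tss(hours, intensity_factor):
    """Training Stress Score: intensity factor squared,
    times duration in hours, times 100."""
    return intensity_factor ** 2 * hours * 100

print(tss(1.0, 1.0))  # 100.0 -> one hour at FTP is 100 points by definition
print(tss(1.0, 0.5))  # 25.0  -> half the intensity scores a quarter, not half
```

That second line is the whole issue in miniature: because IF is squared, easy riding is penalised disproportionately.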
Jason: Yeah, I would say, you know, I'm okay with the load measurement not lining up with kilojoules. In fact, I'd almost prefer it, just because not all kilojoules are created equally. They come from different bioenergetic systems, and some of those bioenergetic systems are going to be more stressful. So when I see that - as much as I'm not a huge TSS fanboy - I'm actually a little bit more like, yeah, cool, I would gravitate towards that. But I'd also say I think there's a better way to do it.
Teun van Erp 41:11
Yeah, for sure there is a better way. But still, it's a bit strange, because externally you did the same kind of riding - and also, your kilojoules will be higher when you do more intensity. So yeah, I don't know, it's a bit strange. And then normalised power - I looked at it last month, and normalised power does something strange with it. For example, with TSS: what if you do one minute full gas and put it in a one-hour ride? You collect a lot more. And if you put that same one minute in a five-hour ride? I was just like, what?
Yeah, I actually had that down as an anecdote: I had one of my athletes do a two-minute effort in a one-hour session, and everything else was easy, and I was like, is this right? I can't remember how much TSS he gained.
Teun van Erp 42:15
Yeah. It could be maybe 20, 25. On a ride of 50 TSS, you collect 25 extra for two minutes. Yeah.
Jason: But how is TrainingPeaks calculating the normalised power? How is that number generated?
Teun van Erp 42:35
So, if I'm correct, off the top of my head: it's a moving average of 30 seconds, then you - I don't know how to say it in English - you raise it to the fourth power. Then you take the mean of those numbers, and then you do the opposite: the fourth root. That's how you do it. And it's pretty strange, because, if I remember correctly from the things I did, it takes out the zeros and more or less shrinks the lower intensities - but when you do high intensity, it makes it higher. I can't explain it well.
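The algorithm Teun describes can be sketched in Python, assuming power samples at 1 Hz. This matches the commonly published definition of Normalised Power (30 s rolling average, fourth power, mean, fourth root); treat it as a sketch, not TrainingPeaks' exact implementation.

```python
def normalized_power(watts):
    """Normalised Power from 1 Hz power samples: take a 30-second
    moving average, raise each average to the fourth power, take the
    mean, then take the fourth root."""
    window = 30
    rolling = [sum(watts[i - window:i]) / window
               for i in range(window, len(watts) + 1)]
    fourth_powers = [p ** 4 for p in rolling]
    return (sum(fourth_powers) / len(fourth_powers)) ** 0.25

# One hour of steady riding: NP equals average power.
steady = [200] * 3600
# One hour of 30 s on / 30 s off surges with the same 200 W average.
surgy = ([400] * 30 + [0] * 30) * 60
print(round(normalized_power(steady)))  # 200
print(round(normalized_power(surgy)))   # well above 200
```

The fourth-power step is what Teun means by "it makes high intensity higher": surges dominate the mean, so a ride with the same average power gets a much bigger NP, and therefore a bigger IF and TSS.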
Jason: Yeah. So essentially, what it's trying to do - whether it succeeds or not, this is what I was always told it's trying to do - is basically give you a value that corresponds to what it would have been if you were riding at a steady power. So for example, if you're doing a session that has heaps of accelerations, and you're going up and down and up and down, the intensity is not steady at all. You might average 200 watts, but your normalised power is 300. It's basically saying that this is equivalent to if you'd just gone out and ridden at 300 watts for this
Teun van Erp 44:02
amount of time. Yeah, but you didn't really ride it, right? Yeah,
Jason: Exactly. And that's the thing: this is what the training load stems from, this normalised power, which is basically saying that these two are equivalent.
Teun van Erp 44:18
Yeah. And you're right to say it's not that bad that with TSS you get a little bit more points at high intensity. But I also did this in a simple Excel sheet - I calculated a lot of things; I have it in front of me now. So you're doing exactly the same amount of kilojoules - I think it's 2000 in this calculation. If you ride at an intensity factor of 0.2, which is really easy, for seven hours, you collect 28 TSS points. I know, it's really easy. But then when you do that same amount of kilojoules at 0.93 - so almost at FTP - for one and a half hours, you burn the same amount of kilojoules and you collect about 130 TSS. So it means if you ride at an intensity factor of 0.2, you have to ride for about 32.5 hours to collect the same amount of points as riding one and a half hours at FTP. Like, if you could choose, athletes: okay, are you going to ride over 30 hours really easy, or one and a half hours almost full gas? So the risk is that, in my opinion, it underestimates the low-intensity endurance rides for coaches, because they think, ah, it's only 150 TSS for a five-hour ride if they do it really easy. So that's a little bit the tricky part.
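Teun's Excel comparison can be reproduced in a few lines. The 400 W FTP here is a hypothetical number of our own (the exact wattage cancels out of the comparison); the point is that the kilojoules match almost exactly while the TSS differs by a factor of more than four.

```python
def tss(hours, intensity_factor):
    """TSS = IF squared x hours x 100."""
    return intensity_factor ** 2 * hours * 100

def kilojoules(hours, intensity_factor, ftp_watts):
    """Mechanical work: average power (IF x FTP, in watts) x seconds / 1000."""
    return intensity_factor * ftp_watts * hours * 3600 / 1000

FTP = 400  # hypothetical FTP; the comparison holds for any FTP

# Seven easy hours at IF 0.2 versus 1.5 hours near FTP at IF 0.93:
print(round(kilojoules(7.0, 0.2, FTP)))   # 2016 kJ
print(round(kilojoules(1.5, 0.93, FTP)))  # 2009 kJ -> essentially the same work
print(round(tss(7.0, 0.2)))               # 28 TSS
print(round(tss(1.5, 0.93)))              # 130 TSS for the same kilojoules
```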
And this is what Luke Plapp said when he came on - he discussed that he was seeing those numbers for his five-hour rides. And obviously, someone like him with a crazy high FTP, if he's going out and his normalised power is such a small fraction of that FTP, and then that fraction is squared, he ends up with this tiny TSS value. So yeah, he's getting 120, 130 TSS for a five-hour ride. And that's tough as an athlete to see, when you think, well, I can get that same thing in a two-hour ride if I ride hard. But are those actually equivalent? This is where all of your research, I think, is going: it's actually looking at the same thing.
Teun van Erp 46:49
And the risk is that they start pushing to get that number, for the coach or for their own mind. And yeah, so I'm not saying this is wrong - just know the limitations. Yeah.
Jason: But let's discuss the relationship here - it's escaping me now, because Dajo brought this up when he was on the show - the relationship between intensity and TSS is a quadratic one. Yeah. So let's talk about that, because we didn't really expand on it with Dajo, and it's unusual. So I'll let you go off on that.
Teun van Erp 47:36
That's, I think, a little bit what I said when I pointed out the differences. If you ride at 0.93, in this example, then because it's squared you end up with 0.86, multiplied by the amount of hours, multiplied by 100, which gives you the score. So the difference between 0.93 and 0.86 is almost nothing, because of the squaring - if you square one, you get one, but if you square a half, you get 0.25, so it's 50% lower. That's this quadratic relationship, which makes TSS a bit strange. If you do really short rides with five one-minute efforts, or two hard hours, you collect a lot of TSS points, while if you do a really easy ride at 150 watts - for these guys - for five, six hours, they only collect maybe 150 points, like Luke was saying. Yeah,
Well, okay, the weird thing is that, as you pointed out in your papers, there's nothing physiological that's quadratic in that way.
Tuen van Erp 48:54
Yeah, and it should be exponential, like the lactate curve. When I'm thinking about a training load measure, it should maybe be exponential, the same as the lactate curves, the same as the iTRIMP, which was also really good. In Dajo's study, where you ride at 150 watts, 200 watts, 250 watts, for the pro athlete it's kind of the same; the difference is not that substantial. But then when you do 300 watts it's slightly higher, and when you do 400, you make it exponential. So riding one minute at 600 watts, because it is really hard for the body, should give you a lot of points. But the difference between 200 and 250 watts shouldn't be that big.
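To make the quadratic-versus-exponential contrast concrete, here is a toy comparison, not any published model: the FTP value and the steepness constant `k` are arbitrary assumptions. A lactate-curve-like exponential weighting keeps easy intensities close together while making very hard efforts score disproportionately high:

```python
import math

FTP = 400.0  # assumed threshold, purely for illustration

def quadratic_weight(power: float) -> float:
    """TSS-style weighting: load per unit time proportional to (P/FTP)^2."""
    return (power / FTP) ** 2

def exponential_weight(power: float, k: float = 4.0) -> float:
    """Hypothetical lactate-like weighting: nearly flat at low power,
    rising steeply above threshold (k is an arbitrary steepness)."""
    return math.exp(k * (power / FTP - 1.0))

for watts in (150, 200, 250, 300, 400, 600):
    print(watts, round(quadratic_weight(watts), 2),
          round(exponential_weight(watts), 2))
```

Under the exponential curve, 600 W scores several times higher than under the quadratic, while the gap between 150 W and 250 W shrinks, which is the shape being argued for here.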
And not just that it's a quadratic, but it's a quadratic centred around FTP, which also doesn't have any physiological backing. So you've got something that doesn't have any grounding in physiology, and then you're adding a quadratic on top of that, so small margins of error can blow up into massive discrepancies if you don't have the FTP set correctly. And, as we've discussed a lot on this podcast, there are problems with FTP before you even get to this measure of training load.
Tuen van Erp 50:25
Yeah, yeah, that's true. But it is pretty difficult in pro cycling to get the zones correct, to be honest, no matter how you do it, even with the indoor tests. Like, we're doing the metabolic tests with lactate, everything you would do in a normal lab. And there you see some riders aren't used to riding indoors, so you get a really low value. I read a paper two weeks ago which says indoors and outdoors are different. But if you go outdoors, you can't control the weather, and there are other things you can't control, so it is pretty difficult to get it correct. You also see profiles you would never see in amateurs, like lactate profiles where they go full gas, you measure lactate, and it's not even five.
Yeah, this is where we get back to that conversation we had with Nick, his research paper on the blood lactate… So now, to put that into practice, you know, I hear
Damian: Iñigo San Millán - UAE team Emirates coach.
Yeah. And then him saying things like, oh, we do lactate testing with
Damian: Tadej Pogačar
And, you know, he rides at that first threshold a lot. And you're like, man, by the time you go through all of the error that's in that, and how inaccurate that test is, and how that changes over time, you're telling me you have that accurate enough that it's actually stimulating him somehow differently from anybody else?
Tuen van Erp 53:42
I think in this case it's the same as with the load measures. If you work as a scientist, it's easy to control everything, and then you see the guys in the field doing things and you're like, why are they doing these stupid or wrong things? But I think it's the same as with the training load: if you work in the field, you have to choose something. This is how we're going to do it; know the limitations and work with them as well as possible, with those limitations in the back of your mind. But if we start doubting everything when working as a sports scientist in the field, you can better stop working, because as a scientist you learn to be critical and you can criticise everything, and then you can't do anything.
Well, you have people working in the field like physiotherapists, which makes them super important in every top sport, and they also do stuff they don't fully know about, but they don't look that critically at themselves. And we sports scientists have to realise that if we keep criticising ourselves in the field, we will never become that important, because everybody's looking at us like, yeah, but you guys don't know anything for sure. So it's up to us to know those limitations and try to bring those limitations to the people we work with, but in a good way, and not be limited by the research in that sense. We have to move forward and not keep stuck in those limitations.
Jason: With the rest of the retrospective studies here, I did want to use these studies as kind of a teaching moment and dig a little bit deeper into your methods, so that the listeners can become a little more critical of retrospective studies and know what to look out for. So, for example, with TSS: if you are a coach and a practitioner, you are hopefully testing your athletes regularly. Maybe you're using their 20 minute power and calculating their FTP, which means you're adjusting it over time, and that means the TSS is probably a lot more accurate. However, in a retrospective study that's not possible. So how did you do that?
Tuen van Erp 59:18
So yeah, I'll show how I did it. I took the best 20 minutes of that season. We did 20 minute testing in January, and sometimes in July, but in general they will never hit their best in those tests; I think 80 to 90% of their real best is what you see there. I could have chosen to take the best 20 minutes of one month or two months, but the risk is that they never put in a true 20 minute effort in racing and training in that window. So I just took the best 20 minutes of the season and took 95% from that. And that said, the 95% factor itself is also not really true.
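The estimate described here can be sketched in a couple of lines; the power values below are invented for illustration, and, as is said here, the 0.95 factor is a convention rather than a physiologically exact number:

```python
def estimate_ftp(best_20min_powers: list[float]) -> float:
    """95% of the season-best 20-minute power (a conventional, imperfect factor)."""
    return max(best_20min_powers) * 0.95

# hypothetical best 20-minute mean power (watts) from each block of a season
season_bests = [355, 402, 388, 410, 397]
ftp = estimate_ftp(season_bests)  # season best 410 W -> ~389.5 W
```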
Because you've got to do something.
Tuen van Erp 1:00:12
Right. Yeah. Yeah. And of course, the best would be to determine your FTP every day and then calculate the TSS every day.
And then you no longer have a professional cyclist.
Tuen van Erp 1:00:29
Yeah, yeah. And so for the Lucia TRIMP and Edwards TRIMP we used, we took the maximum heart rate and set the zones based on Seiler. I don't know the percentages off the top of my head, but okay: below 70% of max heart rate is zone one, between 70 and 80 is zone two, and above 80 is zone three. So that's even more tricky than TSS, super broad, to be honest.
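The three-zone heart-rate split described here can be written as a small classifier; the exact percentage anchors are as quoted on the show, while the example HRmax and heart-rate samples are assumptions for illustration:

```python
def seiler_zone(hr: float, hr_max: float) -> int:
    """Three-zone model anchored on %HRmax, roughly as described:
    zone 1 below 70%, zone 2 from 70-80%, zone 3 above 80%.
    (Anchoring everything on maximum heart rate is itself a coarse choice.)"""
    pct = hr / hr_max * 100
    if pct < 70:
        return 1
    if pct <= 80:
        return 2
    return 3

# classify heart-rate samples for a hypothetical rider with HRmax 195
ride_hr = [120, 140, 150, 165, 178]
zones = [seiler_zone(hr, 195) for hr in ride_hr]
```

Time accumulated in each zone then becomes the intensity distribution used in this kind of retrospective analysis.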
So maximal anchors to determine zones are super problematic in themselves. But again, what are you going to do? We need to know what's going on here, and the best thing to do is just make sure you read through the methods and have a think about what the limitations could be.
While it's on my mind, and I've already had it written down, how many training zones do you use when you're working with athletes now or when you're prescribing?
Tuen van Erp 1:01:30
So I always use the Coggan zones from Training Peaks, but then I combined the last ones. I used five; they have seven, right? So I combined five, six and seven together. And the reason is that they ride 0.01% in zone seven, and then I have to make a bar graph and there's nothing there. So that's the reason.
Why are you determining these based on FTP, then when you are setting this?
Tuen van Erp 1:02:06
Yeah, we did FTP and then just used the percentages from the book, from Training Peaks. And yeah, you can discuss this a lot. But at the end, we won a lot of races.
At the end of the day, I win more than you do. So my science is better.
Tuen van Erp 1:02:31
No, I think it's yeah, in this case, as well, you have to know the limitations. Yeah. Yeah,
Exactly. And it's not a dig at your research at all, right? It's just to say that we can't expect these things to be perfect. If we want to know anything about this, we're going to have to use methods that aren't perfect.
Yeah, and it's good to hear that, because it can be worrying as a coach if you've listened to the times we've talked about all the things that are wrong with these different training models, or ways to determine thresholds and that kind of stuff; you can start to think there's nothing I can do that's right. But it's good to hear that even at the top level of the sport, people use something they know isn't necessarily the best thing out there, or the best thing possible, and it's still really useful. It's still something you can use to get results out of athletes, to measure what they're doing, to prescribe training. So it might not be the best thing, but it can still be used effectively.
Yeah, and actually it gets into a bit of an interesting conversation about what the coach can do in this situation in terms of measuring training load over the course of the season. Doing the kind of analysis he's doing there, a coach can actually get a better analysis, because they can do regular testing and that type of thing. They're not limited in that way.
Yeah. I've actually seen it where people have based their analyses off the methods that were in retrospective studies, and I'm like, no, you can actually do it better than what they did in the retrospective study, because they were limited by the data set. It kind of gets into an epistemological chicken-and-egg conversation: okay, so you want to do a training intensity distribution analysis? Well, if you're working with an athlete, you can do it really well, because you could potentially find a really good way to determine LT1, get the best critical power model set up, follow that throughout the year, and have that analysis just nailed. And then you go to the literature and you're like, wow, nobody did it as well as I did.
So one thing I'm thinking about, down the road whenever I get to it, is maybe having a dashboard set up with all of the different retrospective analyses for training intensity distributions. You're like, well, this is from Seiler et al. this year, and I want to compare my athlete with this cohort, so I run the same analysis they did. So instead of having just one training intensity distribution, if you wanted to compare what your athlete is doing to one of these other studies, you could have a whole dashboard set up with all the different training intensity distribution analyses from the different retrospective studies.
Tuen van Erp 1:06:03
Yeah, and I want to say that I think people also underestimate how difficult it is to do regular testing with pro athletes compared to amateurs. We were really keen on testing our riders with one of the coaches every two or three months, but it's impossible with the planning and the racing and the changes. So it's really difficult.
Yeah, and the data itself can get difficult. And if you want to do a performance test or threshold testing in endurance cycling, should it be fresh, after 30 minutes? Or should it be longer? At least for performance, I think you should also look at when they are fatigued and how they perform then. So yeah, there's a lot of discussion on how you can do it.
I lean towards fresh because it takes the noise out of it.
Tuen van Erp 1:07:46
So yeah, if you want to determine zones, fresh is probably the best. But if you want to determine whether a rider improved, I'm now leaning more towards fatigued testing. Because in my PhD we did one study where they trained for three months, and we did both a fresh test and a fatigued test, and the riders improved a lot on the fatigued test and not so much on the fresh one.
That's one of the things I think about as well. At some point, a critical power model probably isn't as sensitive as you want it to be. And then an athlete who's really into the testing aspect is like, man, my critical power or my FTP hasn't gone up in three or four months, and I'm doing all this training, what's going on? And I'm like, well, there are other things that are measurable, or maybe not so measurable, like how you're feeling at the end of a three hour ride right now, or at the end of a three hour race. You could be completely different between two scenarios with similar thresholds.
Tuen van Erp 1:08:59
Yeah, that's what I mean. On the fresh test he didn't improve, but he improved really a lot after three hours of racing. So that's also a limitation of testing fresh in endurance sport.
Damian: And so it continues - another area, another limitation to recognize. And this really feeds my biggest takeaway from this chat with Tuen. It's precisely as Tuen said: we have to move forward and not stay stuck in these limitations. Which translates in my world to: know your shit. Like, really know your shit. Understand the limitations of what you use, and take the information from people like Tuen and his research to know where the limitations of your measures are.
Yes, this is exactly what we do on this show. But don't just stop here. I said at the start we're on the search for the truth in cycling performance, and that statement may come off as a bit presumptuous, or even cringe, so to say it another way: we are actively looking for the limitations of the performance measures and tools we use. Because the reality is that sometimes the truth is like training load measures (or threshold markers): they are imperfect measures, but if we know them well enough to understand where they start to fall apart, can explain the limitations to others, and in practice catch the moments where they fail to be reliable or accurate, then we can use what we have right now to make the best decisions possible and get the best outcome possible.
Outro Indent Start
Dr. Tuen van Erp is a performance scientist with INEOS Grenadiers. Tuen, thank you for joining us and sharing your research and thoughts from working in the trenches. But we're not done with Tuen yet. In the next episode, part 2 of this interview, we get into how professional cyclists actually train, and what the demands are during races. A fascinating insight into the demands of both men's and women's professional cycling.
While you wait for that one, check out our last show, where we chat with sport psychologist Dr. Elisabetta Borgia about the importance of a cyclist's emotions when pursuing peak performance in the sport. Dr. Borgia works for Trek-Segafredo's men's and women's teams and is the coordinator of mental support for the Italian Cycling Federation. Full-time sport psychologists are one of the newest additions to the pro cycling team performance staff roster, so we were very excited to hear about her role and experiences working with these athletes.
We also take a look at how a specific type of therapy emerging in this space, dialectical behavioural therapy (DBT), can apply to the performance of athletes of all levels.
It's up now wherever you got this one.
If you learnt something new in this podcast, please share it, and subscribe to or follow us in whatever app you use to listen to your podcasts. Go ahead, you can find the button in that app; it's probably a heart, or it says follow or subscribe. It's in there somewhere, so go ahead and click it right now while you're listening.
And finally, check out the link to discuss further in the show notes if you have any questions about today’s episode or want to ask Tuen or any of us a question.
And with that, thanks for listening.