Using ‘flyability’ in selecting paragliding competitions.

Another Canadian National Paragliding Competition draws to a close with eerily familiar results – few tasks, and those that were flown were of poor validity. A visiting pilot asked a few weeks back whether it was worthwhile to attend this year's event in Yamaska, PQ. The answer was a litany of 'hmm', 'well', and 'depends'. The unspoken answer was, 'If you are looking to fly a lot, consider alternatives'.

The local community may know that a site will be a question mark at a particular time of year. But for those overseas trying to plan summer flying trips things are a little trickier, especially if no locals have flown the site.

There must be a way to go back through the data collected during previous paragliding competitions and make a rough (very rough at that) comparison of which comps tend to have more flyable days and better-quality tasks.


Determining whether a task was held is fairly easy by looking through the score sheets. Determining the task validity (a guide to the task quality) is a bit trickier if the event organizers do not publish the validity score for the day. A reasonable guesstimate is to look at the top score for the day. A task with a perfect validity factor (1.0), flown by the best pilot, should result in a score of 1000 points. Even with a perfect validity factor, the top score for the day may fall short of 1000 due to the assignment of lead-out points. But overall, the top pilot should end up in the high 900s.

So, knowing that a no-task day nets the top pilot a score of 0 and a perfect day should end up close to 1000, we can sum the top score (out of 1000) for each task and divide by the number of days allotted to the comp.
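As a concrete sketch of that arithmetic (the daily scores and day count below are made up for illustration, not taken from any real comp):

```python
def flyability(top_scores, days_allotted):
    """Sum each day's top score (out of a possible 1000) and divide
    by the number of days allotted to the comp, giving average
    points per day. A no-task day contributes 0."""
    return sum(top_scores) / days_allotted

# Hypothetical 7-day comp where only three tasks were flown:
scores = [0, 980, 0, 640, 0, 0, 850]
print(round(flyability(scores, 7), 1))
```

A comp that flew strong tasks every day would approach 1000; one that sat on launch all week would approach 0.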

From this we can now compare paragliding competitions based on how much flying is done and how well matched the task is to the conditions of the day and the pilots involved – ultimately what I call ‘flyability’.

Now these numbers are by no means absolutes – weather happens, and even a bone-dry desert can see thunderstorms, so judging a comp on a single year will do us little good. But trends can be gleaned over multiple years.

So how do things stack up?

Notice any trends?
