This article covers how ratings are calculated & how to report inaccurate ratings.
Surfline previously only rated surf quality where we had expert human forecasters making observations.
Our data science team uses 35 years of data and hundreds of thousands of Surfline observations to train a machine learning system to tell the difference between poor surf and good surf. We deliver this information for every hour at every spot we forecast, giving you a clear at-a-glance idea of the kind of surf quality you can expect.
The result? We now show ratings at every spot we issue a forecast for.
How do ratings work?
Surfline has 7 ratings, ranging from VERY POOR to EPIC.
Why did we do this? By popular request! We hope this tool will bring Surfline to life, and help you identify the windows in which promising swell is met with favourable winds at the spot.
The biggest advancement, however, is behind the scenes. Rather than simply commenting on model output, our forecasters now have tools to train and influence our own proprietary wave model.
Our expert forecasters actively correct surf height readings and ratings up to 7 days into the future. They influence the 6 am, 12 pm and 6 pm slots, and the model corrects everything else accordingly.
But don't models "have their moments"?! They do, so we add guardrails. The highest rating that can be applied automatically is FAIR TO GOOD; only our forecasters can make the GOOD and EPIC calls! We also make sure no bogus ratings appear between two consecutive human calls, i.e. if a human said it was FAIR at 6 am, and then FAIR again at 12 pm, the automated ratings in between won't suddenly spike higher.
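The guardrails around automated ratings can be sketched roughly as follows. This is a minimal illustration, not Surfline's actual implementation: the exact seven-step scale is an assumption (the article names VERY POOR, FAIR, FAIR TO GOOD, GOOD and EPIC explicitly; the intermediate steps are inferred), and the function name is invented.

```python
RATINGS = ["VERY POOR", "POOR", "POOR TO FAIR", "FAIR",
           "FAIR TO GOOD", "GOOD", "EPIC"]
AUTO_CAP = RATINGS.index("FAIR TO GOOD")  # only humans can issue GOOD or EPIC

def constrain_auto_rating(auto_idx, prev_human_idx=None, next_human_idx=None):
    """Cap an automated rating, and keep it between bracketing human calls."""
    idx = min(auto_idx, AUTO_CAP)
    if prev_human_idx is not None and next_human_idx is not None:
        lo = min(prev_human_idx, next_human_idx)
        hi = max(prev_human_idx, next_human_idx)
        idx = max(lo, min(idx, hi))  # no bogus spikes between two human calls
    return RATINGS[idx]
```

For example, if a human called FAIR at both 6 am and 12 pm, an automated FAIR TO GOOD at 9 am would be clamped back down to FAIR.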
How are automated ratings calculated? Machine learning is used to combine years of human-backed surf ratings with output from our proprietary global wave model, LOTUS. Right now we're crunching wave height, wind direction and wind speed. As we continue to develop this rating, more components will be introduced, namely tides, swell period and swell direction.
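As a rough illustration of "learning" from human-backed ratings, here is a minimal nearest-neighbour sketch over the three components mentioned (wave height, wind direction, wind speed). Surfline's real model is far more sophisticated; the function name, distance weights and example data are all invented for illustration.

```python
import math

def rate_by_nearest_neighbor(conditions, labeled_history):
    """Return the human rating attached to the most similar past snapshot.

    conditions: (wave_height_ft, wind_dir_deg, wind_speed_kt)
    labeled_history: list of ((height, dir, speed), rating) pairs
    """
    def distance(a, b):
        dh = a[0] - b[0]
        # wind direction is circular: 350 deg and 10 deg are 20 deg apart
        dd = min(abs(a[1] - b[1]), 360 - abs(a[1] - b[1])) / 180.0
        ds = (a[2] - b[2]) / 10.0
        return math.sqrt(dh * dh + dd * dd + ds * ds)

    return min(labeled_history, key=lambda obs: distance(conditions, obs[0]))[1]
```

Given a history of rated snapshots, new conditions inherit the rating of the closest match, which is the simplest version of the idea that a snapshot of conditions plus a human rating is a training example.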
How can ratings be improved? To account for the unique nature of each spot, we introduce "best possible" limits for the components that inform ratings. At spots where humans have been directly issuing ratings, "best possible" limits will be mostly accounted for through the "learning" process itself.
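One way to picture a "best possible" limit is as a per-spot ceiling on each component's contribution: conditions beyond a spot's ceiling can't score any higher than the ceiling itself. The spot profile and function below are hypothetical, purely to show the shape of the idea.

```python
# Hypothetical per-spot "best possible" ceilings for rating components
SPOT_LIMITS = {"wave_height_ft": 6.0, "wind_speed_kt": 25.0}

def component_score(name, value):
    """Score a raw reading on a 0-1 scale, capped at the spot's ceiling.

    A 9 ft reading at a spot whose best possible height is 6 ft scores
    no higher than a 6 ft reading would.
    """
    best = SPOT_LIMITS[name]
    return min(value, best) / best
```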
Where we haven't had human forecasters, we have a generic machine-learning model using global logic. Your feedback and experience with a location can help us improve the ratings' accuracy for your local forecasts.
Biggest difficulties? Before this update, optimal swell/wind conditions were highlighted with yellow bars. The yellow bars were more discreet, so small errors generally went unnoticed. Now our optimal swell/wind data directly informs the ratings. Where the offshore/onshore direction we've assigned is anything short of spot-on, a FAIR TO GOOD can easily become VERY POOR.
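This sensitivity to the assigned offshore direction can be sketched with a simple angular comparison. The function, the 45-degree tolerance and the linear fall-off are all assumptions made for illustration, not Surfline's actual formula.

```python
def wind_quality(wind_from_deg, offshore_from_deg, tolerance_deg=45.0):
    """1.0 when wind blows straight offshore, falling to 0.0 at the tolerance.

    A modest error in the offshore direction assigned to a spot shifts
    this score substantially, which is how a FAIR TO GOOD can slide
    toward VERY POOR.
    """
    diff = abs(wind_from_deg - offshore_from_deg) % 360
    diff = min(diff, 360 - diff)  # shortest angular distance
    return max(0.0, 1.0 - diff / tolerance_deg)
```

With these numbers, a 30-degree error in the assigned offshore direction already cuts the wind score by two thirds.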
For spots that had limited (or no) ratings directly applied by our forecasters, there remain countless swell/wind combinations that a human hasn't yet assigned a rating for. The automated rating won't be as accurate in these instances! Your feedback could be of great use to help us train the model.
How to report an issue with the forecast/rating?
Submit a request! Be sure to select "Forecasting Accuracy" under the Request Type so your message reaches the right place.
What do we need? The machine learns from a snapshot of conditions and a rating. Therefore, we can use your feedback in a similar way.
- A screenshot of the forecast, to help us see the conditions informing the rating you're reading.
- We need to know the exact date/time you're checking the surf in order to ingest it. Without this, there is very little we can do.
- Some kind of supporting evidence will fast-track your feedback and help us use it reliably. Example: video, photo taken at spot, cam screenshot, Rewind clip, Sessions clip...
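Putting the checklist above together, a useful feedback report is essentially a labeled snapshot. This record type is hypothetical (Surfline's internal format isn't public); it just shows which fields make feedback ingestible.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class RatingFeedback:
    spot: str
    observed_at: datetime        # exact date/time is required for ingestion
    forecast_screenshot: str     # shows the conditions informing the rating
    suggested_rating: str
    evidence: list = field(default_factory=list)  # photos, cam/Rewind clips

    def is_ingestible(self):
        # Without a timestamp and the forecast context, little can be done
        return bool(self.observed_at and self.forecast_screenshot)
```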
Ingestion of inaccurate information would be counterproductive, so bear with our support staff while they confirm all the necessary specifics before logging your feedback with the forecast engineers.