I just wanted to add a comment here. Multiple members of the Channel 9 Storm Team have reached out to me to discuss this all further. Additionally, they posted a second post on their Facebook page to clarify their statement. This is why I started my blog post below by praising the Storm Team meteorologists and expressing my great respect for them!
I've considered taking this post down altogether based on the conversations I've had with the Storm Team. I've decided to leave it up, however, because I do feel that public interpretation of forecasts is a crucial topic, and I think this whole situation can shed some light on it for meteorologists and the public alike. Forecasting is difficult enough. Expressing the forecast to the public is another thing entirely.
Original Post:
Every once in a while, I come across a forecast that, quite honestly, makes my skin crawl. I try not to be judgmental about them; I've made plenty of bone-headed forecasts myself. At the same time, though, sometimes something just needs to be said.
Let me preface my forthcoming rant with the following: I have a great deal of respect for the meteorologists at News Channel 9. I have met a number of them and they are great people. Their forecasts are typically very good and they are extremely intelligent. They do a great job.
Just not this time. This isn't meant to be an attack on them in any way, but the record needs to be set straight. It is February 17th. Historically, we can see significant snow well into April (see April 16-17, 2007, for example). I cannot fathom how anyone can declare winter over at this stage, but that is what Channel 9 did:
Now, there are three possibilities for what can actually happen. We can be above average, below average, or right at normal. So in the white, "equal chance" areas, the probability of each of those three outcomes is about 33%. Each shade of orange/blue represents a greater chance of seeing warmer/colder temperatures. The Climate Prediction Center labels each shade with a percent value. I have marked the 40% line, as it was a bit hard to see. Areas inside that 40% line, that is, in the darker orange shade, have a 40% or greater chance of seeing above normal temperatures. The lighter orange shade, which includes Central New York, is predicted to have a 33-40% chance of seeing above normal temperatures. Given that the 33% line is very close to us, it is probably more like a 33-35% chance.
That means our chance of seeing above normal temperatures is only about 2 percentage points higher than the climatological 33%.
Since the chance of a warm March, April and May is up by only about 2 percentage points, and the three probabilities must add up to 100%, we can assume the chance of a colder than normal period is running at about 31%, with a near normal period at about 33%. Let's look at this forecast in another way:
Honestly, can you see much of a difference? Yes, we have a higher chance of above normal temperatures than anything else. But that edge is tiny. If these were medical test results, would we want our doctors definitively declaring that the test was conclusive one way or another with a 35-33-31 distribution? I think not.
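To put numbers on just how small this shift is, here is a quick sketch. The 35/33/31 split is my own rough reading of the map, not an official CPC value:

```python
# Rough comparison of the tilted outlook vs. pure climatology.
# The 35/33/31 split is my own estimate from the map, not an official CPC number.

climatology = {"above": 1 / 3, "near": 1 / 3, "below": 1 / 3}
outlook = {"above": 0.35, "near": 0.33, "below": 0.31}  # should sum to ~1

for category in climatology:
    shift = (outlook[category] - climatology[category]) * 100
    print(f"{category} normal: {outlook[category] * 100:.1f}% "
          f"({shift:+.1f} points vs. climatology)")

# above normal: 35.0% (+1.7 points vs. climatology)
# near normal:  33.0% (-0.3 points vs. climatology)
# below normal: 31.0% (-2.3 points vs. climatology)
```

In other words, the "warm" tilt amounts to less than two points over an outright coin-toss among the three categories.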
The other thing to consider is how accurate these probability forecasts are in the first place. The following is a graphic the Climate Prediction Center publishes along with their seasonal forecasts. It looks confusing, but I will explain it:
First, some background. There are numerous ways to calculate forecast accuracy. The Heidke Skill Score (HSS) is just one of those ways. An HSS of 100 means the forecast was perfect. An HSS of -50 is the worst possible forecast, and an HSS of 0 means the forecast was as good as a random guess.
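For the curious, here is a minimal sketch of how a three-category Heidke Skill Score works; the forecast counts are made up purely for illustration, and the CPC's actual verification is more involved:

```python
# Minimal Heidke Skill Score (HSS) sketch for a three-category forecast
# (above / near / below normal). The example counts are made up for
# illustration only; CPC's real verification procedure is more involved.

def heidke_skill_score(num_correct, num_total, num_categories=3):
    """HSS scaled so 100 = perfect, 0 = random guessing, -50 = every forecast wrong."""
    expected_by_chance = num_total / num_categories
    return 100 * (num_correct - expected_by_chance) / (num_total - expected_by_chance)

print(heidke_skill_score(100, 100))  # 100.0 -- perfect forecast
print(heidke_skill_score(0, 100))    # -50.0 -- every forecast wrong
print(heidke_skill_score(40, 100))   # 10.0  -- barely better than guessing
```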
We are looking at the red lines, which are labeled 'non-CL'. These 'non-CL' lines show the skill of forecasts in which an actual call was made for colder than normal or warmer than normal conditions. The year-to-year swings in these red lines are huge. You will notice a horizontal red line at 20: that is the average HSS for non-CL predictions from 1995-2008. Remember, a perfect forecast is 100 and a random guess is 0. A 20 is an extremely poor score and shows just how difficult it is to predict temperatures for the next three months.
Putting these two things together, we have a very slight tilt toward warmer weather, based on a forecast whose skill is much closer to a random guess than to a perfect forecast.
With this knowledge, can you see why this upsets me? Meteorologists have an unspoken contract with the public to give accurate, informative forecasts. We have to do this to build trust between the meteorologist and the public. Why? Because when a tornado is bearing down on your neighborhood, or a major snow storm is on the way, or anything else that threatens lives and property, we need people to listen to us. Way too often, meteorologists, particularly in the media, betray this trust with garbage statements like this.
It needs to stop.
Now, I commented on that Facebook post, and the response I got was "our forecasts are based on more than just what we are showing here- this graphic just happens to sum things up nicely." My argument is that 1) the graphic really shows nothing, as I explained previously, and 2) there is no expression of uncertainty or of these other factors. All that is said is 'winter is over.' I'm not against long-range forecasting. I am against long-range forecasts that don't express the extreme uncertainty inherent in them. And this is one of the worst offenders I've seen.
Might they end up being right? Sure. They might. If they are, was it skill or luck? I'll let you be the judge.