Is There Bias in the Forecast, Part II

Last week's main question was Is There Bias in the Forecast?
It brought about a flurry of responses; my favorite being, "Is This a Trick Question?"
And this naturally led to more questions:
- What can be done to reduce bias and improve consistency among forecasters?
- What should the gentle reader (or consumer) of the forecast do with the knowledge that there is bias in the forecast?

~~~~~~~~~~~
A few thoughts on the first question - What can be done to reduce bias and improve consistency among forecasters?
A forecaster colleague of mine in Colorado insightfully asked, "Are the differences in danger ratings due to forecasters interpreting snowpack and weather differently or are they interpreting this stuff the same but interpreting the danger scale differently?"

"We", the avalanche centers, have guidance and definitions, and yet there are some architectural questions that exist:
- Do the forecasts convey more certainty than they should?
- Are the forecasts too granular? In other words, are forecasters trading accuracy for precision? Consider what goes into a forecast: a 24-petaled danger rose, up to 9 avalanche problems, a 1-5 likelihood scale, a 1-5 size scale, and a 24-petaled locator rose for each problem (see the rough arithmetic below). The more precise we try to be, the more likely we are to be wrong (is that new wind slab on a mid-elevation, southeast-facing slope really size 1 and touchy?).
Maybe. But in my limited data set (you saw one exercise last week), I found the biggest spread in (1) likelihood and (2) how forecasters represented location on the various problems' locator roses.
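To make the precision-versus-accuracy point concrete, here is a rough back-of-the-envelope tally, assuming 24 aspect/elevation petals per locator rose and the 1-5 likelihood and size scales mentioned above. The numbers are illustrative only, not any center's official forecast schema.

```python
# Back-of-the-envelope count of how many distinct fully-specified statements
# a single avalanche problem can make. Assumes a 24-petal locator rose
# (8 aspects x 3 elevation bands) plus 1-5 likelihood and 1-5 size scales.
petals = 24
likelihood_levels = 5
size_levels = 5

# Each petal is either shaded or not, so location alone allows 2**24 patterns.
location_patterns = 2 ** petals
per_problem = location_patterns * likelihood_levels * size_levels

print(f"Location patterns per problem: {location_patterns:,}")
print(f"Fully specified statements per problem: {per_problem:,}")
# With up to 9 problems in one forecast, the space of possible "precise"
# statements explodes -- and every added element is another place to be wrong.
```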
What I do know is that in response to last week's Meditation on Bias, I received two emails suggesting that machine-learning algorithms might help eliminate the bias...which led to this Aha moment for me:
If Intuition is nothing more than Pattern Recognition, then Artificial Intelligence has Intuition in spades.
This is not so much an argument for machine instead of mind (man) per se, but possibly one for machine AND mind (man). Recently I was introduced to an avalanche decision-making "app" that I felt held some promise. We shall see...
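As a thought experiment only, here is a minimal sketch of that machine-AND-mind idea, assuming a generic scikit-learn classifier and entirely synthetic, made-up weather/snowpack features. It is not any avalanche center's model, nor the app mentioned above.

```python
# Sketch: pattern recognition over past observations, using synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(seed=0)

# Hypothetical features per day: new snow (cm), wind speed (m/s),
# warming trend (deg C), days since the last persistent-slab avalanche.
X = rng.random((500, 4)) * np.array([60.0, 25.0, 10.0, 30.0])
# Hypothetical danger ratings (1=Low ... 5=Extreme) issued on those days.
y = rng.integers(1, 6, size=500)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X, y)

# "Today's" conditions -- also made up.
today = np.array([[35.0, 18.0, 2.0, 3.0]])
print("Model's suggested rating:", model.predict(today)[0])
# The forecaster weighs this suggestion against their own read of the
# snowpack: machine AND mind, not machine instead of mind.
```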

~~~~~~~~~~~~

A few thoughts on the second question - What should the gentle reader (or consumer) of the forecast do with the knowledge that there is bias in the forecast?
When I came back from the rabbit-hole that was On the Nature of Forecasting, and Why We Get It Wrong, I came away with a few takeaways for the public:
- The Forecast is Guidance, Not Gospel - the forecast must be verified.
- Choose Terrain in case you or the forecaster is wrong.
- Appreciate Uncertainty, especially with the Extra Caution (PWL) problems.
- Remember that the Forecast is only ONE of the ALPTRUTH clues (see the small sketch after this list).
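To put that last bullet in concrete terms, here is a small sketch assuming the familiar ALPTRUTh obvious-clues checklist (Avalanches, Loading, Path, Terrain trap, Rating, Unstable snow, Thaw). The observations below are placeholders, not real field data.

```python
# Sketch: the danger Rating is only one of seven ALPTRUTh clues.
# The True/False values are placeholders for what a party observes in the field.
clues = {
    "Avalanches in the last 48 hours": False,
    "Loading by new snow, wind, or rain": True,
    "Path: obvious avalanche path": True,
    "Terrain trap below the slope": False,
    "Rating of Considerable or higher": True,   # <- the forecast's contribution
    "Unstable snow (cracking, whumpfing)": False,
    "Thaw or rapid warming": False,
}

present = sum(clues.values())
print(f"{present} of {len(clues)} clues present")
# The forecast rating supplies exactly one of these seven lines; the other
# six come from your own observations of the day and the terrain.
```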
My colleague in Colorado had this to say:
"The forecast can be wrong or just different in the place that you are travelling. If you can't interpret the snowpack and avalanche hazard for yourself, stay out of avalanche terrain."

Special thanks to Patrick Fink, Gary Kuehn, and Jason Konigsberg.