
Forecasting Under Fire: Week 1

  • forecasting
  • bayesian
  • experiments

Goldman changed their oil forecast four times in six days last week. Four times. I want to know whether that's new information or noise management. So I'm running an experiment.

Eight weeks. Every Sunday I publish a point estimate for Brent crude seven days out, confidence intervals at 68% and 90%, and the scenario probabilities that generated them. The model assigns weights to five scenarios (Escalation, Stalemate, De-escalation, Demand Destruction, Black Swan), and the forecast is the probability-weighted outcome across them. Full methodology here. All of it public from day one. If you want to see how the numbers move in real time, the live dashboard is here.
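The probability-weighted forecast can be sketched in a few lines. The per-scenario price targets below are hypothetical placeholders (the post doesn't publish them), so the result won't match the published point estimate; only the scenario names and probabilities come from the forecast itself.

```python
# Scenario-weighted point estimate. Probabilities are from the Week 1
# forecast; the price targets are HYPOTHETICAL illustrations.
scenarios = {
    "Escalation":         (0.45, 145.0),  # (probability, assumed Brent target)
    "Stalemate":          (0.25, 100.0),
    "De-escalation":      (0.15, 85.0),
    "Demand Destruction": (0.10, 78.0),
    "Black Swan":         (0.05, 160.0),
}

# Sanity check: scenario probabilities must sum to 1.
assert abs(sum(p for p, _ in scenarios.values()) - 1.0) < 1e-9

# The forecast is the expectation of price over the scenario distribution.
point_estimate = sum(p * price for p, price in scenarios.values())
print(f"${point_estimate:.2f}")
```

Swap in your own targets and weights; the structure is the whole model.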

I should be honest about why this is terrifying. I'm not an oil analyst. I have no proprietary data. What I do have is a willingness to be wrong in public and to document exactly how I got there. If I score worse than a naive baseline (literally just using last week's price, no model at all), I'll say so. No smoothing, no framing the errors away. The whole point is to find out what I actually believe versus what I'm performing.
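The baseline check described above is simple enough to write down. A minimal sketch, using absolute error as the score; the "actual" settlement price here is a made-up illustration, not a real outcome:

```python
# Compare the model's error against a naive forecast that just carries
# the last observed price forward. Absolute error is the scoring rule
# assumed here; the actual settlement price is HYPOTHETICAL.
def abs_error(forecast: float, actual: float) -> float:
    return abs(forecast - actual)

model_forecast = 120.16  # Week 1 published point estimate
naive_forecast = 98.91   # last observed Brent price, carried forward
actual = 112.40          # illustrative settlement on the target date

model_err = abs_error(model_forecast, actual)
naive_err = abs_error(naive_forecast, actual)
print("model beats naive" if model_err < naive_err else "naive wins")
```

Run this against the real settlement each week and the "no smoothing" promise becomes mechanical rather than a matter of willpower.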

Week 1 opens with Escalation at 45%. Not because I want it to be, but because that's where the evidence points and anchoring to hope is a terrible way to start a forecasting experiment.

Week 1 Forecast

Published 15 March 2026 at 13:03 UTC. Target date 20 March.

  • Point Estimate: $120.16
  • 68% Confidence Interval: $85.11 – $138.08
  • 90% Confidence Interval: $71.21 – $145.15
  • Current Price (Brent): $98.91

Scenario Probabilities:

  • Escalation: 45%
  • Stalemate: 25%
  • De-escalation: 15%
  • Demand Destruction: 10%
  • Black Swan: 5%

The data lives on GitHub. If you think my scenario weightings are wrong, run your own. That's the point.