The Pipeline
Every 10 minutes, an AWS Lambda function fetches the latest panorama from the Space Needle webcam and crops it down to the region where Mount Rainier appears. That cropped image goes through two classification models in sequence:
Frame State
Is this image usable? The camera can drift off-target, there may be heavy blur, or it can simply be dark out. This model filters those cases so the visibility model only sees clean images.
Visibility
If the frame is good, can we see the mountain? The model classifies whether Rainier is fully visible, partially obscured by clouds, or completely hidden.
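The sequencing of the two models can be sketched as a simple gate. This is a minimal illustration, not the project's actual code: the label names and the shape of the model outputs are assumptions.

```python
# Minimal sketch of the two-stage gate. Models are passed in as plain
# callables; the real classifiers and their label names may differ.
def classify(image, frame_state_model, visibility_model):
    """Run the frame-state check first; only clean frames reach visibility."""
    frame = frame_state_model(image)  # e.g. "ok", "off_target", "dark", "blurry"
    if frame != "ok":
        # No point asking about the mountain in an unusable frame.
        return {"frame_state": frame, "visibility": None}
    return {"frame_state": "ok", "visibility": visibility_model(image)}
```

Injecting the models as arguments keeps the gate trivially testable with stubs, which matters when the real models are heavyweight.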
The results, along with the cropped image, are stored in S3 and served through CloudFront. The website pulls the latest data and displays it on the homepage.
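Put together, one run of the job follows a fetch → crop → classify → store shape. The sketch below is illustrative only: the crop coordinates are invented, and fetching, classification, and storage are injected as placeholders standing in for the webcam download, the two models, and the S3 upload.

```python
# Rough skeleton of the per-run job. The crop box is illustrative, and
# the collaborators (fetch, classify, store) are placeholders, not the
# project's real code.
import numpy as np

RAINIER_BOX = (400, 800, 3200, 3800)  # (top, bottom, left, right), illustrative

def crop_rainier(panorama: np.ndarray) -> np.ndarray:
    """Slice the fixed panorama region where Mount Rainier appears."""
    top, bottom, left, right = RAINIER_BOX
    return panorama[top:bottom, left:right]

def run_once(fetch_panorama, classify, store):
    """fetch -> crop -> classify -> store, with each step injected."""
    panorama = fetch_panorama()   # decoded panorama as an image array
    crop = crop_rainier(panorama)
    result = classify(crop)       # frame-state + visibility labels
    store(crop, result)           # e.g. S3 upload served via CloudFront
    return result
```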
The Alignment Problem
The webcam is fixed to the top of the Space Needle, but the panorama's directional alignment is not always consistent and can drift, sometimes significantly. The orange box in these images shows where the visibility model's mountain-specific crop region falls. You can see that, by default, Mount Rainier falls well outside the tighter mountain crop box:
Earlier versions used the wide display crop for visibility training rather than a narrow mountain-specific crop, which meant the visibility model was learning from the surrounding sky and cityscape, not just the mountain itself. The current version corrects for camera drift by aligning each panorama against a reference image before extracting a tight crop around Mount Rainier.
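The post doesn't specify how the alignment against the reference image works internally. Phase correlation is one standard way to estimate a translational shift between two frames, sketched here with NumPy; treat it as an illustration of the idea, not the project's implementation.

```python
# Phase correlation: estimate how far a grayscale frame has drifted
# relative to a reference image. Illustrative only; the project's actual
# alignment method may differ.
import numpy as np

def estimate_shift(reference: np.ndarray, frame: np.ndarray) -> tuple[int, int]:
    """Return the (dy, dx) drift of `frame` relative to `reference`.
    Rolling `frame` by the negative of this shift re-aligns it."""
    f = np.fft.fft2(reference)
    g = np.fft.fft2(frame)
    cross = g * np.conj(f)
    cross /= np.abs(cross) + 1e-9        # keep phase only
    corr = np.fft.ifft2(cross).real      # sharp peak at the shift offset
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = reference.shape
    if dy > h // 2:                      # wrap large offsets to negative shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

Once the shift is known, the tight Rainier crop can be taken at the corrected coordinates instead of the nominal ones.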
The impact was significant: of roughly 9,400 images previously classified as "Off-Target" by the old model, alignment recovered over 80% of them. Nearly 1,800 of those turned out to have the mountain visible.
FAQ
How were the models trained?
By hand! Thousands of images were manually labeled across both the frame-state and visibility categories. To speed this up, I built an admin rapid-labeling page with hotkeys and background label submission. It also lets me filter images by the previous model's confidence, classification, and other attributes, so I can target the images where the model is least sure and iteratively refine the training data for greater accuracy.
Why are there two classifiers instead of one?
The webcam doesn't always produce a clean image. The camera can point off-target, there may be too much fog to see, or it can simply be dark out. The frame-state classifier catches these cases first so the visibility model only runs on images where a meaningful classification is possible. Without this step, a dark nighttime image might be classified as "not visible" when really there's just nothing to see.
The image isn't updating! Is the tracker broken?
Probably not. The webcam source often publishes images every 20 minutes instead of every 10. It also occasionally goes down, perhaps for maintenance or technical issues, and the tracker can only process what's available. If the latest image is more than 10–20 minutes old, the camera is likely temporarily offline. The tracker will pick back up automatically when new images appear.
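The staleness heuristic above amounts to a simple age check. A rough sketch, with the 20-minute threshold mirroring the webcam's slower publishing cadence:

```python
# Rough sketch of the staleness heuristic; the threshold is illustrative.
from datetime import datetime, timedelta, timezone

def camera_status(latest_image_time: datetime, now: datetime) -> str:
    age = now - latest_image_time
    if age <= timedelta(minutes=20):
        return "ok"              # within the normal 10-20 minute cadence
    return "likely offline"      # tracker resumes when new images appear
```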
About Me
I'm Jacob Knight, a Software Development Engineer based in Seattle with experience in distributed systems. This was a side project to explore ML, built during the Big Dark when I needed something to look forward to. You can find me on LinkedIn.