Iirc, at the last Olympics, Omega paired a high-frequency linear display with their finish-line strip cameras. Regular cameras saw a flashing line, but the backdrop to photo finishes was an Omega logo. Very subtle, but impressive to pull off.
syntaxing · 13m ago
Fun read! I used to work in sensor calibration, and most people take for granted how much engineering goes into making phones take good photos. There’s a nontrivial amount of math and computational photography behind the modern phone camera.
IMO the denoising looks rather unnatural and emphasizes the remaining artifacts, especially color fringe around details. Personally I'd leave that turned off. Also, with respect to the demosaic step, I wonder if it's possible to implement a version of RCD [1] for improved resolution without the artifacts that seem to result from the current process.
Yeah I actually have it disabled by default since it makes the horizontal stripes more obvious and it's also extremely slow. Also, I found that my vertical stripe correction doesn't work in all cases and sometimes introduces more stripes. Lots more work to do.
As for RCD demosaicing, that's my next step. The color fringing is due to the naive linear interpolation for the red and blue channels. But, with the RCD strategy, if we consider that the green channel has full coverage of the image, we could use it as a guide to make interpolation better.
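To illustrate the green-as-guide idea (a 1-D sketch, not RCD itself; the sample positions and pixel values are made up): interpolating the chroma difference R − G and adding the full-resolution green back is exact wherever chroma is locally constant, even across a hard luminance edge, which is exactly where naive interpolation fringes.

```python
import numpy as np

def interp_red_naive(idx, red_vals, n):
    # plain linear interpolation of the sparse red samples
    return np.interp(np.arange(n), idx, red_vals)

def interp_red_guided(idx, red_vals, green_full):
    # interpolate the chroma difference R - G instead, then add G back;
    # green is known at every pixel, so edges shared by both channels survive
    diff = red_vals - green_full[idx]
    return np.interp(np.arange(len(green_full)), idx, diff) + green_full

# a hard luminance edge with constant chroma (R = G + 10)
green = np.array([0, 0, 0, 0, 100, 100, 100, 100], dtype=float)
red_true = green + 10
idx = np.arange(0, 8, 2)          # red sampled only at even pixels

naive = interp_red_naive(idx, red_true[idx], 8)
guided = interp_red_guided(idx, red_true[idx], green)
# naive smears the edge (pixel 3 comes out 60 instead of 10);
# guided recovers the red channel exactly
```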
Cloudef · 1h ago
Yeah, I don't think the denoised result looks that good either.
anonu · 49m ago
Super cool. I wonder if you could reuse a regular 2-D CMOS digital camera sensor to the same effect. But now I realize your sensor is basically 1-D and has a 95 kHz sampling rate. At the same rate, a 4K sensor would produce way too much data to store, and you'd need to throw most of it away.
Must be somewhat interesting deciding on the background content, too.
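Rough numbers, for fun (the sensor height and bit depth here are guesses; only the 95 kHz figure comes from the comment above):

```python
line_px   = 2048            # assumed line-sensor resolution
line_rate = 95_000          # lines per second (from the comment above)
bpp       = 2               # assumed 2 bytes per pixel (padded 10/12-bit raw)

line_bw = line_px * line_rate * bpp           # ~0.39 GB/s, already hefty
area_bw = 3840 * 2160 * line_rate * bpp       # a 4K area sensor at the same rate
print(line_bw / 1e9, area_bw / 1e12)          # ~0.39 GB/s vs ~1.58 TB/s
```

So a full 4K frame at line-scan rates is roughly 4000x the data of the 1-D sensor, which is why you'd have to throw nearly all of it away.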
GlibMonkeyDeath · 3h ago
If you like this sort of thing, check out https://www.magyaradam.com/wp/ too. A lot of his work uses a line scan camera.
JKCalhoun · 3h ago
The video [https://www.magyaradam.com/wp/?page_id=806] blew my mind. I can only imagine he reconstructed the video by first reconstructing one frame's worth of slits, then shifting them over by one column and adding the next slit's data.
card_zero · 2h ago
It's neat that it captured the shadow of the subway train, too, which arrived just ahead of the train itself. This virtual shadow is thrown against a sort of extruded tube with the profile of the slice of track and wall that the slit was pointed at.
its-summertime · 2h ago
> Hmm, I think my speed estimation still isn’t perfect. It could be off by about 10%.
Probably would be worth asking a train driver about this, e.g. "where is a stretch with smooth track and constant speed?"
tecleandor · 1h ago
Maybe an optical flow sensor to estimate speed in real time?
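Another option without a separate flow sensor: some line-scan setups use two parallel scan lines a known distance apart and cross-correlate the two signals to find the transit delay, which gives the speed directly. A toy sketch (the sensor gap, line rate, and synthetic signal are all made-up assumptions):

```python
import numpy as np

def speed_from_two_lines(sig_a, sig_b, gap_m, line_rate_hz):
    """Estimate speed from two parallel line sensors gap_m apart.

    sig_a, sig_b: brightness of one pixel on each line, sampled over time.
    The same texture passes line A first, then line B; the cross-correlation
    peak gives the delay in samples."""
    a = sig_a - sig_a.mean()
    b = sig_b - sig_b.mean()
    corr = np.correlate(b, a, mode="full")
    lag = corr.argmax() - (len(a) - 1)      # samples by which B trails A
    return gap_m * line_rate_hz / lag

# synthetic check: lines 2 mm apart, 95k lines/s, train at 19 m/s
# -> delay of 0.002 * 95000 / 19 = 10 samples
rng = np.random.default_rng(0)
sig_a = rng.normal(size=400)
sig_b = np.roll(sig_a, 10)                  # B sees the texture 10 samples later
speed = speed_from_two_lines(sig_a, sig_b, gap_m=0.002, line_rate_hz=95_000)
```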
lttlrck · 1h ago
They have an amazing painterly quality. I'm not a huge train fan but I'd put some of these on my wall.
chrisjune · 1h ago
How to receive order using panda rider app
Retr0id · 58m ago
reading this is how I imagine it feels to be chatgpt
ncruces · 2h ago
That's a lot more than I thought I'd want to know about this, but I was totally nerd sniped. Great writeup.
j_bum · 4h ago
What a beautiful example of image processing. Great post
whartung · 3h ago
These are amazing images. I don't understand what's going on here, but I do like the images.
Etheryte · 3h ago
Imagine a camera that only takes pictures one pixel wide. Now make it take a picture, say, 60 times a second, and stitch every one-pixel-wide image together in order. That's what's happening here: it's a bunch of one-pixel-wide images ordered by time. The background stays still, since it's always the same area captured by that one pixel, resulting in the lines, but moving objects end up looking correct because they're spread out over time.
At first, I thought this explanation would make sense, but then I read back what I just wrote and I'm not sure it really does. Sorry about that.
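Maybe a tiny simulation reads better than the prose (everything here is made up: an 8-pixel-tall "camera" watching one fixed column of a toy scene while a bright block moves past):

```python
import numpy as np

T, H, W = 60, 8, 20        # time steps, sensor height, scene width
slit_x = 5                 # the one scene column the camera can see
out = np.zeros((H, T), dtype=int)

for t in range(T):
    # static background: each row has its own constant brightness
    scene = np.tile(np.arange(H)[:, None], (1, W))
    # a bright 2x2 object sliding left to right, one column per step
    x = t % W
    scene[3:5, x:x + 2] = 99
    out[:, t] = scene[:, slit_x]   # record this instant's 1-pixel-wide image

# rows of `out` away from the object are constant over time -> the "stripes";
# the object appears only in the columns recorded while it crossed the slit
```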
JKCalhoun · 3h ago
Yeah, like walking past a door that's cracked just a bit so you can see into an office only a slit. Now reconstruct the whole office from that traveling slit that you saw.
Very cool.
whartung · 2h ago
No, thank you. This was perfect. It completely explains where the train comes from and where the lines come from.
Lightbulb on.
Aha achieved. (Don’t you love Aha? I love Aha.)
kiddico · 3h ago
It made sense to me!
jeffbee · 4h ago
Okay I was stumped about how this works because it's not explained, as far as I can tell. But I guess the sensor array has its long axis perpendicular to the direction the train is traveling.
You can also get close in software. Record some video while walking past a row of shops. Use ffmpeg to explode the video into individual frames. Extract column 0 from every frame, and combine the columns into a single image, appending each extracted column to the right-hand side of your output image. You'll end up with something far less accurate than the images in this post, but still fun. It's also interesting to try scenes from movies. This technique maps time onto space in interesting ways.
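Once the frames are decoded, that recipe boils down to one array operation. A sketch assuming you've already loaded the video into a (T, H, W, C) array (e.g. via imageio or OpenCV; the decoding step is omitted here):

```python
import numpy as np

def slit_scan(frames: np.ndarray, column: int = 0) -> np.ndarray:
    """Stack one pixel column from each frame into a single image.

    frames: (T, H, W, C) array of decoded video frames.
    Returns an (H, T, C) image: frame t's chosen column becomes
    output column t.
    """
    return frames[:, :, column, :].transpose(1, 0, 2)

# stand-in for decoded video: 10 frames, 4x6 pixels, 3 channels
frames = np.arange(10 * 4 * 6 * 3).reshape(10, 4, 6, 3)
img = slit_scan(frames, column=2)
```

Since time becomes the horizontal axis, fast scenes come out compressed and slow scenes stretched, which is most of the fun with movie footage.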
eschneider · 3h ago
You use a single vertical line of sensors and resample "continuously". When doing this with film, the aperture is a vertical slit and you continuously advance the film during the exposure.
For "finish line" cameras, the slit is located at the finish line and you start pulling film when the horses approach. Since the exposure is continuous, you never miss the exact moment of the finish.
miladyincontrol · 3h ago
Line scan sensors are basically just scanners; heck, people make 'em out of scanners.
Usually the issue is they need rather still subjects, but in this case rather than the sensor doing a scanning sweep they're just capturing the subject as it moves by, keeping the background pixels static.
krackers · 1h ago
It only works for trains because the image of train at t+1 is basically image of train at time t shifted over by a few pixels, right? It doesn't seem like this would work to capture a picture of a human, since humans don't just rigidly translate in space as they move.
makeitdouble · 1h ago
If the human is running and isn't frantically shaking, it works decently. There are samples of horse-race finish-line pics in the article, and they look pretty good IMHO.
It falls apart when the subject is either static or moves its limbs faster than the whole subject moves (e.g. fist-bumping while slowly walking past the camera would screw it up).
Thanks, I added a section called "Principle of operation" to explain how it works.
blooalien · 3h ago
Absolutely fascinating stuff! Thank you so much for adding detailed explanations of the math involved and your process. Always wondered how it worked but never bothered to look it up until today. Reading your page pushed it beyond idle curiosity for me. Thanks for that. And thanks also to HN for always surfacing truly interesting reading material on a daily basis!
ansgri · 3h ago
What's your FPS/LPS in this setup? I've experimented with similar imaging with an ordinary camera, but LPS was limiting, and I know line-scan machine vision cameras can output some amazing numbers, like 50k+ LPS.
[1] https://github.com/LuisSR/RCD-Demosaicing
https://en.wikipedia.org/wiki/Slit-scan_photography#/media/F...