The whole README is heavily AI-edited (the final output is all by AI), and the worst thing is that the image diagrams seem to be generated (likely with 4o), see for example
That repo has more GitHub badges than a North Korean general has medals on their uniform...
voxmatt · 2h ago
lol!
judge123 · 2h ago
I'm dying to know though, what's the practical resolution like? Can it tell the difference between my cat and a bag I dropped, or is it more like "a blob moved over there"?
phoenixhaber · 1h ago
OK, let's say I'm making a robot spider in my garage, half the size of a Tesla and with as much horsepower, with Nvidia's new Jetson as the brain. If I use enough of these, can I replace a lidar package for autonomous control?
throw10920 · 1h ago
On one hand, the potential privacy invasions enabled by this technology (e.g. Xfinity, of course Comcast, a few months ago [1]) are pretty scary.
On the other hand, the technology seems potentially extremely useful. I've had an interest in pose estimation for many years, but doing it with normal cameras seems tricky to do reliably because of the possibility of visual occlusion (both from the body itself and from other objects). I'm curious to see if I can use this for something like tracking my posture while I use my computer, so I can avoid back pain later in life.
[1] https://news.ycombinator.com/item?id=44426726
What I want to know is whether you need multiple senders and receivers, or can you just run it on an ESP32 and have it visualize? Don't they usually need a sender and a receiver to make sense of it all?
ramity · 1h ago
I didn't see any reference to a sender, or to actively blasting RF from the same access point. I think the approach relies on other signal sources creating reflections, with a passively monitoring access point attempting to make sense of them.
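For intuition, here's a minimal sketch of that passive idea, assuming you already have a stream of CSI amplitude measurements from some capture tool. The array shapes, window size, and noise levels below are made up for illustration; this is not the repo's actual pipeline. Motion shows up because moving bodies perturb multipath reflections, which raises the variance of the channel estimates over time:

```python
import numpy as np

def motion_score(csi_amplitudes: np.ndarray, window: int = 50) -> np.ndarray:
    """Rolling variance of CSI amplitudes, averaged across subcarriers.

    csi_amplitudes: shape (num_packets, num_subcarriers), captured
    passively from traffic that nearby devices already generate.
    """
    scores = np.empty(len(csi_amplitudes) - window)
    for i in range(len(scores)):
        # Per-subcarrier variance over the window, averaged across subcarriers.
        scores[i] = csi_amplitudes[i:i + window].var(axis=0).mean()
    return scores

# Toy demo with synthetic data: a quiet channel, then motion-like jitter.
rng = np.random.default_rng(0)
quiet = 1.0 + 0.02 * rng.standard_normal((500, 64))
busy = 1.0 + 0.30 * rng.standard_normal((500, 64))
scores = motion_score(np.vstack([quiet, busy]))
print(scores[:3], scores[-3:])  # variance jumps in the second half
```

Going from that kind of coarse motion signal to 3D pose is where the heavy lifting (and the skepticism elsewhere in this thread) comes in.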
eig · 2h ago
Seems like it is based on this paper from CVPR 2024: https://aiotgroup.github.io/Person-in-WiFi-3D/
Frankly I'm shocked it's possible to do this with that level of resolution.
ramity · 2h ago
5GHz WiFi has a wavelength of ~6cm, and 2.4GHz ~12.5cm. Anything achieving finer resolution than that is the result of interferometry or a non-WiFi signal. This might not add much substance to the conversation, but it felt worth mentioning.
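Those numbers are just the free-space wavelength λ = c/f; a quick standalone sanity check:

```python
# Free-space wavelength: lambda = c / f
C = 299_792_458  # speed of light, m/s

for ghz in (2.4, 5.0):
    wavelength_cm = C / (ghz * 1e9) * 100
    print(f"{ghz} GHz -> {wavelength_cm:.1f} cm")
# 2.4 GHz -> 12.5 cm
# 5.0 GHz -> 6.0 cm
```

Sub-wavelength localization is possible with phase and interferometric tricks across subcarriers and antennas, but it degrades quickly in cluttered, multipath-heavy environments.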
echelon · 2h ago
I scrolled through two pages of badges and hit counters. I have to be honest, that makes me very scared to run the underlying code.
This is what 1998 felt like.
keyle · 1h ago
GitHub is the new GeoCities!
int0x29 · 2h ago
The UI looks like it was built by a Hollywood set designer
ramity · 2h ago
I'm interested, but also incredibly dubious; not because it seems impossible, but the opposite. On one hand, an open-source repo like this, taking an approach aimed at hackable extension, should be praised, but the "Why Built WiFi-3D-Fusion" section [0] gives me very, very bad vibes. Here are some excerpts I especially take issue with:
> "Why? Because there are places where cameras fail, dark rooms, burning buildings, collapsed tunnels, deep underground. And in those places, a system like this could mean the difference between life and death."
> "I refuse to accept 'impossible.'"
WiFi sensing is an established research domain that has long struggled with line-of-sight requirements, signal reflection, interference, etc. This repo has the guise of research, but it seems to omit the prior work of the field it resides in. It's one thing to detect motion or approximately track a connected device through space, but "burning buildings, collapsed tunnels, deep underground" are exactly the kinds of non-standardized environments where WiFi sensing performs especially poorly.
I hate to judge so quickly based on a readme, but I'm not personally interested in digging deeper or spinning up an environment. Consider this before aligning with my sentiment.
[0] https://github.com/MaliosDark/wifi-3d-fusion/blob/main/READM...
https://github.com/MaliosDark/wifi-3d-fusion/blob/main/docs/...
"Wayelet CSi tensas"
That makes me question the authenticity of the project.
We built this system at the UofT WIRLab back in 2018-19: https://youtu.be/lTOUBUhC0Cg
And here's a link to the paper: https://arxiv.org/pdf/2001.05842