We’ve tested this in our production environment on mobile robots (think quadcopter and ground UGV) and it works really nicely
bevenky · 8h ago
Is this OSS?
fc417fc802 · 8h ago
Unclear exactly what you're asking. The linked paper describes an algorithm (patent status unclear). That paper happens to link to a GPL-licensed implementation whose authors explicitly solicit business licensing inquiries. The related model weights are available on Hugging Face (license unclear); notably, the HF readme contains conflicting claims: the metadata block specifies Apache while the body specifies GPL.
The paper says it is based on YOLOv8, which uses the even stricter AGPL-3.0. That means you can use it commercially, but all derived code (even in a cloud service) must be made open source as well.
kouteiheika · 6h ago
They probably mean the algorithm, but nevertheless the YOLO models are relatively simple, so if you know what you're doing it's pretty easy to reimplement them from scratch and avoid the AGPL license for the code. I did so once for the YOLOv11 model myself, so I assume any researcher worth their salt would be able to do the same if they wanted to commercialize a similar architecture.
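For a sense of scale, the detection heads really are small. Here is a toy single-scale, anchor-free head in PyTorch, purely illustrative and not the actual YOLOv8/YOLOv11 architecture (those add multi-scale feature fusion, distribution-focal-loss box regression, and a tuned backbone):

    import torch
    import torch.nn as nn

    class ConvBlock(nn.Module):
        """Conv + BatchNorm + SiLU, the basic unit most modern YOLO variants build on."""
        def __init__(self, c_in, c_out, k=3, s=1):
            super().__init__()
            self.conv = nn.Conv2d(c_in, c_out, k, s, padding=k // 2, bias=False)
            self.bn = nn.BatchNorm2d(c_out)
            self.act = nn.SiLU()

        def forward(self, x):
            return self.act(self.bn(self.conv(x)))

    class ToyDetectionHead(nn.Module):
        """Anchor-free head: per grid cell, predict 4 box offsets plus class logits."""
        def __init__(self, c_in, num_classes):
            super().__init__()
            self.stem = ConvBlock(c_in, c_in)
            self.box = nn.Conv2d(c_in, 4, 1)            # (l, t, r, b) distances from the cell
            self.cls = nn.Conv2d(c_in, num_classes, 1)  # per-class logits

        def forward(self, feats):
            x = self.stem(feats)
            return self.box(x), self.cls(x)

    # Shapes only: a 640x640 image downsampled 32x gives a 20x20 grid of predictions.
    head = ToyDetectionHead(c_in=256, num_classes=80)
    box, cls = head(torch.randn(1, 256, 20, 20))
    print(box.shape, cls.shape)  # torch.Size([1, 4, 20, 20]) torch.Size([1, 80, 20, 20])

The hard part in practice is matching the training recipe (label assignment, losses, augmentation), not the network definition.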
fc417fc802 · 7h ago
I assume they refer to the academic basis for the algorithm rather than the implementation itself.
Slightly unrelated, how does AGPL work when applied to model weights? It seems plausible that a service could be structured to have pluggable models on the backend. Would that be sufficient to avoid triggering it?
jimmydoe · 6h ago
Does the GPL still mean anything if you can ask an AI to read code A and reimplement it as code B?
fc417fc802 · 6h ago
The standard for humans is a clean-room reimplementation, so I guess you'd need two AIs: one to translate A into a list of requirements and one to translate that list back into code.
But honestly by the time AI is proficiently writing large quantities of code reliably and without human intervention it's unclear how much significance human labor in general will have. Software licensing is the least of our concerns.
msgodel · 6h ago
If that's legal then copyright is meaningless, which was the original intention of the GPL.
Now that we've seen the use of drones in the Ukraine war, 10k+ drone light shows, Waymo's autonomous cars, and tons of AI advancements in signals processing and planning, this seems obvious.
AndrewKemendo · 5h ago
You’re right to be terrified
jiggawatts · 8h ago
The truly scary part is that it’s a straightforward evolution from this to 1000 fps hyperspectral sensors.
There will be no hiding from these things and no possibility of evasion.
They’ll have agility exceeding champion drone pilots and be too small to even see or hear until it’s far too late.
Life in the Donbass trenches is already hell. We’ll find a way to make it worse.
ed · 10h ago
Neat. Wonder how this compares to Segment Anything (SAM), which also does zero-shot segmentation and performs pretty well in my experience.
RugnirViking · 9h ago
YOLO is way faster. We used to run both, with YOLO finding candidate bounding boxes and SAM segmenting just those (a rough sketch of that setup is below).
For what it's worth, YOLO has been a standard in image processing for ages at this point, with dozens of variations on the algorithm (yolov3, yolov5, yolov6, etc) and this is yet another new one. Looks great tho
SAM wouldn't run under 1000ms per frame for most reasonable image sizes
We used MobileSAM because of this; it was about 250ms on CPU. Useful for our use case.
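A minimal sketch of that two-stage setup (YOLO proposes boxes, SAM only segments inside them), assuming the ultralytics and segment_anything packages; the file paths and checkpoint choices are placeholders:

    import cv2
    import numpy as np
    from ultralytics import YOLO  # AGPL-licensed, as discussed elsewhere in the thread
    from segment_anything import sam_model_registry, SamPredictor

    detector = YOLO("yolov8n.pt")  # any YOLO detection checkpoint
    sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")  # placeholder path
    predictor = SamPredictor(sam)

    frame_bgr = cv2.imread("frame.jpg")
    image_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    predictor.set_image(image_rgb)  # run SAM's heavy image encoder once per frame

    masks = []
    for box in detector(frame_bgr)[0].boxes.xyxy.cpu().numpy():
        # Prompt SAM with each candidate box so it only segments those regions.
        m, _, _ = predictor.predict(box=box.astype(np.float32), multimask_output=False)
        masks.append(m[0])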
ipsum2 · 9h ago
SAM doesn't do open vocabulary, i.e. it segments things without knowing the name of the object, so you can't ask it to "highlight the grapes"; you have to give it an example of a grape first.
This uses GroundingDINO, a separate model, for the open-vocabulary part. Useful nonetheless, but it means you're running a lot of model inference for a single image.
greesil · 8h ago
I've got big plans for this for an automated geese scaring system
mattlondon · 6m ago
Same here but for urban foxes.
We had motion-triggered sprinklers that worked great, but they did not differentiate between foxes and 4-year-old children if I forgot to turn them off, haha.
We have more or less 360-degree CCTV coverage of the garden via 7 or 8 cameras, so the rough plan is to use basic pixel-level motion detection to find frames with something happening, fire just those frames off for inference (rather than streaming all of the video feeds through the algorithm 24/7), and then turn the sprinklers on. I'm hoping for about 500ms end-to-end latency from detection to sprinklers activating, to cement the "causality" of stepping into the garden and ~immediately getting soaked and scared in the foxes' brains.
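A rough sketch of that motion gate with OpenCV background subtraction; the camera URL, the detector call, and the sprinkler trigger are all placeholders for whatever you end up using:

    import cv2

    cap = cv2.VideoCapture("rtsp://camera-1/stream")  # placeholder camera URL
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=32)
    MIN_MOTION_PIXELS = 1500  # tune per camera and image size

    def looks_like_fox(frame) -> bool:
        # Placeholder: send the frame to the open-vocabulary detector and check labels.
        return False

    def fire_sprinklers():
        # Placeholder: GPIO / smart-plug / home-automation call goes here.
        pass

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Cheap per-frame gate: only pay for model inference when pixels actually change.
        motion_mask = subtractor.apply(frame)
        if cv2.countNonZero(motion_mask) > MIN_MOTION_PIXELS and looks_like_fox(frame):
            fire_sprinklers()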
zachflower · 7h ago
Funnily enough, that was my computer science capstone project back in 2010!
I don’t know if our project sponsor ever got the company off the ground, but the basic idea was an automated system to scare geese off of golf courses without also activating in the middle of someone’s backswing.
greesil · 6h ago
If someone could sell it for $100, they'd make some serious money. The birds are fouling my pool, and the plastic owl does nothing. Right now I'm thinking it should make a loud noise, or launch a tennis ball randomly. The best part is I can have it disarm if it sees a person.
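The disarm-if-person part is just label filtering on whatever the detector returns; a minimal sketch in plain Python, where the (label, confidence) format is an assumption about the detector's output:

    # Each detection is (label, confidence), e.g. from an open-vocabulary detector.
    def should_fire(detections, threshold=0.5):
        """Fire only when a bird is confidently seen and no person is in frame."""
        person = any(lbl == "person" and conf >= threshold for lbl, conf in detections)
        goose = any(lbl in ("goose", "bird") and conf >= threshold for lbl, conf in detections)
        return goose and not person

    print(should_fire([("goose", 0.9)]))                   # True
    print(should_fire([("goose", 0.9), ("person", 0.8)]))  # False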
silentsea90 · 9h ago
Q: Do any of you know of models that do well at deleting objects from an image, i.e. inpainting with a mask where the intention is to replace the masked region with background? Whatever I've tried so far leaves a smudge (e.g. LaMa).
GaggiX · 9h ago
There are plenty of Stable Diffusion-based models that are capable of inpainting; of course, they are heavier to run than LaMa.
silentsea90 · 7h ago
My question wasn't about inpainting in general but about eraser inpainting models. Most inpainting models replace objects instead of erasing them, even though the prompt states an intent to delete.
jokethrowaway · 6h ago
You can build a pipeline where you use:
GroundingDino (description to object detection) -> SAM (segmenting) -> Stable Diffusion model (inpainting; I do mainly real photos so I like to start with realisticVisionV60B1_v51HyperVAE-inpainting and then swap if I have some special use case). A rough sketch of this flow is below.
For higher quality at a higher cost of VRAM, you can also use Flux.1 Fill to do inpainting.
Lastly, Flux.1 Kontext [dev] is going to be released soon and it promises to replace the entire flow (and with better prompt understanding). HN thread here: https://news.ycombinator.com/item?id=44128322
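A minimal sketch of the detect-then-erase idea; to keep it short this swaps GroundingDINO + SAM for OWL-ViT via transformers plus a plain rectangular mask, and the checkpoint names and prompt are assumptions rather than recommendations:

    import torch
    from PIL import Image, ImageDraw
    from transformers import pipeline
    from diffusers import StableDiffusionInpaintPipeline

    image = Image.open("photo.jpg").convert("RGB")

    # Text-prompted detection (OWL-ViT here; GroundingDINO would give you boxes the same way).
    detector = pipeline("zero-shot-object-detection", model="google/owlvit-base-patch32")
    detections = detector(image, candidate_labels=["a trash can"])

    # Rectangular mask over every hit; the SAM route would give tighter masks.
    mask = Image.new("L", image.size, 0)
    draw = ImageDraw.Draw(mask)
    for det in detections:
        b = det["box"]
        draw.rectangle([b["xmin"], b["ymin"], b["xmax"], b["ymax"]], fill=255)

    inpaint = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
    ).to("cuda")
    # Prompting for background rather than a new object is what does the "erasing".
    result = inpaint(prompt="empty background", image=image, mask_image=mask).images[0]
    result.save("erased.jpg")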
saithound · 6h ago
Needs (2024) in the title.
pavl · 9h ago
This looks so good! Will it be available on replicate?
jimmydoe · 6h ago
This is one year old; I wonder why it's being posted now.
serf · 7h ago
Not to be a grump, but why was this posted just now? Has something changed? YOLO-World has been around for a bit now.
3vidence · 5h ago
The drawback of YOLO architectures is that they use predefined object categories that are part of the training process. If you want to adapt YOLO to a new domain, you need to retrain it with your new category labels.
This work presents a version of YOLO that can work on new categories without retraining; instead, it keeps a real-time "dictionary" of examples that you can seamlessly update. Seems like a very useful algorithm to me (usage sketch below).
Edit: apologies, I misread your comment. I thought it was asking why this is different from regular YOLO.
https://github.com/AILab-CVC/YOLO-World
https://huggingface.co/spaces/stevengrove/YOLO-World/tree/ma...
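A usage sketch of that swap-the-vocabulary-at-runtime idea, assuming the ultralytics YOLO-World integration (the YOLOWorld class, set_classes, and the yolov8s-world.pt checkpoint are that package's API, not the paper's own repo):

    from ultralytics import YOLOWorld

    model = YOLOWorld("yolov8s-world.pt")

    # No retraining: set a new text vocabulary and run inference again.
    model.set_classes(["forklift", "safety vest", "pallet"])
    results = model.predict("warehouse.jpg", conf=0.25)
    results[0].show()

    model.set_classes(["fox", "cat", "hedgehog"])  # entirely different domain, same weights
    results = model.predict("garden_cam.jpg")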