
Posted by lukeinator42 11/19/2025

Meta Segment Anything Model 3 (ai.meta.com)
692 points | 134 comments
cebert 11/20/2025|
I’m thankful that Meta still contributes to open source and shares models like this. I know there’s several reasons to not like the company, but actions like this are much appreciated and benefit everyone.
cheschire 11/20/2025||
Does everyone forget 2023, when someone leaked the Llama weights to 4chan? Meta then started issuing takedowns on the leaks, trying to stop it.

Meta took the open path because their initial foray into AI was compromised, and they have been doing their best to kneecap everyone else ever since.

I like the result, but let's not pretend it's out of gracious intent.

prodigycorp 11/20/2025|||
Wait a minute. I'm no Meta fan, but that leak wasn't internal. Meta released the Llama weights to researchers first; the leak came from that initial batch of users, not from inside Meta. IIRC, the model was never meant to be closed-weight.
poutrathor 11/20/2025||
I agree. How can the previous comment be on Hacker News? Everyone here has followed the Llama release saga. The famous cheeky PR on their GitHub with the torrent link was genius comedy.
zamadatix 11/20/2025||||
This might make sense for explaining n=1 releases of Llama being open weight. Even OpenAI started with open weight models and moved to closed weight though, so why would this have forever locked Meta into releasing all models as open weight and across so many model families if they weren't really interested in that path as a strategy in its own right?
deadbabe 11/20/2025|||
There is so much malice in the world, let’s just pretend for once it is gracious intent. Feels better.
alex1138 11/20/2025|||
Sure, but people have the right to ask questions: for example, Zuck's pledge to give away 99% of his wealth, which people pointed out might be a tax-avoidance scheme.

The retort was essentially "Can't you just be nice?", but people have the right to ask questions; sometimes the questions reveal real corruption that actually does go on.

abustamam 11/20/2025||
I think it is valid to question why he'd be giving away 99% of his fortune, because let's be honest, Zuck has not proven that he is trustworthy. But at the same time, he could just... not donate that much.

Yes, the 99% did NOT go straight into non-profits, instead being funneled into his foundation, which has donated millions to actual charitable organizations, but that's arguably millions that wouldn't otherwise have gone to those orgs.

Is it a bit disingenuous to say he's donating 99% of his wealth when his foundation has only donated a few hundred million (or few billion?), which is a single percent of his wealth? Yeah, probably. But a few billion is more than zero, and is undeniably helpful to those organizations.

visioninmyblood 11/20/2025|||
Not a fan of the company for the social media, but I have to appreciate all the open sourcing. None of the other top labs release their models like Meta does.
magicalist 11/20/2025|||
> None of the other top labs release their models like Meta does

Don't basically all the "top labs" except Anthropic now have open weight models? And Zuckerberg said they were now going to be "careful about what we choose to open source" in the future, which is a shift from their previous rhetoric about "Open Source AI is the Path Forward".

patrickk 11/20/2025||||
They're not doing it out of the goodness of their heart, they're deploying a classic strategy known as "Commoditize Your Complement"[1], to ward off threats from OpenAI and Anthropic. It's only a happy accident that the little guy benefits in this instance.

Facebook is a deeply scummy company[2] and their stranglehold on online advertising spend (along with Google) allows them to pour enormous funds into side bets like this.

[1] https://gwern.net/complement

[2] https://en.wikipedia.org/wiki/Careless_People

unsungNovelty 11/20/2025|||
I'm not even close to OK with Facebook. But none of the other companies do this, and Mark has been open about it; I remember him saying the same thing very openly in an interview. There's something oddly respectable about NOT sugarcoating it with good PR and marketing. Unlike OpenAI.
arcanemachiner 11/20/2025||||
Well, when your incentives happen to align with those of a faceless mega-corporation, you gotta take what you can get.
GCUMstlyHarmls 11/20/2025||
You don't have to thank them for it though.
visioninmyblood 11/20/2025|||
I spent years working on training these models. Inference is always the fruit; the effort that goes into getting the data is the most time-consuming part. I haven't been a fan of Meta for a long time, but open sourcing the weights helps move the field forward in general. So I have to be thankful for that.
throwaway98797 11/20/2025|||
you don’t, that’s true

i prefer to say thank you when someone is doing something good

jayd16 11/20/2025||||
We can still like it. We're not nominating Nobel Prizes or something.
_giorgio_ 11/20/2025|||
Among the top 10 tech companies and beyond, they have the most successful open source program.

These projects come to my mind:

SAM segment anything.

PyTorch

LLama

...

Open source datacenters and server blueprints.

the following instead comes from grok.com

Meta’s open-source hall of fame (Nov 2025)

---------------------

Llama family (2 → 3.3) – 2023-2025 >500k total stars · powers ~80% of models on Hugging Face Single-handedly killed the closed frontier model monopoly

---------------------

PyTorch – 2017 85k+ stars · the #1 ML framework in research TensorFlow is basically dead in academia now

---------------------

React + React Native – 2013/2015 230k + 120k stars Still the de-facto UI standard for web & mobile

---------------------

FAISS – 2017 32k stars · used literally everywhere (even inside OpenAI) The vector similarity search library

---------------------

Segment Anything (SAM 1 & 2) – 2023-2024 55k stars Revolutionized image segmentation overnight

---------------------

Open Compute Project – 2011 Entire open-source datacenter designs (servers, racks, networking, power) Google, Microsoft, Apple, and basically the whole hyperscaler industry build on OCP blueprints

---------------------

Zstandard (zstd) – 2016 Faster than gzip · now in Linux kernel, NVIDIA drivers, Cloudflare, etc. The new compression king

---------------------

Buck2 – 2023 Rust build system, 3-5× faster than Buck1 Handles Meta’s insane monorepo without dying

---------------------

Prophet – 2017 · 20k stars Go-to time-series forecasting library for business

---------------------

Hydra – 2020 · 9k stars Config management that saved the sanity of ML researchers

---------------------

Docusaurus – 2017 · 55k stars Powers docs for React, Jest, Babel, etc.

---------------------

Velox – 2022 C++ query engine · backbone of next-gen Presto/Trino

---------------------

Sapling – 2023 Git replacement that actually works at 10M+ file scale

---------------------

Meta’s GitHub org is now >3 million stars total — more than Google + Microsoft + Amazon combined.

---------------------

Bottom line: if you’re using modern AI in 2025, there’s a ~90% chance you’re running on something Meta open-sourced for free.

hamburglar 11/20/2025||
OSQuery
unsungNovelty 11/20/2025|||
I don't think it's open source. It says SAM license; most likely source-available.
uudecoded 11/20/2025|||
Agreed. The community orientation is great now. I had mixed feelings about them after finding and reporting a live vuln (medium-severity) back in 2005 or so.[1] I'm not really into social media but it does seem like they've changed their culture for the better.

[1] I didn't take them up on the offer to interview in the wake of that and so it will be forever known as "I've made a huge mistake."

siva7 11/20/2025|||
If they really deliver a model that can track and describe existing images/videos well, that would be a huge breakthrough. There are many extremely useful cases in medicine, law, surveillance, software, and so on. Their competition sucks at this.
throwuxiytayq 11/20/2025||
Disappointingly, every time Zuck hands out some free shit people instantly forget that he and his companies are a cancer upon humanity. Come on dude, "several reasons to not like the company" doesn't fucking cut it.
daemonologist 11/19/2025||
First impressions are that this model is extremely good - the "zero-shot" text prompted detection is a huge step ahead of what we've seen before (both compared to older zero-shot detection models and to recent general purpose VLMs like Gemini and Qwen). With human supervision I think it's even at the point of being a useful teacher model.

I put together a YOLO tune for climbing hold detection a while back (trained on 10k labels) and this is 90% as good out of the box - just misses some foot chips and low contrast wood holds, and can't handle as many instances. It would've saved me a huge amount of manual annotation though.
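
For anyone curious about the teacher-model angle, the loop I have in mind looks roughly like the sketch below. Only the mask-to-YOLO-box conversion is concrete; load_sam3 and predict_with_text are hypothetical placeholders for whatever the released SAM 3 API actually exposes.

    # Sketch: turn text-prompted SAM 3 masks into YOLO-format labels for a small student model.
    # NOTE: load_sam3 / predict_with_text are hypothetical stand-ins, not the real SAM 3 API.
    import glob
    import numpy as np
    from PIL import Image

    def mask_to_yolo_box(mask: np.ndarray) -> tuple:
        """Binary mask (H, W) -> normalized YOLO (cx, cy, w, h)."""
        ys, xs = np.nonzero(mask)
        h, w = mask.shape
        x0, x1, y0, y1 = xs.min(), xs.max(), ys.min(), ys.max()
        return ((x0 + x1) / 2 / w, (y0 + y1) / 2 / h, (x1 - x0) / w, (y1 - y0) / h)

    model = load_sam3("sam3_checkpoint.pt")  # hypothetical loader
    for path in glob.glob("walls/*.jpg"):
        image = np.array(Image.open(path).convert("RGB"))
        masks = predict_with_text(model, image, prompt="climbing hold")  # hypothetical call
        with open(path.rsplit(".", 1)[0] + ".txt", "w") as f:
            for m in masks:  # one binary mask per detected hold
                cx, cy, bw, bh = mask_to_yolo_box(m)
                f.write(f"0 {cx:.6f} {cy:.6f} {bw:.6f} {bh:.6f}\n")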

rocauc 11/19/2025||
As someone who works on a platform users have used to label 1B images, I'm bullish that SAM 3 can automate at least 90% of the work. Data prep is flipped to models being human-assisted instead of humans being model-assisted (see "autolabel" https://blog.roboflow.com/sam3/). I'm optimistic that the majority of users can now start by deploying a model and then curating data, instead of the inverse.
pierrec 11/20/2025||
I'm guessing you worked on the Stokt app or something similar! It's certainly become one of the best established apps in climbing.
gs17 11/19/2025||
The 3D mesh generator is really cool too: https://ai.meta.com/sam3d/ It's not perfect, but it seems to handle occlusion very well (e.g. a person in a chair can be separated into a person mesh and a chair mesh) and it's very fast.
Animats 11/19/2025|
It's very impressive. Do they let you export a 3D mesh, though? I was only able to export a video. Do you have to buy tokens or something to export?
TheAtomic 11/19/2025|||
I couldn't download it. The model appears to be comparable to Sparc3D, Hunyuan, etc., but without a download, who can say? It is much faster though.
visioninmyblood 11/19/2025||
You can download it at https://github.com/facebookresearch/sam3. For 3D: https://github.com/facebookresearch/sam-3d-objects

I actually found the easiest way was to run it for free to see if it works for my use case of person deidentification https://chat.vlm.run/chat/63953adb-a89a-4c85-ae8f-2d501d30a4...

WhiteNoiz3 11/19/2025||||
The models it creates are gaussian splats, so if you are looking for traditional meshes you'd need a tool that can create meshes from splats.
bahmboo 11/19/2025||
Are you sure about that? They say "full 3D shape geometry, texture, and layout" which doesn't preclude it being a splat but maybe they just use splats for visualization?
FeiyouG 11/20/2025||
In their paper they mention using a "latent 3D grid" internally, which can be converted to a mesh or Gaussian splats using a decoder. The spatial layout of the points shown in the demo doesn't resemble a Gaussian splat either.
ehnto 11/20/2025||
The grandparent's linked article says "mesh or splats" a bunch, and as you said, their examples wouldn't work if it were splats. I feel they are clearly illustrating its ability to export meshes.
modeless 11/19/2025|||
The model is open weights, so you can run it yourself.
bahmboo 11/19/2025||
Like the models before it, it struggles with my use case of tracing circuit board features. It's great with a pony on the beach, but it really isn't made for more rote, industrial-type applications. With proper fine-tuning it would probably work much better, but I haven't tried that yet. There are good examples online though.
maurits 11/20/2025||
I would take DINOv3 [1] for a spin for that specific use case. Or, don't laugh, Nano Banana [2].

[1]: https://github.com/facebookresearch/dinov3 [2]: https://imgeditor.co/

squigz 11/20/2025||
Wow that sounds like a really interesting use-case for this. Can you link to some of those examples?
bahmboo 11/20/2025||
I don't have anything specific to link to, but you could try it yourself with line art. Try something like a mandala or a coloring-book-style image. The model is trying to capture something that encompasses an entity; it isn't interested in the subfeatures of the thing. With a mandala, it wants to segment the symbol in its entirety. It will segment some subfeatures, like a leaf-shaped piece, but it doesn't want to segment just the lines such that you get a stencil.

I hope this makes sense and I'm using terms loosely. It is an amazing model but it doesn't work for my use case, that's all!

visioninmyblood 11/20/2025|||
Actually, a combination of LLMs and VLMs could work in such cases. I just tested it on some circuit boards: https://chat.vlm.run/c/f0418b26-af20-4b3d-a873-ff954f5117af
bahmboo 11/20/2025|||
Thanks for taking the time to try that out and sharing it! Our problem is with defects on the order of 50 to 100 microns on bare boards. Defects that only a trained tech with a microscope can see - even then it's very difficult.
visioninmyblood 11/20/2025||
Seems like an exciting problem. Hope you get tools in the future that can solve it well.
squigz 11/20/2025|||
It looks like there could be a lot of potential here for learning/repair/debugging/reverse engineering. Really cool application of this stuff!
bahmboo 11/20/2025||
Generally this is called automated anomaly detection.
sneilan1 11/20/2025|||
Have you found any models that work better for your use case?
bahmboo 11/20/2025||
To answer your question: no, but we haven't looked further because SAM is SOTA. We trained our own model with limited success (I'm no expert). We are pursuing a classical computer vision approach; at some level, segmenting a monochrome image resembles (or actually is) an old-fashioned flood fill, very generally. This fantastic SAM model is maybe not the right fit for our application.

Edit: answered the question

grumbelbart2 11/20/2025||
This is a "classic" machine vision task that has traditionally been solved with non-learning algorithms. (That in part enabled the large volume, zero defect productions in electronics we have today.) There are several off-the-shelf commercial MV tools for that.

Deep Learning-based methods will absolutely have a place in this in the future, but today's machines are usually classic methods. Advantages are that the hardware is much cheaper and requires less electric and thermal management. This changes these days with cheaper NPUs, but with machine lifetimes measured in decades, it will take a while.
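
For a sense of what the classic approach looks like, a minimal golden-reference diff with OpenCV is something like the sketch below; the thresholds and minimum area are illustrative, and real systems add sub-pixel registration, lighting normalization, and per-region tolerances.

    # Minimal sketch of classical golden-reference inspection with OpenCV.
    import cv2
    import numpy as np

    golden = cv2.imread("golden_board.png", cv2.IMREAD_GRAYSCALE)
    test = cv2.imread("test_board.png", cv2.IMREAD_GRAYSCALE)

    # Assumes the two images are already registered by the fixture/camera setup.
    diff = cv2.absdiff(golden, test)
    _, defects = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)  # illustrative threshold
    defects = cv2.morphologyEx(defects, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

    # Report connected blobs above a minimum area as defect candidates.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(defects)
    for i in range(1, n):  # label 0 is background
        if stats[i, cv2.CC_STAT_AREA] > 20:
            x, y = centroids[i]
            print(f"defect candidate at ({x:.0f}, {y:.0f}), area={stats[i, cv2.CC_STAT_AREA]}")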

bahmboo 11/26/2025|||
Way late response: the off-the-shelf stuff is very, very expensive, as one would expect for industrial solutions. I was tasked with building something from scratch (our own solution). It was quite the journey and was not successful. If anyone has pointers or tips in this department, I would truly love to hear them!
squigz 11/20/2025|||
My initial thought on hearing about this was it being used for learning. It would be cool to be able to talk to an LLM about how a circuit works, what the different components are, etc.
Benjamin_Dobell 11/19/2025||
For background removal (at least for my niche use case of removing the background from kids' drawings — https://breaka.club/blog/why-were-building-clubs-for-kids) I think BiRefNet v2 still works slightly better.

SAM3 seems to trace the images less precisely — it'll discard bits where kids draw outside the lines, which is okay, but it also seems to struggle around sharp corners and includes a bit of the white page that I'd like cut out.

Of course, SAM3 is significantly more powerful in that it does much more than simply cut out images. It seems to be able to identify what these kids' drawings represent. That's very impressive; AI models are typically trained on photos and adult illustrations and struggle with children's drawings. So I could perhaps still use this for identifying content, giving kids more freedom to draw what they like, and then, unprompted, attach appropriate behavior to their drawings in-game.

warangal 11/20/2025||
I know it may not be what you are looking for, but most such models generate multi-scale image features through an image encoder, and those can be fine-tuned very easily for a particular task, like polygon prediction in your use case. I understand the main benefit of promptable models is to reduce or remove this kind of work in the first place, but it could be worthwhile and much more accurate if you have a specific high-load task!
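
A generic sketch of that pattern, assuming any frozen backbone that returns a coarse feature map (this is not SAM 3's actual interface, just the freeze-the-encoder, train-a-small-head idea):

    # Sketch: freeze a pretrained image encoder, train only a small segmentation head on its features.
    # `encoder` stands in for any frozen backbone returning (B, feat_dim, H/16, W/16) features.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FrozenEncoderSegmenter(nn.Module):
        def __init__(self, encoder: nn.Module, feat_dim: int, num_classes: int):
            super().__init__()
            self.encoder = encoder.eval()
            for p in self.encoder.parameters():
                p.requires_grad = False           # keep the foundation features intact
            self.head = nn.Sequential(            # tiny trainable head
                nn.Conv2d(feat_dim, 256, 3, padding=1), nn.ReLU(),
                nn.Conv2d(256, num_classes, 1),
            )

        def forward(self, x):
            with torch.no_grad():
                feats = self.encoder(x)           # (B, feat_dim, h, w)
            logits = self.head(feats)
            return F.interpolate(logits, size=x.shape[-2:], mode="bilinear", align_corners=False)

    # Training then optimizes only the head:
    # optim = torch.optim.AdamW(model.head.parameters(), lr=1e-4)
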
florians 11/20/2025||
Curious about background removal with BiRefNet. Would you consider it the best model currently available? What other options exist that are popular but not as good?
Benjamin_Dobell 11/20/2025||
I'm far from an expert in this area. I've also tried Bria RMBG 1.4, Bria RMBG 2.0, older BiRefNet versions, and I think another whose name I forget. The fact that I'm removing backgrounds that are predominantly white (a sheet of paper) in the first place probably changes things significantly, so it's hard to extrapolate my results to general background removal.

BiRefNet 2 seems to do a much better job of correctly removing background enclosed within the content's outline — think hands on hips, that region that's fully enclosed but that you want removed. It's not just that though: some other models will remove this, but they'll be overly aggressive and also remove white areas where kids haven't coloured in perfectly, or the intentionally blank whites of eyes, for example.

I'm putting these images in a game world once they're cut out, so if things are too transparent, they look very odd.

fzysingularity 11/19/2025||
SAM3 is cool - you can already do this more interactively on chat.vlm.run [1], and do much more. It's built on our new Orion [2] model; we've been able to integrate with SAM and several other computer-vision models in a truly composable manner. Video segmentation and tracking is also coming soon!

[1] https://chat.vlm.run

[2] https://vlm.run/orion

visioninmyblood 11/19/2025|
Wow this is actually pretty cool, I was able to segment out the people and dog in the same chat. https://chat.vlm.run/chat/cba92d77-36cf-4f7e-b5ea-b703e612ea...
luckyLooking 11/19/2025|||
Even works with long range shots. https://chat.vlm.run/chat/e8bd5a29-a789-40aa-ae31-a510dc6478...
fzysingularity 11/19/2025|||
Nice, that's pretty neat.
clueless 11/19/2025||
With an average latency of 4 seconds, this still couldn't be used on real-time video, correct?

[Update: I should have mentioned I got the 4-second figure from the roboflow.com links in this thread]

Etheryte 11/19/2025||
Didn't see where you got those numbers, but surely that's just a problem of throwing more compute at it? From the blog post:

> This excellent performance comes with fast inference — SAM 3 runs in 30 milliseconds for a single image with more than 100 detected objects on an H200 GPU.

v9v 11/20/2025|||
For the first SAM model, you needed to encode the input image, which took about 2 seconds (on a consumer GPU), but then any detection you did on the image was on the order of milliseconds. The blog post doesn't seem too clear on this, but I'm assuming the 30 ms is for the encoder plus 100 runs of the detector.
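
That split is explicit in the original SAM API: the heavy image encoder runs once in set_image, and every prompt afterwards reuses the cached embedding. Roughly:

    # SAM 1: encode once, then prompt many times cheaply.
    import numpy as np
    from PIL import Image
    from segment_anything import SamPredictor, sam_model_registry

    sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
    predictor = SamPredictor(sam)

    image = np.array(Image.open("photo.jpg").convert("RGB"))
    predictor.set_image(image)  # ~seconds: runs the ViT image encoder once

    for x, y in [(500, 300), (620, 410)]:  # each prompt: milliseconds, reuses the embedding
        masks, scores, _ = predictor.predict(
            point_coords=np.array([[x, y]]),
            point_labels=np.array([1]),  # 1 = foreground click
            multimask_output=True,
        )
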
vlovich123 11/20/2025|||
Even if it were 4 s, you can always parallelize the frames to do it "realtime"; the latency of the output will just be 4 s, provided you can get a cluster with 120 or 240 GPUs processing 4 s worth of frames in parallel. (If it's 30 ms per image, then you only need 2 GPUs to do 60 fps on a video stream.)
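
The back-of-the-envelope numbers:

    # Throughput vs. latency once you pipeline frames across GPUs.
    fps = 30
    print(fps * 4.0)    # 120  -> at 4 s/frame you need ~120 frames in flight; output lags by 4 s
    print(60 * 0.030)   # 1.8  -> at 30 ms/frame, ~2 GPUs sustain 60 fps with ~30 ms latency
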
aDyslecticCrow 11/20/2025|||
The model is massive and heavy. I have a hard time seeing this used in real-time. But it's so flexible and accurate it's an amazing teacher for lean CNNs; that's where the real value lies.

I don't even care about the numbers; a vision transformer encoder whose output is too heavy for many edge-compute CNNs to use as input isn't gonna cut it.

hansent 11/20/2025|||
P50 latency on the Roboflow serverless API is 300-400 ms roundtrip for SAM 3 on an image with a text prompt.

You can get an easy-to-use API endpoint by creating a workflow in Roboflow with just the SAM 3 block in it (and hooking up an input parameter to forward the prompt to the model), which is then exposed as an HTTP endpoint. You can use the SAM 3 template and remove the visualization block if you just need a JSON response, for slightly lower latency and a smaller payload.

Internally we're seeing roughly ~200 ms HTTP roundtrip, but our user-facing API currently has some additional latency because we have to proxy to a different cluster where we have more GPU capacity allocated for this model than we can currently get on GCP.
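
For anyone who wants to try it, a request looks roughly like the following; the URL path and JSON field names here are placeholders, so copy the real ones from your workflow's deploy page:

    # Illustrative only: calling a Roboflow workflow endpoint with an image and a text prompt.
    # The URL and field names are placeholders, not the exact API shape.
    import base64
    import requests

    with open("frame.jpg", "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()

    resp = requests.post(
        "https://serverless.roboflow.com/<workspace>/<workflow-id>",  # placeholder URL
        json={
            "api_key": "<YOUR_API_KEY>",
            "inputs": {
                "image": {"type": "base64", "value": image_b64},
                "prompt": "forklift",  # forwarded to the SAM 3 block via the input parameter
            },
        },
        timeout=30,
    )
    print(resp.json())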

yeldarb 11/19/2025||
We (Roboflow) have had early access to this model for the past few weeks. It's really, really good. This feels like a seminal moment for computer vision. I think there's a real possibility this launch goes down in history as "the GPT Moment" for vision. The two areas I think this model is going to be transformative in the immediate term are for rapid prototyping and distillation.

Two years ago we released autodistill[1], an open source framework that uses large foundation models to create training data for training small realtime models. I'm convinced the idea was right, but too early; there wasn't a big model good enough to be worth distilling from back then. SAM3 is finally that model (and will be available in Autodistill today).
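
The Autodistill flow looks roughly like this; the autodistill_sam3 package name below is a guess at what ships today, and the rest follows the documented GroundedSAM/YOLOv8 pattern from memory:

    # Sketch of the distillation loop: big model auto-labels, small realtime model trains.
    from autodistill.detection import CaptionOntology
    from autodistill_sam3 import SAM3      # hypothetical package name for the new base model
    from autodistill_yolov8 import YOLOv8

    # Map text prompts to class names, label a folder of images, then train the student.
    base_model = SAM3(ontology=CaptionOntology({"climbing hold": "hold"}))
    base_model.label(input_folder="./frames", extension=".jpg", output_folder="./dataset")

    target_model = YOLOv8("yolov8n.pt")
    target_model.train("./dataset/data.yaml", epochs=50)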

We are also taking a big bet on SAM3 and have built it into Roboflow as an integral part of the entire build and deploy pipeline[2], including a brand new product called Rapid[3], which reimagines the computer vision pipeline in a SAM3 world. It feels really magical to go from an unlabeled video to a fine-tuned realtime segmentation model with minimal human intervention in just a few minutes (and we rushed the release of our new SOTA realtime segmentation model[4] last week because it's the perfect lightweight complement to the large & powerful SAM3).

We also have a playground[5] up where you can play with the model and compare it to other VLMs.

[1] https://github.com/autodistill/autodistill

[2] https://blog.roboflow.com/sam3/

[3] https://rapid.roboflow.com

[4] https://github.com/roboflow/rf-detr

[5] https://playground.roboflow.com

sorenjan 11/19/2025||
SAM3 is probably a great model to distill from when training smaller segmentation models, but isn't their DINOv2 a better example of a large foundation model to distill from for various computer vision tasks? I've seen it used as a starting point for models doing segmentation and depth estimation. Maybe there's a v3 coming soon?

https://dinov2.metademolab.com/

nsingh2 11/19/2025|||
DINOv3 was released earlier this year: https://ai.meta.com/dinov3/

I'm not sure if the work they did with DINOv3 went into SAM3. I don't see any mention of it in the paper, though I just skimmed it.

yeldarb 11/20/2025|||
We used DINOv2 as the backbone of our RF-DETR model, which is SOTA on realtime object detection and segmentation: https://github.com/roboflow/rf-detr

It makes a great target to distill SAM3 to.

sorenjan 11/20/2025||
> It makes a great target to distill SAM3 to.

Could you expand on that? Do you mean you're starting with the pretrained DINO model and then using SAM3 to generate training data to make DINO into a segmentation model? Do you freeze the DINO weights and add a small adapter at the end to turn its output into segmentations?

dangoodmanUT 11/19/2025|||
I was trying to figure out from their examples, but how are you breaking up the different "things" that you can detect in the image? Are you just running it with each prompt individually?
rocauc 11/19/2025||
The model supports batch inference, so all prompts are sent to the model, and we parse the results.
mchusma 11/19/2025||
Thanks for the links! Can we run RF-DETR in the browser for background removal? This wasn't clear to me from the docs.
yeldarb 11/20/2025||
We have a JS SDK that supports RF-DETR: https://docs.roboflow.com/deploy/sdks/web-browser
hodgehog11 11/19/2025||
This is an incredible model. But once again, we find an announcement for a new AI model with highly misleading graphs. That SA-Co Gold graph is particularly bad. Looks like I have another bad graph example for my introductory stats course...
typpilol 11/20/2025|
Check out the new grok 4.1 graphs. They're even worse
SubiculumCode 11/20/2025|
For my use case, segmentation is all about 3D segmentation of volumes in medical imaging. SAM 2 was tried, mostly using a 2D slice approach, but I don't think it was competitive with the current gold standard, nnU-Net [1]. [1] https://github.com/MIC-DKFZ/nnUNet
aDyslecticCrow 11/20/2025||
U-Net is a brilliant architecture, and it still seems to beat this model at scaling the segmentation mask up from 256x256 back to the full image resolution. I also don't think U-Net really benefits from the massive internal feature size given by the vision transformer used for image encoding.

But I'm impressed by this model's ability to create an image encoding that is independent of the prompt. I feel like there may be lessons in the training approach that can be carried over to U-Net for a more valuable encoding.

visioninmyblood 11/20/2025|||
Agreed that U-Net has been the most used model for medical imaging in the 10 years since the original U-Net paper. I think a combination of LLMs and VLMs could be a way forward for medical imaging. I tried it out here and it works great: https://chat.vlm.run/c/e062aa6d-41bb-4fc2-b3e4-7e70b45562cf
davycro 11/20/2025||
Same. My use case is ultrasound segmentation. These models struggle, understandably so, with medical imaging.