dangoodmanUT | 23 days ago | on: Meta Segment Anything Model 3
I was trying to figure this out from their examples: how are you breaking up the different "things" that you can detect in the image? Are you just running the model with each prompt individually?
rocauc | 23 days ago
The model supports batch inference, so all prompts are sent to the model, and we parse the results.
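For illustration, a minimal sketch of the batched-prompt pattern described above. The `predict_batch` function here is a hypothetical stand-in, not the real SAM 3 API: the point is only the shape of the workflow, where every prompt goes to the model in one call and the flat result list is parsed back into per-prompt groups.

```python
from collections import defaultdict

def predict_batch(image, prompts):
    # Hypothetical stand-in for a promptable segmentation model's
    # batched call; the real SAM 3 interface will differ. Here we
    # fake one detection per prompt, tagged with the index of the
    # prompt that produced it, which is the piece the caller needs
    # in order to parse the results.
    return [
        {"prompt_index": i, "mask": f"mask-for-{p}", "score": 0.9}
        for i, p in enumerate(prompts)
    ]

def detect_all(image, prompts):
    # Send every prompt in a single batched call, then group the
    # flat detection list back by the prompt that produced each hit.
    results = predict_batch(image, prompts)
    grouped = defaultdict(list)
    for det in results:
        grouped[prompts[det["prompt_index"]]].append(det)
    return dict(grouped)

detections = detect_all("street.jpg", ["car", "person", "dog"])
for prompt, dets in detections.items():
    print(prompt, len(dets))
```

The alternative the questioner describes, one forward pass per prompt, gives the same grouping but costs one model call per concept instead of one call total.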