Thanks for your wonderful project! I have a question about the case where an image contains 4 identical objects with different orientations: I'm unsure about the best approach for the mask input during training. Would it be necessary to provide 4 separate image/mask pairs for these objects, or should I generate a single mask image that contains all of the objects, each with a distinct mask id? I want the model to output all the masks with different mask ids. An example image is below. Thanks so much!
Hi, based on your description, your needs fall under the category of "instance segmentation". Unfortunately, the SAM2-UNet framework is currently based on "semantic segmentation", and the output does not contain mask ids. There is a temporary solution that can meet your task requirements:
Annotate the image so that one single mask contains all the objects you want to identify (e.g., all four objects in the image).
For the segmentation prediction, use OpenCV to compute the connected components of the mask, and assign each connected component its own id as a different object (see the sketch below).
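A minimal sketch of this temporary solution, assuming the model's prediction has been saved as a single-channel grayscale mask; the file name and the binarization threshold are placeholders:

```python
import cv2
import numpy as np

# Load the predicted semantic mask (values 0-255) and binarize it.
pred = cv2.imread("pred_mask.png", cv2.IMREAD_GRAYSCALE)
binary = (pred > 127).astype(np.uint8)

# Each 8-connected foreground region becomes one instance.
# Label 0 is the background, so the objects receive ids 1, 2, 3, ...
num_labels, instance_ids = cv2.connectedComponents(binary, connectivity=8)

print(f"found {num_labels - 1} instances")
# instance_ids is an int32 map where each object carries a distinct mask id.
```

Note that this only works when the objects do not touch or overlap in the predicted mask; touching objects would be merged into one connected component.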
A more principled solution is to wrap SAM2-UNet in a unified segmentation framework that supports instance segmentation, such as K-Net [1].