Meta Unveils SAM, An AI Tool That Can Identify Objects Within An Image; Here’s How It Will Work

New York: Facebook parent Meta on Wednesday released a paper detailing its latest A.I. model that can “segment” different items within photographs. The company’s research division said it released the Segment Anything Model (SAM) and the corresponding dataset to foster research into foundation models for computer vision.

Meta said SAM is capable of identifying objects within images and videos even in cases where it had not encountered those items in its training. Users can select objects by clicking on them or by using text prompts such as “cat” or “chair”. In a demonstration, SAM accurately drew boxes around multiple cats in a photo in response to the written prompt.

Here’s How SAM Works

How is SAM different from other A.I. models?

  • SAM is promptable, which means it can take various input prompts, such as points or boxes, to specify what object to segment. For example, you can draw a box around a person’s face, and the Segment Anything Model will generate a mask for the face. You can also give multiple prompts to segment multiple objects at once. The SAM model can handle complex scenes with occlusions, reflections, and shadows.
  • SAM is trained on a massive dataset of 11 million images and 1.1 billion masks, which is the largest segmentation dataset to date. This dataset covers a wide range of objects and categories, such as animals, plants, vehicles, furniture, food, and more. SAM can segment objects that it has never seen before, thanks to its generalization ability and data diversity.
  • SAM has strong zero-shot performance on a variety of segmentation tasks. Zero-shot means that SAM can segment objects without any additional training or fine-tuning on a specific task or domain. For example, SAM can segment faces, hands, hair, clothes, and accessories without any prior knowledge or supervision. SAM can also segment objects in different modalities, such as infrared images or depth maps.
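
To make “promptable” concrete, here is a toy Python sketch (NumPy only). This is not Meta’s API: the label map, helper names, and box heuristic are illustrative assumptions; SAM itself predicts masks with a neural network rather than reading them from a pre-made label map.

```python
import numpy as np

# Toy "segmentation": a label map where each pixel holds an object id
# (0 = background). A real model predicts masks; here they are given.
label_map = np.array([
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [2, 2, 0, 0],
    [2, 2, 0, 3],
])

def mask_from_point(labels, y, x):
    """Point prompt: return the binary mask of the object under (y, x)."""
    return labels == labels[y, x]

def mask_from_box(labels, y0, x0, y1, x1):
    """Box prompt: return the mask of the most common non-background
    object inside the box (a crude stand-in for SAM's box prompt)."""
    window = labels[y0:y1, x0:x1]
    ids, counts = np.unique(window[window > 0], return_counts=True)
    return labels == ids[np.argmax(counts)]

point_mask = mask_from_point(label_map, 0, 1)    # "click" on object 1
box_mask = mask_from_box(label_map, 2, 0, 4, 2)  # box around object 2
print(point_mask.sum(), box_mask.sum())          # 4 4
```

Both prompt styles resolve to the same output type, a binary mask per object, which is what lets a single model accept points, boxes, or several prompts at once.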

Segment Anything Model (SAM model) features

Story by India.com News Desk • 5h ago




  • Using the SAM model, users may quickly and easily segment objects by selecting individual points to include or omit from the segmentation. A boundary box can also be used as a cue for the model.
  • When uncertainty exists regarding the item being segmented, the SAM model can produce multiple valid masks, a crucial capability for solving segmentation problems in the real world.
  • Automatic object detection and masking are now simple with the Segment Anything Model.
  • After precomputing the image embedding, the Segment Anything Model can provide a segmentation mask for any prompt instantly, enabling real-time interaction with the model.
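
The last point describes an encoder/decoder split: run the heavy image encoder once, then answer each prompt with a lightweight decoder. A toy Python sketch of that interaction pattern follows (the class and its thresholding “embedding” are invented for illustration, not SAM’s real implementation):

```python
import numpy as np

class ToyPromptableSegmenter:
    """Mimics SAM's split: an expensive per-image step runs once,
    then a lightweight per-prompt step reuses the cached result."""

    def __init__(self, image):
        # Expensive step, done once per image. Here the "embedding"
        # is just a threshold map standing in for a neural encoder.
        self.embedding = (image > image.mean()).astype(int)

    def predict(self, y, x):
        # Cheap per-prompt step: reuse the cached embedding.
        return self.embedding == self.embedding[y, x]

image = np.zeros((4, 4))
image[1:3, 1:3] = 1.0               # a bright square "object"

seg = ToyPromptableSegmenter(image)  # embed once
m1 = seg.predict(1, 1)               # prompt inside the object
m2 = seg.predict(0, 0)               # prompt on the background
print(m1.sum(), m2.sum())
```

Because the embedding is computed once up front, each additional click only pays the cheap `predict` cost, which is what makes real-time interaction possible.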

How To Use The Segment Anything Model (SAM model)?

SAM was developed by Meta AI Research (formerly Facebook AI Research) and is publicly available on GitHub. You can also try SAM online with a demo or download the dataset (SA-1B) of 1.1 billion masks and 11 million images.

  1. Open the Segment Anything Model demo (or download the code from GitHub).
  2. Upload an image or choose one from the gallery.
  3. Mask areas by adding points: select Add Area, then select the object.
  4. Refine the mask: select Remove Area, then select the area to exclude.
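
The add/remove workflow in the steps above can be sketched as a toy Python function. The nearest-click heuristic below is invented for illustration; the real demo re-runs the model with the updated point prompts.

```python
import numpy as np

def predict_from_points(points, labels, shape):
    """Toy stand-in for re-running the model with labeled clicks:
    a pixel joins the mask if its nearest click is an 'Add Area'
    (label 1) click rather than a 'Remove Area' (label 0) click."""
    ys, xs = np.indices(shape)
    # squared distance from every pixel to every click
    d = np.stack([(ys - y) ** 2 + (xs - x) ** 2 for y, x in points])
    nearest = np.argmin(d, axis=0)
    return np.array(labels)[nearest] == 1

# Step 3: one 'Add Area' click selects everything nearby
mask = predict_from_points([(1, 1)], [1], (4, 4))
# Step 4: a 'Remove Area' click carves out the far corner
mask = predict_from_points([(1, 1), (3, 3)], [1, 0], (4, 4))
print(mask.sum())
```

Each refinement click changes the prompt set, not the image, so the expensive image processing never has to be repeated.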

Meta has been experimenting with generative AI, which creates new content rather than simply identifying or categorising data. CEO Mark Zuckerberg has said that incorporating such technology into Meta’s apps is a priority this year. Examples of generative AI tools that the company is developing include one that creates surreal videos from text prompts and another that generates children’s book illustrations from prose.

Meta already uses similar technology internally to tag photos, moderate prohibited content, and recommend posts to users of Facebook and Instagram. The release of SAM is expected to broaden access to this type of technology.
