Mark Zuckerberg Showcases Meta's Latest Video AI Tool, Segment Anything Model 2: Here's How It Works
According to Meta, the SAM 2 AI model's capabilities extend beyond images to include real-time object tracking across video frames
Meta has introduced Segment Anything Model 2 (SAM 2), an upgraded artificial intelligence (AI) model designed to handle intricate computer vision tasks, including video editing. The announcement, made on Monday, marks a significant enhancement over the previous version launched last year, which was utilised in Instagram's Backdrop and Cutouts features. Meta boss Mark Zuckerberg took to Instagram to show how the new model works in a fun video.
What Can SAM 2 Do?
SAM 2 boasts advanced capabilities, including object segmentation and tracking within videos. According to Meta, the model's capabilities extend beyond still images to real-time object tracking across video frames, even in challenging scenarios where objects move rapidly, change appearance, or are temporarily obscured.
The new model builds on a transformer architecture and adds a streaming memory component, allowing it to process video frames sequentially and efficiently. Meta emphasised that SAM 2 was trained on its extensive video segmentation dataset, known as the SA-V dataset, ensuring robust performance across a variety of applications.
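For readers curious what driving such a promptable, streaming video predictor might look like in practice, here is a minimal sketch. It assumes Meta's open-source sam2 Python package; the import path, function names (build_sam2_video_predictor, init_state, add_new_points_or_box, propagate_in_video) and the checkpoint and config filenames are assumptions drawn from that release, not details confirmed in this article.

# Illustrative sketch only: module paths, function names, and file names below
# are assumptions based on Meta's open-source sam2 release.
import numpy as np
import torch
from sam2.build_sam import build_sam2_video_predictor  # assumed import path

checkpoint = "checkpoints/sam2_hiera_large.pt"  # assumed checkpoint filename
model_cfg = "sam2_hiera_l.yaml"                 # assumed config filename
predictor = build_sam2_video_predictor(model_cfg, checkpoint)

with torch.inference_mode():
    # The streaming-memory design means the clip is handled frame by frame:
    # one prompt identifies the object, and the model then propagates its
    # mask through the rest of the video.
    state = predictor.init_state(video_path="clip_frames/")  # directory of frames

    # Prompt with a single foreground click on frame 0 (label 1 = foreground).
    predictor.add_new_points_or_box(
        inference_state=state,
        frame_idx=0,
        obj_id=1,
        points=np.array([[320, 240]], dtype=np.float32),
        labels=np.array([1], dtype=np.int32),
    )

    # Track the prompted object across the remaining frames.
    for frame_idx, obj_ids, mask_logits in predictor.propagate_in_video(state):
        masks = (mask_logits > 0.0).cpu().numpy()  # per-object binary masks
        # ...cut out or recolour the object in this frame, etc.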
Meta highlighted that the initial version of SAM had diverse applications, such as aiding marine scientists in segmenting sonar images to analyse coral reefs, supporting disaster relief through satellite imagery analysis, and assisting in medical fields by segmenting cellular images for skin cancer detection.
SAM 2 is open source: its code and weights are available on Meta's GitHub page under the permissive Apache 2.0 licence, which allows research, academic, and commercial use, inviting widespread testing and innovation.
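For single still images, the same release also includes an image predictor. Below is a rough, unofficial sketch of loading the published weights and prompting the model with one point; the SAM2ImagePredictor class, build_sam2 helper, set_image and predict methods, and the filenames shown are all assumptions based on the open-source repository rather than details given here.

import numpy as np
from PIL import Image
from sam2.build_sam import build_sam2                      # assumed import path
from sam2.sam2_image_predictor import SAM2ImagePredictor   # assumed import path

# Checkpoint and config filenames are assumptions based on the public release.
predictor = SAM2ImagePredictor(
    build_sam2("sam2_hiera_l.yaml", "checkpoints/sam2_hiera_large.pt")
)

# Load an RGB image and hand it to the predictor.
image = np.array(Image.open("photo.jpg").convert("RGB"))
predictor.set_image(image)

# Prompt with one foreground click at pixel (x=500, y=375); label 1 = foreground.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),
)
print(masks.shape, scores)  # candidate masks with their confidence scores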
Meta envisions SAM 2 facilitating video editing and AI-driven video generation, as well as enhancing mixed-reality experiences. Additionally, its object-tracking feature is expected to streamline the annotation of visual data, thereby improving the training of other computer vision systems.
Overall, the introduction of SAM 2 underscores Meta's commitment to advancing AI technology and making sophisticated tools available for broader use within the tech and scientific communities.