OpenAI launches Search, Meta's ‘Segment Anything Model’, and Llama 3.1 Climbs the Leaderboard (7.31.24)
This week, we saw two launches that touch search and video processing. OpenAI's Search product has finally been released in closed beta after months of rumors, and Meta's SAM 2 (Segment Anything Model 2) promises to redefine real-time object segmentation and tracking in images and videos.
– Sasha Krecinic
OpenAI is testing a new prototype called SearchGPT, designed to provide fast, timely answers with clear and relevant sources. According to a blog post, the prototype is launching with a small group of users for feedback, with plans to integrate its real-time search capabilities into ChatGPT. The launch comes a few weeks after OpenAI's acquisition of Rockset, and after weeks of rumors that a search product was imminent. Notable OpenAI researcher Noam Brown commented that this is "another step toward general AI personal assistants for all." [openai.com]
Meta's Llama-3.1-405B has reached #3 on the Overall Arena leaderboard, the first time an open model has made the top 3. The model was evaluated over the past week, receiving over 10K community votes. [lmsys.org]
Meta has launched the Segment Anything Model 2 (SAM 2), a groundbreaking unified model for real-time, promptable object segmentation in both images and videos. Following the success of SAM, SAM 2 offers state-of-the-art performance and introduces "memory attention": a transformer that attends across frames, storing spatial memories and compact "object pointer" tokens in a FIFO "memory bank" of recent and prompted frames. SAM 2 can segment any object in any video or image, including objects it has never seen, enabling a diverse range of use cases without custom adaptation. It is open source under the Apache 2.0 license and ships with the SA-V dataset, containing approximately 51,000 real-world videos and more than 600,000 masklets. Meta provides a web demo, research paper, and datasets that are worth checking out! [fb.me]
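For a rough intuition of the memory-bank mechanism, here is a minimal PyTorch sketch. The names (`MemoryBank`, `condition_on_memory`), shapes, and sizes are assumptions for exposition only, not Meta's actual SAM 2 code:

```python
import torch
from collections import deque

# Illustrative sketch only: a FIFO memory bank plus cross-attention, as
# described in the SAM 2 announcement. Names, shapes, and sizes here are
# assumptions, not Meta's implementation. Memories are token sequences
# of shape (batch=1, num_tokens, DIM).

DIM = 256

class MemoryBank:
    """Holds memories from recent frames (FIFO) and prompted frames (kept)."""
    def __init__(self, max_recent: int = 6):
        self.recent = deque(maxlen=max_recent)  # oldest recent memory is evicted first
        self.prompted = []                      # memories from user-prompted frames persist

    def add(self, frame_mem: torch.Tensor, obj_ptr: torch.Tensor, prompted: bool = False):
        # Store spatial memory features alongside the compact object-pointer token.
        entry = torch.cat([frame_mem, obj_ptr], dim=1)
        (self.prompted if prompted else self.recent).append(entry)

    def tokens(self) -> torch.Tensor:
        # One flat token sequence for cross-attention over all stored memories.
        return torch.cat(self.prompted + list(self.recent), dim=1)

attn = torch.nn.MultiheadAttention(embed_dim=DIM, num_heads=8, batch_first=True)

def condition_on_memory(frame_feats: torch.Tensor, bank: MemoryBank) -> torch.Tensor:
    """Cross-attend current-frame features (queries) over the memory bank (keys/values)."""
    mem = bank.tokens()
    out, _ = attn(frame_feats, mem, mem)
    return out

# Seed the bank from a prompted frame, then track through subsequent frames.
bank = MemoryBank()
bank.add(torch.randn(1, 64, DIM), torch.randn(1, 1, DIM), prompted=True)
for _ in range(5):
    feats = torch.randn(1, 64, DIM)                           # per-frame image features
    conditioned = condition_on_memory(feats, bank)
    bank.add(conditioned, conditioned.mean(1, keepdim=True))  # crude stand-in object pointer
```

Keeping prompted frames out of the FIFO queue means user corrections are never evicted as the video rolls forward, while the bounded queue of recent frames keeps per-frame compute constant regardless of video length.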