AI: OpenAI & Google ship big updates, World Labs' 3D/spatial demo, Liquid challenges transformers, and breaking down the data scarcity hype... (12.16.24)
OpenAI, Google, World Labs, Liquid AI
The last few weeks have been packed with exciting releases, and in this edition, we’ll dive into the major updates, along with a short thought piece on a persistent narrative: have we really “run out of data,” and is AI progress slowing down as a result?
Happy Holidays!
Sasha Krecinic
Is AI progress starting to 'slow down' because we have 'consumed' all of the data?
Short Answer: No.
Long Answer: AI's momentum is now parallelized: there are several viable pathways to explore on both the research and scaling sides. Yes, some models are huge and underperform expectations, and some models aren't released due to competitive, safety, or alignment concerns. But it isn't wise to judge progress on headlines or on a single variable like ‘total publicly available data’.
Unfortunately, the headlines have focused on the fact that industry figures like Ilya Sutskever and other researchers have stated the era of pre-training is coming to a close. The best way to track the frontier, however, is to track the research developments. What the headlines might not mention is that Ilya also recently said, "Scaling the right thing matters more now than ever.” His point is that there are several different doors to explore, and some might be dead ends, hence the major AI labs' focus on parallelization. Both Sam Altman and Dario Amodei have stated they have a clear line of sight on what to build for the next 18-24 months. Beyond that, planning is arguably hard because the frontier is moving so quickly.
Why are people saying this?
These comments are usually taken out of context. They often reflect one small part of the picture and can be quite misleading. Some of the biggest developments have occurred in the last two months, arriving sooner than many in the field expected. Here are a few examples:
- Test-time training/compute
- Real-time voice APIs / Live AI screen vision
- AI Computer Use
Each of these has the potential to transform industries, and the overall surface area AI touches is expanding much faster than the data is being ‘consumed’. What surprises most people in this industry is how little coverage these developments have received. So when someone says AI progress is "losing steam," ask them what they think about the research pathways and how quickly AI’s surface area is expanding… (A minimal sketch of the first pathway, test-time compute, follows below.)
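To give one of these pathways some texture, here is a minimal best-of-N sketch of test-time compute in Python. The `generate_candidate` and `score_answer` functions are hypothetical stand-ins for a real model call and a real verifier or reward model; the point is that answer quality comes from extra inference-time sampling and selection, not from new training data.

```python
import random

def generate_candidate(prompt: str, temperature: float) -> str:
    # Hypothetical stand-in for sampling one answer from an LLM.
    # (temperature is accepted but unused in this stub.)
    return f"candidate {random.randint(0, 9999)} for {prompt!r}"

def score_answer(prompt: str, answer: str) -> float:
    # Hypothetical stand-in for a verifier or reward model that grades answers.
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    # Test-time compute in its simplest form: sample n candidates and keep
    # the one the scorer rates highest, with no change to model weights.
    candidates = [generate_candidate(prompt, temperature=0.8) for _ in range(n)]
    return max(candidates, key=lambda a: score_answer(prompt, a))

print(best_of_n("What is 17 * 24?"))
```

The same shape underlies the more sophisticated variants: spend more compute per query (more samples, longer reasoning, better selection) and quality improves without touching the training data at all.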
[Why AI is not losing steam...]
In case you missed it, here are the back-to-back releases from OpenAI’s ‘12 Days of OpenAI’ (so far):
Day 1: o1 & ChatGPT Pro (Premium features with $20-$200 plans)
Day 2: Reinforcement Fine-Tuning Program (Research applications open)
Day 3: Sora (Video generation and remixing)
Day 4: Canvas (Collaborative coding and writing)
Day 5: ChatGPT in Apple Intelligence (Ecosystem integration)
Day 6: Advanced Voice & Santa Mode (Voice+video and festive fun)
Day 7: Projects in ChatGPT (Organize and manage projects)
Day 8: ChatGPT Search (Real-time web answers with links)
Some of these were expected, and others were a complete surprise to me, like the Reinforcement Fine-Tuning Program. It’s a big development because it lets developers shape model behavior through iterative feedback loops rather than being stuck with static datasets. Traditional fine-tuning adjusts model parameters against fixed training examples; reinforcement fine-tuning instead incorporates evaluative signals, like user feedback or predefined reward criteria, directly into the training process. This makes it possible to optimize a model’s responses toward desired outcomes more dynamically, improving its ability to handle complex tasks, follow specific instructions, and maintain quality and alignment over time! A rough sketch of the idea follows below.
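To make the contrast concrete, here is a minimal REINFORCE-style training loop in Python/PyTorch. This is a sketch of the general idea only, not OpenAI’s actual RFT API; the toy linear "policy" and the `reward_fn` grader are hypothetical stand-ins. The key difference from supervised fine-tuning is in the loss: the update weights the model’s own sampled output by a reward score instead of imitating a fixed target.

```python
import torch

# Toy "policy": maps an encoded prompt to logits over a small vocabulary.
# (Hypothetical stand-in for a real language model.)
VOCAB, HIDDEN = 100, 32
model = torch.nn.Linear(HIDDEN, VOCAB)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def reward_fn(token: int) -> float:
    # Hypothetical grader: a real setup would score a full model response
    # against task-specific criteria (correctness, format, safety, ...).
    return 1.0 if token % 2 == 0 else 0.0

for step in range(200):
    state = torch.randn(HIDDEN)                    # stand-in for an encoded prompt
    dist = torch.distributions.Categorical(logits=model(state))
    token = dist.sample()                          # the model "responds"
    reward = reward_fn(int(token))                 # evaluative signal, not a fixed label
    loss = -dist.log_prob(token) * reward          # push probability toward rewarded outputs
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Swap in a real model and a task-specific grader, and this same loop shape is what lets behavior be steered by feedback rather than by a static dataset.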
[12 Days of OpenAI]
Google has launched Gemini 2.0, described in its announcement blog post as an “AI model for the agentic era.” The model offers advanced multimodal capabilities and native tool use (e.g., search), and an experimental version called Gemini 2.0 Flash is now available to developers, reportedly at double the speed of its predecessor. Perhaps most impressive, Google AI Studio introduces a screen-sharing capability that lets users start a live video session and receive real-time assistance with anything on their screen. Google also announced it is testing browser control for tasks like collecting contact information from web pages via a chat-based control panel.
[Introducing Gemini 2.0: our new AI model for the agentic era]
In a big development for spatial intelligence, World Labs has introduced an AI system capable of generating interactive 3D worlds from a single 2D image. The company raised $230M in September 2024 and is already showcasing impressive demos. Unlike conventional tools that produce static visuals, this technology lets users fully explore a scene, peering around corners and examining details in real time. Early demos show how the tool could transform creative workflows for artists, filmmakers, and game developers, offering unprecedented control and fidelity in digital environments. Check out the impressive demo here:
[3D AI worlds coming soon]
Liquid AI has raised $250 million in a Series A round led by AMD Ventures to advance its Liquid Foundation Models, lightweight general-purpose models built on a non-transformer architecture. The company says it will use the funds to expand its computing infrastructure, expedite product readiness for edge and on-premise deployment and fine-tuning, and bring its solution to a broader audience.
[We raised $250M to scale capable and efficient general-purpose AI]