
A Big Week for AI: Meta's New SOTA Model, UBI Study, GPT-4o Mini + Free Finetuning, and Voice Standards

OpenAI, Meta, Daily, UBI, Sam Altman

It has been another big week in AI. Most notably, the competition between open- and closed-source models has shifted: Meta's latest release, Llama 3.1 405B, sets a new state of the art for open models, with enhanced reasoning and multimodal capabilities, dramatically upgrading the toolkit available to AI startups and developers. But that's not all: today's edition also covers the broader implications of advancing AI, with the release of results from the Sam Altman-backed UBI (universal basic income) study. OpenAI has introduced GPT-4o mini, which it says is smarter and 60% cheaper than GPT-3.5 Turbo, and has also launched free fine-tuning for GPT-4o mini through September. Finally, Daily released an open standard for Real-Time Voice and Video Inference (RTVI-AI). These developments matter because they make cutting-edge AI technology more accessible and affordable, while the UBI results hint at the social changes that could follow. Based on today's updates, it isn't crazy to imagine a world where your next dentist's appointment is booked by speaking to an AI agent, and where people work less while receiving a steady stream of money each month.

Meta has introduced Llama 3.1, a new set of foundation models designed to rival leading closed-source models across a range of tasks. These models, including a 405 billion parameter version, boast enhanced reasoning capabilities and a larger 128,000-token context window, along with multimodal features for image and video processing. Key to their performance is improved data quality and scale, with training conducted on a diverse, high-quality dataset of 15 trillion multilingual tokens. Meta has made the models publicly available, in both pre-trained and post-trained versions, to foster innovation in the research community and promote the responsible development of artificial general intelligence (AGI). [fb.me] Share this story by email

OpenAI has launched GPT-4o mini, which it says is smarter and 60% cheaper than GPT-3.5 Turbo, and has also launched fine-tuning for the model. GPT-4o mini excels at reasoning, math, coding, and multimodal tasks, outperforming GPT-3.5 Turbo and other small models on several key benchmarks. OpenAI also mentioned in a separate release that it will offer free fine-tuning for the model, with the first 2 million training tokens per day free until September 23! These small, cheap models matter for highly repetitive tasks that don't need a larger or more expensive model (and because they use significantly less power, they are much better for the wallet and the planet!) [openai.com] Share this story by email
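For readers who want to try the free fine-tuning window, a minimal sketch of preparing training data is below. OpenAI's fine-tuning endpoint expects a JSONL file of chat-format examples; the file name, example content, and dated model identifier here are illustrative assumptions, not part of the announcement.

```python
import json

# Each fine-tuning example is one JSON object per line (JSONL),
# holding a complete chat exchange in the messages format.
# The content below is purely illustrative.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a concise support assistant."},
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Open Settings > Security and choose Reset password."},
        ]
    },
]

# Serialize to JSONL: one example per line.
jsonl = "\n".join(json.dumps(e) for e in examples)

with open("train.jsonl", "w") as f:
    f.write(jsonl)

# With the file uploaded, a fine-tuning job can then be created via the
# official Python client (requires an API key; shown for reference only):
#
#   from openai import OpenAI
#   client = OpenAI()
#   file = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
#   client.fine_tuning.jobs.create(
#       training_file=file.id,
#       model="gpt-4o-mini-2024-07-18",
#   )
```

Tokens in the training file count toward the free daily 2-million-token allowance, so small, focused datasets like this fit comfortably within it.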

Daily has launched an open standard for Real-Time Voice and Video Inference (RTVI-AI) along with open-source JavaScript and React SDKs, with iOS and Android SDKs coming soon. According to the release, RTVI-AI defines how client applications communicate with inference services, enabling use cases like voice chat with LLMs, enterprise voice workflows, video avatars, voice-driven user interfaces, and high-framerate image generation. The demo runs Llama 3.1 on @GroqInc and achieves impressive 500ms voice-to-voice response times (comparable to real-life conversation!), showing how far the frontier of live voice-agent tech has come in a short time. [github.com] Share this story by email

A study by OpenResearch, with backing from Sam Altman, examined the effects of giving $1,000 per month to low-income individuals, exploring the impact on spending, agency, employment, health, and mobility. The summary of findings states: "The program resulted in a 2.0 percentage point decrease in labor market participation for participants and a 1.3-1.4 hour per week reduction in labor hours, with participants’ partners reducing their hours worked by a comparable amount. The transfer generated the largest increases in time spent on leisure, as well as smaller increases in time spent in other activities such as transportation and finances. Despite asking detailed questions about amenities, we find no impact on quality of employment, and our confidence intervals can rule out even small improvements. We observe no significant effects on investments in human capital, though younger participants may pursue more formal education. Overall, our results suggest a moderate labor supply effect that does not appear offset by other productive activities." [openresearchlab.org] Share this story by email
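To put the quoted effect sizes in context, a quick back-of-the-envelope calculation helps; the 40-hour reference work week used below is an assumption for illustration, not a figure from the study.

```python
# Annual value of the transfer: $1,000 per month.
monthly_transfer = 1_000
annual_transfer = monthly_transfer * 12  # $12,000 per year

# The reported 1.3-1.4 hour/week reduction in labor hours,
# expressed as a share of an assumed 40-hour work week.
reference_week = 40
low_pct = 1.3 / reference_week * 100   # 3.25%
high_pct = 1.4 / reference_week * 100  # 3.5%

print(annual_transfer)          # 12000
print(low_pct, high_pct)        # 3.25 3.5
```

Seen this way, a roughly 3-3.5% reduction in weekly hours against a $12,000 annual transfer is consistent with the study's own characterization of a "moderate labor supply effect."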