
Mira Murati's Thinking Machines unveils 'interaction models' for real-time AI collaboration

The Verge · 3 h ago
Abstract data network visualisation with connecting nodes
Photo: Google DeepMind / Pexels

Thinking Machines, the AI company founded by former OpenAI Chief Technology Officer Mira Murati, has announced that it is working on what it calls "interaction models." Murati served as OpenAI's CTO for several years before leaving in late 2024 to start the company, which made its first major technical announcement on Monday.

By Thinking Machines' definition, interaction models will let people "collaborate with AI the way we naturally collaborate with each other — they continuously take in audio, video, and text, and think, respond, and act in real time." The company described the key difference from traditional models as continuous awareness rather than discrete request-response.

Traditional large language models (LLMs) wait until a user finishes typing a message or until a conversation ends. Thinking Machines wrote that today's models "experience reality in a single thread. Until the user finishes typing or speaking, the model waits with no perception of what the user is doing or how the user is doing it."

Interaction models propose a different architecture. As described by Murati's team, the models access data streams in real time, monitor and evaluate those streams continuously, and plan responses collaboratively with the user. The approach is designed to give Thinking Machines an advantage in voice- and video-based use cases.
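Thinking Machines has not released any code, so the loop below is only a toy illustration of the "continuous awareness" idea the company describes: every incoming event (audio, video, or text chunk) updates the model's state the moment it arrives, and the agent may act between user turns instead of waiting for a finished request. All names (`StreamEvent`, `ContinuousAgent`, the `_should_act` policy) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class StreamEvent:
    t: float       # timestamp in seconds
    channel: str   # "audio", "video", or "text"
    payload: str   # simplified: content represented as text

@dataclass
class ContinuousAgent:
    """Toy sketch of a continuously-aware agent: perception never
    pauses, and actions can be taken mid-conversation."""
    context: list = field(default_factory=list)
    actions: list = field(default_factory=list)

    def observe(self, event: StreamEvent) -> None:
        self.context.append(event)  # state updates on every event
        if self._should_act(event):
            self.actions.append(f"note@{event.t}: {event.payload}")

    def _should_act(self, event: StreamEvent) -> bool:
        # Hypothetical policy: interject when a decision is voiced.
        return event.channel == "text" and "decide" in event.payload.lower()

stream = [
    StreamEvent(0.0, "audio", "meeting starts"),
    StreamEvent(1.5, "text", "we decided to ship Friday"),
    StreamEvent(2.0, "video", "speaker gestures at slide"),
]

agent = ContinuousAgent()
for ev in stream:
    agent.observe(ev)   # processed as it arrives, no end-of-turn wait
```

In a request-response model, nothing in `stream` would reach the model until the user explicitly submitted a prompt; here the agent captures the decision at t=1.5 while the meeting is still in progress.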

Use cases listed include real-time note-taking and summarisation in professional meetings, one-on-one tutoring in education settings, real-time clinical decision support in healthcare, and creative collaboration. The company said interaction models will likely be made available initially through its own applications rather than through an API.

Murati's team includes several notable former OpenAI staff. Former OpenAI researchers John Schulman, Lukasz Kaiser and Jonathan Lachman are among the company's founding team. The company closed a Series A round in September 2024 at a valuation of about $11.2 billion. Investors included Andreessen Horowitz, Sequoia and Goldman Sachs.

Thinking Machines' closest competitors include OpenAI, Anthropic and Google DeepMind, all of which are working on real-time multi-modality. OpenAI's "Realtime" API, announced in late 2025, offers parallel processing of audio and text. What distinguishes Murati's team's approach, by its own account, is combining those capabilities within a single architectural framework rather than bolting real-time features onto a request-response model.

The company has not yet published a technical paper detailing the architecture. A first preview release for users and developers is scheduled for summer 2026, and the company said developer documentation will centre on a "continuous streaming API," a compute paradigm distinct from existing request-response interfaces.
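Since no documentation exists yet, the contrast between the two paradigms can only be sketched in hypothetical form. The first function below shows today's blocking pattern; the second shows the rough shape a continuous streaming interface might take, where the caller feeds chunks as they occur and the model may emit output at any point, interleaved with input. The function names and the `toy_step` model are illustrative assumptions, not the announced API.

```python
# Request-response (today's pattern): one blocking call per finished turn.
def request_response(model, prompt: str) -> str:
    return model(prompt)  # model sees nothing until the prompt is complete

# Continuous streaming (hypothetical shape): incremental state updates,
# with output allowed mid-stream rather than only at turn boundaries.
def continuous_stream(model_step, chunks):
    state = None
    for chunk in chunks:
        state, output = model_step(state, chunk)  # update per chunk
        if output is not None:
            yield output  # the model can respond before input ends

# Toy stand-in for a model step: accumulate text, answer on a question.
def toy_step(state, chunk):
    state = (state or "") + chunk
    if chunk.endswith("?"):
        return state, f"answering: {state}"
    return state, None

outputs = list(continuous_stream(toy_step, ["how ", "are ", "you?"]))
```

The key design difference is that `continuous_stream` is a generator over an open-ended input, so latency is bounded per chunk rather than per conversation turn.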

Murati acknowledged in her statement that interaction models raise new ethical and privacy questions. Continuous audio and video intake requires new standards for handling and storing user data. The company said it has developed an on-device processing layer that keeps user data local, calling that approach critical for privacy.

The announcement points to a notable sub-plot in the AI sector: the 2025-2026 window is being read as the start of a shift from "agent" and "reasoning" models toward "continuous interaction" approaches. Thinking Machines' work may provide an early indicator of whether the industry experiences a broader directional change in the months ahead.

This article is an AI-curated summary based on The Verge. The illustration is a stock photo by Google DeepMind from Pexels.