“We have extraordinarily capable AI that lives in chat windows,” says John Lunsford, founder of the automation startup Tethral. Meanwhile, the environments where people actually need things to happen remain a patchwork of disconnected systems that AI struggles to reach across.

Personal AI assistants have reached a striking level of capability in isolation. They can plan your week, draft your strategy document, and summarize your inbox. But ask any of them to coordinate an actual outcome across your physical and digital life (a morning routine that adjusts your thermostat, queues a briefing, starts the coffee, and blocks focus time on your calendar) and they hit a wall. They can reason about what you need. They cannot orchestrate it across the ecosystems where your life actually happens.

This is not a minor gap. McKinsey projects $3 to $5 trillion in agentic commerce by 2030. Google, Anthropic, and the Linux Foundation are racing to build agent-to-agent and agent-to-tool protocols (A2A, MCP, UCP). But all of these efforts assume a coordination layer between AI reasoning and real-world execution that has not yet been established at scale.

Lunsford has been building that layer.

The orchestration layer between AI and the physical world

Tethral is an orchestration platform that connects AI to the physical and digital environments people move through daily. Rather than building another assistant, the company builds the coordination infrastructure that lets AI assistants actually do things: execute across smart devices, bridge calendar and task systems, and translate high-level intent into actions that span brands, protocols, and environments.

The technical distinction matters. Today's AI assistants are conversational interfaces with tool-calling capabilities. They can access individual APIs. What they often cannot do is orchestrate across them to produce a coherent real-world outcome. Tethral operates at that layer, sitting between user intent and the fragmented ecosystem of devices, services, and platforms that constitute modern daily life.

Users do not configure automations. They express intent. "Focus," "wind down," "prep for tomorrow's 9 AM." The platform interprets that intent and coordinates the response across whatever is relevant, adjusting lighting, managing notifications, setting climate, and blocking calendar time as a single coherent action rather than a sequence of manual steps across different apps. A task set on a phone during a commute executes at home. A routine configured once adapts to context, time, and location.
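In rough outline, that model inverts the usual automation setup: instead of the user wiring individual triggers, a single intent fans out to actions across otherwise disconnected systems. The sketch below illustrates the idea only; the intent names, action targets, and function names are hypothetical and are not Tethral's actual API.

```python
from dataclasses import dataclass

@dataclass
class Action:
    target: str   # e.g. "lights", "calendar" (illustrative names)
    command: str  # e.g. "dim", "block_next_hour"

# A single high-level intent maps to a coordinated batch of actions
# spanning devices and services. This table is purely hypothetical.
INTENT_MAP = {
    "focus": [
        Action("lights", "dim"),
        Action("notifications", "mute"),
        Action("calendar", "block_next_hour"),
    ],
    "wind down": [
        Action("thermostat", "lower"),
        Action("lights", "warm"),
        Action("notifications", "mute"),
    ],
}

def orchestrate(intent: str) -> list[Action]:
    """Resolve an intent into one coherent batch of actions,
    rather than a sequence of manual steps across apps."""
    return INTENT_MAP.get(intent.lower().strip(), [])

if __name__ == "__main__":
    targets = [a.target for a in orchestrate("Focus")]
    print(targets)  # ['lights', 'notifications', 'calendar']
```

The point of the sketch is the shape of the interface, not the table itself: the user supplies one word of intent, and the coordination layer, not the user, owns the fan-out across systems.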

Voice is the primary interface, not because it is novel but because it is practical in environments where hands are occupied. The orchestration is designed to work regardless of entry point.

Designed for user authority

Tethral's architecture is built around a principle Lunsford calls user authority: the person in the environment decides what connects, what coordinates, and on what terms. The coordination layer serves the user's intent rather than optimizing for platform engagement or ecosystem retention.

"Every ecosystem that controls your environment also has a business model that depends on keeping you inside it," Lunsford says. "We built Tethral so the orchestration layer is loyal to the person, not the platform."

That positioning aligns with a broader regulatory shift. The EU Data Act and growing U.S. scrutiny of AI assistant practices are creating demand for systems where the user holds authority over how their environment behaves.

Why coordination is the bottleneck, not intelligence

Lunsford's background shapes the thesis. He holds a PhD from Cornell, with fellowships at MIT and Oxford, focused on how autonomous systems are adopted by society. He previously led AI and safety research at a major tech company.

That work led to a conviction that the bottleneck in making AI useful is not intelligence but the absence of coordination between AI capability and fragmented real-world environments. Reasoning does not adjust a thermostat. Planning does not block a calendar. Generating does not start a coffee machine. Something has to bridge the gap between what AI knows and what the world does.

"Your calendar does not talk to your home," Lunsford says. "Your task manager does not know what room you are in. Your AI assistant can tell you what you should do but cannot make any of it happen. The coordination layer is missing across the board."

From homes to everywhere AI needs to act

Lunsford reports that Tethral went from concept to CES launch in twelve months, building an initial user base on minimal funding. The platform currently orchestrates across smart home ecosystems, with its architecture designed to extend into calendar, task management, and location-aware coordination as the product matures.

The home is where the company is building and testing, because it is the environment where the coordination gap between autonomous systems is most tangible and most personal. But Lunsford sees it as the starting point, not the destination. The same orchestration principles (intent-based, device-agnostic, loyal to the user rather than the platform) apply wherever AI needs to produce an outcome that exists outside a chat window.

The $3 to $5 trillion agentic economy that McKinsey projects will not run on intelligence alone. It will run on coordination. Tethral is betting that the company building that coordination layer from the ground up, starting where people actually live, will have a structural advantage when every AI assistant in the world needs one.


VentureBeat newsroom and editorial staff were not involved in the creation of this content.