AI agents can connect to one another, but they cannot think together. That's a huge difference and a bottleneck for next-gen systems, says Vijoy Pandey, SVP and GM of Outshift by Cisco.

As he describes the current state of AI: Agents can be stitched together in a workflow or plug into a supervisor model — but there's no semantic alignment, no shared context. They're essentially working from scratch each time.

This calls for next-level infrastructure, or what Pandey describes as the "internet of cognition." 

“Agents are not able to think together because connection is not cognition,” he said. “We need to get to a point where you are sharing cognition. That is the greater unlock.”

Creating new protocols to support next-gen agent communication

So what is shared cognition? It's when AI agents or entities can meaningfully work together to solve something genuinely new that they weren't trained for, and do it "100% without human intervention," Pandey said on the latest episode of Beyond the Pilot.

The Cisco exec analogizes it to human intelligence. Humans evolved over hundreds of thousands of years, first becoming intelligent individually, then communicating on a basic level (with gestures or drawings). That communication improved over time, eventually unlocking a ‘cognitive revolution’ and collective intelligence that allowed for shared intent and the ability to coordinate, negotiate, and ground and discover information. 

“Shared intent, shared context, collective innovation: That's the exact trajectory that's playing out in silicon today,” Pandey said. 

His team sees it as a “horizontal distributed assistance problem.” They are pursuing “distributed super intelligence” by codifying intent, context, and collective innovation as a set of rules, APIs, and capabilities within the infrastructure itself. 

Their approach is a set of new protocols: Semantic State Transfer Protocol (SSTP); Latent Space Transfer Protocol (LSTP); and Compressed State Transfer Protocol (CSTP). 

SSTP operates at the language level, analyzing semantic communication so systems can infer the right tool or task. Pandey's team recently collaborated with MIT on a related piece called the Ripple Effect Protocol.
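SSTP's design has not been published in detail, but the core idea — inferring the right tool from the semantic content of a request — can be sketched in a few lines. Everything below is an illustrative assumption, not the actual protocol: the matching here is a naive word-overlap score, and the tool names and descriptions are hypothetical.

```python
# Illustrative sketch only: SSTP's real mechanics are not public. This shows
# the general idea of semantic tool inference, using naive word overlap as a
# stand-in for a proper semantic similarity measure.
def score(request: str, description: str) -> int:
    """Count words shared between a request and a tool description."""
    return len(set(request.lower().split()) & set(description.lower().split()))

def infer_tool(request: str, tools: dict) -> str:
    """Pick the tool whose description best matches the request."""
    return max(tools, key=lambda name: score(request, tools[name]))

tools = {  # hypothetical tool registry
    "deploy_cluster": "create and deploy a kubernetes cluster",
    "spin_up_instance": "spin up a new ec2 compute instance",
}
print(infer_tool("please deploy a new kubernetes cluster", tools))
# -> deploy_cluster
```

A production system would swap the word-overlap score for embedding similarity, but the routing shape — score every tool, take the best match — stays the same.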

LSTP can be used to transfer the “entire latent space” of one agent to another, Pandey explained. “Can we just take the KV cache and send it over as an example?” he said. “Because that would be the most efficient way: instead of going through the tax of tokenizing it, going to a natural language, then going back the stack on the other side.” 
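The efficiency argument Pandey is making can be sketched concretely. LSTP itself is not publicly specified, so the snippet below is only an illustration of the concept: ship an agent's latent state (here, a stand-in KV cache built from plain Python lists) as serialized bytes, rather than decoding it to natural language and re-encoding it on the other side.

```python
# Illustrative sketch: LSTP is not publicly specified. This shows the idea of
# transferring latent state directly, avoiding the "tokenization tax" of
# decoding to text and re-encoding on the receiving agent.
import pickle

def send_via_lstp(kv_cache):
    """Serialize the latent state and ship the bytes as-is."""
    wire_bytes = pickle.dumps(kv_cache)   # what would go over the network
    return pickle.loads(wire_bytes)       # receiver restores the exact state

# A stand-in KV cache: per-layer (key, value) float vectors.
kv = [([0.1, 0.2], [0.3, 0.4]), ([0.5, 0.6], [0.7, 0.8])]
assert send_via_lstp(kv) == kv  # state survives the transfer bit-for-bit
```

The text path, by contrast, is lossy by construction: decoding a latent state to language discards everything the vocabulary cannot express, and the receiving model must rebuild its own internal state from scratch.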

CSTP handles compression — grounding only the targeted variants while compressing everything else. Pandey says it's particularly well-suited for edge deployments where you need to send large amounts of state accurately.
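The selective-fidelity idea behind CSTP can also be sketched, with the caveat that the protocol's actual design is not public. The assumption illustrated here: a small set of fields the receiver must ground exactly travels verbatim, while the bulk of the state is compressed for the trip to the edge. The field names are hypothetical.

```python
# Illustrative sketch: CSTP's design is not public. The idea shown is
# selective fidelity — keep the fields the receiver must ground exactly,
# compress the bulk of the state for transfer to an edge device.
import json
import zlib

def pack(state: dict, grounded_keys: set) -> dict:
    """Split state into verbatim grounded fields and a compressed remainder."""
    grounded = {k: state[k] for k in grounded_keys}
    rest = {k: v for k, v in state.items() if k not in grounded_keys}
    return {"grounded": grounded,
            "compressed": zlib.compress(json.dumps(rest).encode())}

def unpack(msg: dict) -> dict:
    """Restore the full state on the receiving edge node."""
    rest = json.loads(zlib.decompress(msg["compressed"]))
    return {**msg["grounded"], **rest}

state = {"target": "cluster-7", "history": ["step"] * 1000}
msg = pack(state, {"target"})
assert unpack(msg) == state
assert len(msg["compressed"]) < len(json.dumps(state))  # smaller on the wire
```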

Ultimately, Pandey’s team is building a fabric to scale out intelligence and ensure that cognition states are synchronized across endpoints. Further, they are developing what they call “cognition engines” that provide guardrails and accelerate systems. 

“Protocols, fabric, cognition engines: These are the three layers that we are building out in the pursuit of distributed super intelligence,” Pandey said. 

How Cisco solved a big pain point

Stepping back from these advanced, next-level systems, Cisco has achieved tangible results with existing AI capabilities. Pandey described a specific pain point with the company’s site reliability engineering (SRE) team. 

While the company was churning out more and more products and code, the team itself wasn't growing, and it was feeling pressure to improve efficiency. Pandey introduced AI agents that automated more than a dozen end-to-end workflows, including continuous integration/continuous delivery (CI/CD) pipelines, EC2 instance spin-ups, and Kubernetes cluster deployments.

Now, more than 20 agents — some built in-house, some third-party — have access to 100-plus tools via frameworks like Model Context Protocol (MCP), while also plugging into Cisco’s security platforms. 
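The setup described above — many agents sharing access to many tools through a common framework — can be sketched in miniature. This is not the real MCP SDK; it is a minimal registry in the spirit of Model Context Protocol, and the tool name used is hypothetical.

```python
# Illustrative sketch (not the real MCP SDK): a minimal registry where any
# agent can discover and call tools through one uniform interface.
class ToolRegistry:
    def __init__(self):
        self._tools = {}

    def register(self, name, fn, description=""):
        """Expose a tool to every connected agent."""
        self._tools[name] = {"fn": fn, "description": description}

    def list_tools(self):
        """Discovery: agents enumerate available tools and descriptions."""
        return {n: t["description"] for n, t in self._tools.items()}

    def call(self, name, **kwargs):
        """Invocation: agents call any registered tool by name."""
        return self._tools[name]["fn"](**kwargs)

registry = ToolRegistry()
registry.register("restart_pod", lambda pod: f"restarted {pod}",
                  "restart a kubernetes pod")
print(registry.call("restart_pod", pod="web-42"))  # -> restarted web-42
```

The point of the uniform interface is that the 21st agent or the 101st tool plugs in without bespoke glue code — which is what lets a fleet this size stay maintainable.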

The result: Certain deployments dropped from "hours and hours" to seconds, and agents have cut the issues the SRE team was seeing in Kubernetes workflows by 80%.

Still, as Pandey noted, AI is a tool like any other. “It does not mean that I have a new hammer and I'm just gonna go around looking for nails,” he said. “You still have deterministic code. You need to marry these two worlds to get the best outcome for the problem that you're solving.”

Listen to the podcast to hear more about: 

  • How we are now enabling a new paradigm of non-deterministic computing. 

  • How Cisco bumped error detection capabilities in large networks from 10% to 100%. 

  • How Pandey named his own AI agent "Arnold Layne" after an early Pink Floyd song.

  • Why the "internet of cognition" must be an open, interoperable effort. 

  • How Cisco’s open source project Agntcy addresses discovery, identity and access management (IAM), observability, and evaluation.

You can also listen and subscribe to Beyond the Pilot on Spotify, Apple or wherever you get your podcasts.