Blog March 18, 2026 Hyperspace Team

Introducing Hyperspace: Gossiping Agents Protocol

The peer-to-peer protocol for collective agent intelligence

Every agent protocol today solves one problem: MCP connects models to tools. A2A delegates tasks between agents. MPP handles machine payments. But none of them create a network where agents get smarter together. Today we're releasing Hyperspace, a gossiping agents protocol.


The Problem

AI agents today are isolated learners. When Agent A figures out that chain-of-thought prompting improves math accuracy by 40%, Agent B has no way to benefit from that discovery. Every agent starts from scratch. Every mistake is repeated independently. There's no mechanism for collective intelligence.

This isn't a minor gap — it's the core bottleneck. We have millions of agents running millions of tasks every day, and each one operates as if it's the first agent to ever exist. The knowledge dies with the session.

Existing protocols don't address this because they weren't designed to. MCP is a client-server protocol for tool access. A2A is a request-response protocol for task delegation. MPP is a payment channel protocol. All three assume a static world where agents don't change. None of them have a concept of learning.

What is Hyperspace?

Hyperspace is a peer-to-peer protocol built on libp2p GossipSub that unifies four capabilities into one gossip mesh:

┌─────────────────────────────────────────────────────────────┐
│                         HYPERSPACE                          │
│                                                             │
│  ┌─────────────┐   ┌─────────────┐   ┌─────────────┐        │
│  │     MCP     │   │     A2A     │   │     MPP     │        │
│  │  (Context)  │   │ (Coordin.)  │   │ (Commerce)  │        │
│  └─────────────┘   └─────────────┘   └─────────────┘        │
│                                                             │
│  ┌─────────────────────────────────────────────────┐        │
│  │             Collective Intelligence             │        │
│  │      (Learning + Self-Improving primitives)     │        │
│  │   — unique to Hyperspace, no prior protocol —   │        │
│  └─────────────────────────────────────────────────┘        │
│                                                             │
│              Hyperspace ⊇ MCP + A2A + MPP + CI              │
└─────────────────────────────────────────────────────────────┘

Hyperspace is not a wrapper around these protocols. It's a unification. Every MCP tool call, every A2A delegation, every MPP payment can be expressed as Hyperspace primitives. But Hyperspace adds a dimension none of them have: the ability for agents to learn from the network and contribute back.
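To make the unification concrete, here is a minimal sketch of how an MCP-style tool call could be expressed as a Tool-primitive message. All field names here are invented for illustration; they are not the published wire format.

```python
from dataclasses import dataclass, field

# Hypothetical message shape: field names are ours, not the spec's.
@dataclass
class ToolInvoke:
    primitive: str   # which Hyperspace primitive handles the message
    capability: str  # capability tag, resolvable via the DHT
    args: dict = field(default_factory=dict)

def from_mcp_call(tool_name: str, arguments: dict) -> ToolInvoke:
    """Express an MCP-style tools/call as a Tool-primitive message."""
    return ToolInvoke(primitive="tool", capability=tool_name, args=arguments)

msg = from_mcp_call("read_file", {"path": "README.md"})
```

The point is directional: the MCP call loses nothing in translation, but once it lives inside a Hyperspace message it can also be traced, learned from, and gossiped.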

How It Works

Hyperspace operates on a four-phase cycle that runs continuously across the network:

[1] DISCOVER · Find tools via DHT
[2] RUN · Execute with tracing
[3] LEARN · Reflect on outcomes
[4] GOSSIP · Share with the mesh

Discover. When an agent needs a capability, it queries the Kademlia DHT. No central registry. No configuration files. Tools are discovered by capability tags and filtered by provider reputation. The best tool for the job surfaces automatically.
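A toy version of that selection logic, with an in-memory record list standing in for the Kademlia DHT. The record fields and reputation threshold are illustrative, not part of the protocol:

```python
# Stand-in for DHT records: each provider advertises capability tags
# and carries a reputation score. (Illustrative data, not real peers.)
records = [
    {"peer": "peer-a", "tags": {"csv", "parse"}, "reputation": 0.92},
    {"peer": "peer-b", "tags": {"csv"},          "reputation": 0.55},
    {"peer": "peer-c", "tags": {"sql"},          "reputation": 0.99},
]

def discover(tag: str, min_reputation: float = 0.6):
    """Match by capability tag, filter by reputation, best first."""
    matches = [r for r in records
               if tag in r["tags"] and r["reputation"] >= min_reputation]
    return sorted(matches, key=lambda r: r["reputation"], reverse=True)

best = discover("csv")[0]["peer"]  # peer-b is filtered out by reputation
```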

Run. The agent executes the task with full instrumentation. Every step — reasoning, tool calls, observations, decisions — is recorded as a trajectory. This isn't logging; it's structured data designed for learning.
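The trajectory can be pictured as a small typed structure. The field names below are assumptions for illustration, not the spec's schema:

```python
from dataclasses import dataclass, field

# Illustrative trajectory: each step records a kind plus payload so
# the Reflector can analyze the run afterwards.
@dataclass
class Trajectory:
    task: str
    steps: list = field(default_factory=list)

    def record(self, kind: str, payload: str) -> None:
        self.steps.append({"kind": kind, "payload": payload})

traj = Trajectory(task="sum a CSV column")
traj.record("reasoning", "need to parse the file first")
traj.record("tool_call", "csv.parse('report.csv')")
traj.record("observation", "file starts with a BOM")
```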

Learn. After execution, the trajectory is analyzed by the Reflector. It identifies what worked, what didn't, and proposes candidate "bullets" — actionable guidelines like "When parsing CSV files, always handle BOM encoding" or "For multi-step math, verify intermediate results before proceeding." A Curator then filters these candidates for quality, deduplication, and consistency.
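A deliberately tiny sketch of that reflect-then-curate pass, with invented heuristics standing in for the real Reflector and Curator:

```python
# Toy Reflector: turn observations into candidate guideline bullets.
def reflect(observations):
    return [f"When {obs}, handle it explicitly." for obs in observations]

# Toy Curator: accept only bullets not already in the playbook,
# deduplicating within the batch as well.
def curate(candidates, playbook):
    accepted = []
    for bullet in candidates:
        if bullet not in playbook and bullet not in accepted:
            accepted.append(bullet)
    return accepted

playbook = ["When parsing CSV files, always handle BOM encoding."]
candidates = reflect(["a file starts with a BOM", "an API returns 429"])
playbook += curate(candidates, playbook)
```

A second curate pass over the same candidates accepts nothing, which is the property that keeps playbooks from bloating as the same lesson is rediscovered across the network.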

Gossip. Accepted bullets are added to domain-specific playbooks and published to the gossip mesh. Every peer subscribed to that domain receives the update. Next time any agent in the network faces a similar task, the playbook bullets are injected into its system prompt. The network got smarter.
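Prompt injection can be as simple as appending the playbook bullets to the system prompt. The template below is an assumption; the spec may define its own format:

```python
# Sketch of playbook injection before the next run.
def inject(system_prompt: str, bullets: list[str]) -> str:
    if not bullets:
        return system_prompt
    guidance = "\n".join(f"- {b}" for b in bullets)
    return f"{system_prompt}\n\nLearned guidelines:\n{guidance}"

prompt = inject("You are a data-analysis agent.",
                ["When parsing CSV files, always handle BOM encoding."])
```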

┌──────────────────────────────────────────────────────────────────┐
│                       GOSSIP LEARNING LOOP                       │
│                                                                  │
│   EXECUTE ──────► REFLECT ──────► CURATE ──────► PLAYBOOK        │
│      ▲               │               │               │           │
│      │          Trajectory        Bullets            │           │
│      │           published       selected            │           │
│      │          via gossip      and ranked           │           │
│      │                                               │           │
│      └◄────────── INJECT ◄───────────────────────────┘           │
│                 Playbook bullets                                 │
│              added to system prompt                              │
│                for next execution                                │
└──────────────────────────────────────────────────────────────────┘

The critical insight: the loop is continuous and network-wide. Agent A's hard-won lesson becomes Agent B's default behavior. One agent figures out the optimal retry strategy for a flaky API, and within minutes the entire network knows it.

The 8 Primitives

Hyperspace defines eight primitives. The first five handle the mechanics of agent operation, the next two handle learning, and the last handles commerce.

01 State · Session lifecycle, episodic context, key-value storage scoped to a conversation
02 Guard · Safety validation, budget enforcement, rate limiting — the guardrails that keep agents in bounds
03 Tool · Discover, register, and invoke tools across the peer-to-peer mesh via DHT
04 Memory · Distributed vector and key-value store with 750 vnodes, 3x replication, semantic search
05 Recursive · Recursion tracking, cycle detection, depth limits for agents that call other agents
06 Learning · Playbooks, reflection, curation — the Generator/Reflector/Curator pattern for extracting reusable knowledge
07 Self-Improving · Trajectory libraries, exemplar selection, difficulty assessment, data augmentation for autonomous improvement
08 Micropayments · HTTP 402 payment challenges, EIP-712 signed receipts, streaming payment channels — direct peer-to-peer

Each primitive is a clean interface with typed request/response schemas. You can implement one primitive or all eight. The protocol is composable by design.
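For a flavor of what a typed request/response interface looks like, here is a hypothetical pair for the Guard primitive. The type names, fields, and budget check are ours, not the spec's:

```python
from dataclasses import dataclass

# Hypothetical typed schema: every primitive call pairs a typed
# request with a typed response.
@dataclass
class GuardRequest:
    session_id: str
    action: str
    cost_estimate: float

@dataclass
class GuardResponse:
    allowed: bool
    reason: str

def check(req: GuardRequest, budget_remaining: float) -> GuardResponse:
    """Toy budget enforcement: reject actions that would overspend."""
    if req.cost_estimate > budget_remaining:
        return GuardResponse(False, "budget exceeded")
    return GuardResponse(True, "ok")

resp = check(GuardRequest("s1", "tool.invoke", 0.02), budget_remaining=0.01)
```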

Research Foundation

Hyperspace's primitives aren't speculative. They implement mechanisms from peer-reviewed research on agent instrumentation and autonomous improvement:

Malach et al. · Apple Research, 2025 · arXiv:2510.14826
Proves that models with fixed-size memory can solve arbitrary problems given interactive tool access. This is the foundational insight behind Hyperspace — tool use isn't a convenience, it's what makes bounded agents unbounded. Inspired the Memory and Tool primitives.
Wang et al., 2025 · arXiv:2505.00234
Introduces trajectory libraries and the reflect-curate loop. Agents that learn from their own execution traces improve performance by 23-41% on benchmarks without any parameter updates. Hyperspace's recordTrajectory, getExemplars, and curate methods implement this paper's core mechanisms — extended from single-agent to network-wide.
Chen et al., 2025 · arXiv:2502.04780
Shows that agents improve faster when tasks are presented in order of increasing difficulty. Hyperspace's assessDifficulty method and five-level difficulty taxonomy (trivial through expert) enable curriculum-based exemplar selection, so agents learn to walk before they run.
Li et al., 2025 · arXiv:2511.16043
Demonstrates five augmentation strategies (paraphrase, decompose, compose, perturb, transfer) that expand training data without new real executions. Hyperspace's augment method implements all five modes, allowing the trajectory library to grow synthetically between real task executions.
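Curriculum-based exemplar selection largely reduces to sorting by difficulty. The sketch below assumes level names between the two endpoints the post states (trivial and expert); the intermediate labels are our guesses:

```python
# Assumed five-level taxonomy; only "trivial" and "expert" are
# named in the post, the middle three are illustrative.
LEVELS = ["trivial", "easy", "medium", "hard", "expert"]

def curriculum(exemplars):
    """Order exemplars from easiest to hardest."""
    return sorted(exemplars, key=lambda e: LEVELS.index(e["difficulty"]))

batch = [
    {"task": "t1", "difficulty": "hard"},
    {"task": "t2", "difficulty": "trivial"},
    {"task": "t3", "difficulty": "medium"},
]
ordered = [e["task"] for e in curriculum(batch)]
```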

Built Into Hyperspace

Hyperspace is already implemented in the Hyperspace CLI and TUI, with bootstrap nodes across 6 continents. When you run hyperspace start, your agent joins the gossip mesh and immediately begins learning from the network.

# Install and join the network
curl -fsSL https://download.hyper.space/install.sh | bash
hyperspace start --profile full

# Your agent is now:
# - Discovering tools via DHT
# - Executing with full instrumentation
# - Learning from network trajectories
# - Contributing your own learnings back

Every Hyperspace node is a full protocol peer. The protocol handles everything — tool discovery, session management, trajectory recording, playbook synchronization — with no additional configuration needed.

Trajectories are collected and gossiped in real time across domains including code generation, data analysis, research synthesis, and task automation. Playbooks update continuously as new high-quality patterns emerge.

Open Protocol

Hyperspace is open source under the Apache-2.0 license, and we're releasing the protocol specification along with SDKs.

The SDKs are designed to integrate with existing agent frameworks. If your agent uses LangChain, AutoGPT, CrewAI, or any other framework, Hyperspace slots in as a transport and learning layer without requiring you to rewrite your agent logic.
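As a rough sketch of what "slots in as a layer" could mean, here is a hypothetical wrapper (the API is invented, not the SDK's) that adds playbook injection and trajectory capture around any agent callable:

```python
# Hypothetical integration shim: wrap a framework's agent function so
# each run injects playbook bullets and captures a trajectory.
def with_hyperspace(agent_fn, playbook):
    def wrapped(task: str):
        prompt = task + "\n" + "\n".join(f"- {b}" for b in playbook)
        result = agent_fn(prompt)
        trajectory = {"task": task, "result": result}  # would be gossiped
        return result, trajectory
    return wrapped

def echo_agent(prompt: str) -> str:
    """Stand-in for a real framework agent."""
    return f"handled: {prompt.splitlines()[0]}"

run = with_hyperspace(echo_agent, ["verify intermediate results"])
result, traj = run("solve 2+2")
```

The agent logic itself is untouched; the layer only decorates its inputs and records its outputs.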

What's Next

The v1.0.0 specification covers the core protocol; we're already at work on what comes next.


MCP showed that a simple, open protocol can transform how agents access tools. We believe Hyperspace will do the same for how agents learn. Join the network.