API & Protocol

AMCP Surface For Shared Agent Memory

Use Nexus AMCP when your agents need remember, recall, session, export, and shared team-memory flows through one contract. On Nunchi Team, Norfolk knowledge joins the same memory surface with citations and provenance intact.

Memory Contract

What This Surface Exposes

The point is not another wrapper. The point is one stable protocol surface that agents, dashboards, and memory-backed tools can call without re-implementing memory semantics each time.

  • Hosted remember, recall, session lookup, export, import, and delete flows
  • A shared contract across Codex, Claude Code, Cursor, Windsurf, and other supported clients
  • Visibility-aware recall so private and team memory do not leak into the wrong scope
  • A protocol surface you can use directly when MCP wiring is not the right fit
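Calling the protocol surface directly is a matter of issuing authenticated HTTP requests against the hosted gateway. A minimal sketch of building a recall request follows; the endpoint path matches the reference routes on this page, but the header scheme and body field names (`query`, `visibility`, `limit`) are assumptions about the payload shape, not a documented schema.

```python
import json

GATEWAY = "https://gateway.nunchiai.com"

def build_recall_request(api_key: str, query: str,
                         visibility: str = "private", limit: int = 5):
    """Return (url, headers, body) for a POST to /v1/amcp/recall."""
    url = f"{GATEWAY}/v1/amcp/recall"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"query": query, "visibility": visibility, "limit": limit})
    return url, headers, body

# Sending is left to your HTTP client of choice, e.g.:
#   requests.post(url, headers=headers, data=body)
url, headers, body = build_recall_request("sk_write_your_key", "deploy checklist")
```

Because visibility is part of the request, a client can keep private and team recall separate at the call site rather than filtering after the fact.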

Reference Client

Start From The Working Path

The reference MCP package and the hosted gateway share the same memory contract, so you can verify the real behavior before building your own client.

Reference install

npx @nunchiai/nexus-mcp init --client codex --key sk_write_your_key --gateway-url https://gateway.nunchiai.com --yes

Reference Routes

Core AMCP Routes

These are the reference routes exposed today through the hosted AMCP surface.

POST /v1/amcp/remember
POST /v1/amcp/recall
GET /v1/amcp/sessions
POST /v1/amcp/export
POST /v1/amcp/import
DELETE /v1/amcp/memories/:id
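The route set above is small enough to carry as a table in a thin client. The sketch below mirrors those routes verbatim and adds an illustrative resolver that substitutes path parameters such as `:id`; the helper itself is an assumption about how you might structure a client, not part of the contract.

```python
# Thin route table for the core AMCP surface, taken from the
# reference routes on this page.
AMCP_ROUTES = {
    "remember": ("POST",   "/v1/amcp/remember"),
    "recall":   ("POST",   "/v1/amcp/recall"),
    "sessions": ("GET",    "/v1/amcp/sessions"),
    "export":   ("POST",   "/v1/amcp/export"),
    "import":   ("POST",   "/v1/amcp/import"),
    "delete":   ("DELETE", "/v1/amcp/memories/:id"),
}

def resolve(op: str, **params: str) -> tuple[str, str]:
    """Return (method, concrete path) with :name parameters filled in."""
    method, path = AMCP_ROUTES[op]
    for name, value in params.items():
        path = path.replace(f":{name}", value)
    return method, path
```

For example, `resolve("delete", id="mem_123")` yields `("DELETE", "/v1/amcp/memories/mem_123")`, ready to hand to whatever HTTP client your agent already uses.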

Norfolk Layer

What Norfolk Adds On Nunchi Team

Team recall is not just chat memory. Norfolk brings the document-backed side of context into the same surface.

  • Citation links so derived outputs can point back to the atoms they used
  • Provenance tiers so agent-generated memory can be filtered from source material
  • Visibility scope for private, team, and team-readonly knowledge
  • Source type metadata so clients can tell whether recall came from Norfolk, Nexus, or synthesis
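The provenance and source-type metadata above exist so clients can filter on the reading side. A sketch of one such filter follows, assuming each recall result carries `provenance` and `source_type` fields with values like `"source"`, `"agent"`, `"norfolk"`, and `"nexus"`; the field names and values are illustrative, not a documented response schema.

```python
# Client-side filter over recall results using the Norfolk metadata
# described above. Field names and tier values are assumptions about
# the payload shape.
def keep_source_material(results: list[dict]) -> list[dict]:
    """Drop agent-generated memory, keeping source-backed atoms."""
    return [
        r for r in results
        if r.get("provenance") == "source"
        and r.get("source_type") in {"norfolk", "nexus"}
    ]

results = [
    {"id": "a1", "provenance": "source", "source_type": "norfolk"},
    {"id": "a2", "provenance": "agent",  "source_type": "synthesis"},
]
filtered = keep_source_material(results)
```

Here only `a1` survives, so a derived output can cite document-backed atoms without mixing in earlier agent synthesis.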

Next Step

Open The Hosted Memory Path

Issue a key, connect your client, and move to Nunchi Team when the problem is keeping the whole team on the same memory instead of chasing larger models.