A Product by 13afoundry
Experimental

Persistent Memory
for AI Agents.

SuperMemory gives your AI semantic memory that persists across sessions. Store knowledge, recall context, and build on past conversations.

Works with
  • Claude (Desktop & Web)
  • ChatGPT (Pro, Business, Enterprise, or Edu plan)
  • Cursor (any plan)
  • Gemini CLI (API key)
Provides 3 MCP Tools
💾 store_memory
🔍 retrieve_memory
🗑 delete_memory
View Setup Guide →

How It Works

Modern retrieval technology for fast, accurate memory recall.

🧮

Vector Embeddings

Memories are embedded as vectors using OpenAI or Gemini models. Search finds memories by meaning, not exact keywords.

🏆

Cross-Encoder Reranking

Two-stage retrieval: fast vector similarity first, then cross-encoder reranking for precision. Returns the most relevant memories.
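The two-stage pipeline can be sketched as follows. This is a simplified illustration, not SuperMemory's actual implementation: the `rerank` scorer here is a word-overlap stand-in for a real cross-encoder model, and the tiny 3-dimensional embeddings are hand-made for the example.

```typescript
// Sketch of two-stage retrieval: cheap vector similarity narrows the
// candidate set, then a more expensive scorer reranks the short list.

type Memory = { id: string; text: string; embedding: number[] };

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Stage 1: fast recall by embedding similarity, keep the top k.
function vectorSearch(query: number[], memories: Memory[], k: number): Memory[] {
  return [...memories]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, k);
}

// Stage 2: rerank the candidates with a more precise scorer.
// A real system would call a cross-encoder model here; this stand-in
// just counts words shared between the query text and the memory text.
function rerank(queryText: string, candidates: Memory[]): Memory[] {
  const qWords = new Set(queryText.toLowerCase().split(/\s+/));
  const score = (m: Memory) =>
    m.text.toLowerCase().split(/\s+/).filter((w) => qWords.has(w)).length;
  return [...candidates].sort((x, y) => score(y) - score(x));
}

const memories: Memory[] = [
  { id: "1", text: "prefer TypeScript strict mode", embedding: [1, 0, 0] },
  { id: "2", text: "use Postgres for auth service", embedding: [0, 1, 0] },
  { id: "3", text: "deploy to staging via script", embedding: [0, 0, 1] },
];

const top = rerank("what database for auth", vectorSearch([0.1, 0.9, 0.2], memories, 2));
console.log(top[0].id); // → "2"
```

Stage 1 keeps retrieval fast even over many memories; stage 2 spends extra compute only on the handful of candidates that survive.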

🗃

Flexible Storage

Run locally with SQLite for full privacy, or deploy to the cloud with Firestore. Same MCP interface either way.
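One way to read "same MCP interface either way" is that the tool handlers talk to storage through a common interface, so the backend can swap without changing the tools. The sketch below illustrates the idea; the names (`MemoryStore`, `InMemoryStore`, `storeMemoryTool`) are hypothetical, not SuperMemory's actual types.

```typescript
// Illustrative sketch: one storage interface, two possible backends.
// All names here are hypothetical, not SuperMemory's real types.

interface MemoryStore {
  save(id: string, text: string, tags: string[]): Promise<void>;
  load(id: string): Promise<string | undefined>;
  remove(id: string): Promise<void>;
}

// Stand-in for the local backend; a real server would wrap SQLite,
// and a cloud deployment would implement the same interface over Firestore.
class InMemoryStore implements MemoryStore {
  private rows = new Map<string, string>();
  async save(id: string, text: string, _tags: string[]) { this.rows.set(id, text); }
  async load(id: string) { return this.rows.get(id); }
  async remove(id: string) { this.rows.delete(id); }
}

// Tool handlers only see the interface, so swapping SQLite for
// Firestore leaves the MCP tool surface untouched.
async function storeMemoryTool(store: MemoryStore, id: string, text: string) {
  await store.save(id, text, []);
  return `stored ${id}`;
}
```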

Three tools. Semantic search. Works with any MCP-compatible AI client.

6 Ways AI Memory Helps You

Real examples of what SuperMemory does for your AI assistant.

🛠️
01

Remember Your Preferences

Preference

Your AI learns your coding style, tool preferences, and workflow habits once — and remembers them forever.

STORE User says “I prefer TypeScript with strict mode”
EMBED Preference stored with semantic embedding
RETRIEVE Next session: AI searches for coding preferences
APPLY Generates code using your preferred style
📚
02

Build on Past Conversations

Context

Pick up exactly where you left off. Your AI recalls decisions, discussions, and context from previous sessions.

STORE Save “decided to use Postgres for auth service”
TAG Tagged with: architecture, auth-service
RETRIEVE Days later: “What database did we choose?”
RECALL Returns decision with full context
🎓
03

Store Learned Skills

Skill, Procedure

Teach your AI a procedure once — deploy scripts, debugging workflows, build steps — and it remembers how.

STORE Save deployment procedure: build, test, deploy
TAG Tagged: skill, deployment, staging
RETRIEVE “How do I deploy to staging?”
EXECUTE AI follows stored procedure step by step
🗺️
04

Track Project Context

Fact, Context

Architecture decisions, file conventions, API patterns — your AI keeps a living knowledge base of your project.

STORE Save API naming convention: /api/v1/{resource}
STORE Save “frontend uses React 19 with RSC”
RETRIEVE “Create a new users endpoint”
GENERATE Follows project conventions automatically
🔧
05

Cross-Session Debugging

Skill, Fact

Remember past bugs, solutions, and workarounds. Your AI never solves the same problem twice from scratch.

STORE Save “CORS fix: add credentials: include”
EMBED Indexed with semantic meaning of the fix
RETRIEVE Weeks later: “Getting CORS errors again”
FIX Instantly recalls the exact solution
📖
06

Personal Knowledge Base

Fact

Store research, meeting notes, reference material — anything you want your AI to know without re-explaining.

STORE Save meeting notes, API docs, team decisions
STORE Save research findings and benchmarks
RETRIEVE “What did the team decide about caching?”
ANSWER Surfaces relevant knowledge across all memories

Ready to try it?

Add persistent memory to your AI in under a minute.

Your Data, Protected

Privacy-first architecture with EU compliance built in.

🇪🇺 EU Hosted: Frankfurt, Germany
GDPR Compliant: Full compliance
🗑️ Full Control: Delete anytime
🔒 Encrypted: At rest & in transit

All cloud infrastructure is hosted in Frankfurt, Germany. Your memories never leave the EU. You retain full control of your data—delete any memory or everything at any time. Or run locally with SQLite for complete air-gapped privacy.

Setup Guide

🧠

How SuperMemory Works

SuperMemory is an MCP server that gives any AI assistant persistent semantic memory. It runs alongside your AI client and provides three tools:

  1. store_memory — Save any text with optional tags. The server generates vector embeddings automatically.
  2. retrieve_memory — Search by meaning (semantic search) or by ID. Filter by tag. Results are reranked for precision.
  3. delete_memory — Remove outdated memories to keep your knowledge base clean.

Your AI learns to use these tools naturally. Say "remember this for next time" and it stores a memory. Ask about something from weeks ago and it retrieves the relevant context.
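At the wire level, clients invoke these tools with the standard MCP `tools/call` method. The sketch below shows plausible request shapes; the argument field names (`content`, `tags`, `query`, `tag`) are illustrative assumptions, and the server's published tool schemas define the real parameters.

```typescript
// Hypothetical wire-level tool calls. `tools/call` is the standard
// MCP method; the argument field names are illustrative only and the
// server's tool schemas are authoritative.

const storeCall = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "store_memory",
    arguments: {
      content: "Decided to use Postgres for the auth service",
      tags: ["architecture", "auth-service"],
    },
  },
};

const retrieveCall = {
  jsonrpc: "2.0",
  id: 2,
  method: "tools/call",
  params: {
    name: "retrieve_memory",
    arguments: {
      query: "What database did we choose?", // semantic search by meaning
      tag: "architecture",                   // optional filter
    },
  },
};

console.log(storeCall.params.name); // → "store_memory"
```

In practice you never write these by hand: the AI client constructs them whenever the conversation calls for remembering or recalling something.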

Local vs. Cloud

Local (stdio): Run with npx — memories stored in SQLite on your machine. Fully private, no cloud needed. Requires an OpenAI or Gemini API key for embeddings.

Cloud (HTTP): Connect to the hosted server — memories stored in Firestore. Available across all sessions and devices.


Claude Desktop

Works with any Claude Desktop installation.

  1. Open Claude Desktop → Settings → Developer → Edit Config
  2. Add this to claude_desktop_config.json:
{
  "mcpServers": {
    "supermemory": {
      "command": "npx",
      "args": ["@nicepkg/supermemory"],
      "env": {
        "OPENAI_API_KEY": "sk-..."
      }
    }
  }
}
  3. Replace sk-... with your OpenAI API key (for embeddings)
  4. Restart Claude Desktop
Use Gemini embeddings instead

Replace the env block with:

"env": {
  "EMBEDDING_PROVIDER": "gemini",
  "GEMINI_API_KEY": "your-key"
}

Claude (Web)

Requires Claude Pro or Team subscription.

  1. Go to claude.ai → Settings → Integrations
  2. Click Add Integration
  3. Enter this URL:
https://mcp.supermemory.13afoundry.com/mcp
Connect to Claude

The hosted server uses Firestore for cloud-persistent storage. Your memories are available across all sessions.


ChatGPT

Requires ChatGPT Pro, Business, Enterprise, or Edu plan. Web only.

  1. Enable Developer Mode:
    • Go to Settings → Apps → Advanced Settings
    • Turn on Developer Mode
  2. Click the button below to create an app
  3. Fill in these details:
    • Name: SuperMemory
    • MCP Server URL: https://mcp.supermemory.13afoundry.com/mcp
    • Authentication: None
  4. Click "I understand" on the safety warning
Open ChatGPT Connectors
Troubleshooting

Can't find Developer Mode?

Only workspace admins can enable it. Ask your admin to enable it under Workspace Settings → Permissions & Roles → Connected Data → Developer mode.

Safety warning?

It's normal to see "OpenAI hasn't reviewed this MCP server". Click "I understand" to continue.


Cursor

Works with any Cursor plan.

  1. Open your MCP config file:
    • Mac: ~/.cursor/mcp.json
    • Windows: %USERPROFILE%\.cursor\mcp.json
  2. Add this configuration (or create the file if it doesn't exist):
{
  "mcpServers": {
    "supermemory": {
      "command": "npx",
      "args": ["@nicepkg/supermemory"],
      "env": {
        "OPENAI_API_KEY": "sk-..."
      }
    }
  }
}
  3. Replace sk-... with your OpenAI API key
  4. Restart Cursor
Troubleshooting

MCP not showing up?

  1. Make sure the config file is valid JSON (no trailing commas)
  2. Fully restart Cursor (quit and reopen)
  3. Check that npx is available (Node.js installed)

Need Node.js?

Download from nodejs.org (LTS version recommended)


Gemini CLI

Setup Instructions
  1. Install Gemini CLI
  2. Edit ~/.gemini/settings.json
  3. Add this configuration:
{
  "mcpServers": {
    "supermemory": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://mcp.supermemory.13afoundry.com/mcp"]
    }
  }
}
  4. Run /mcp in Gemini CLI to connect

Other Apps

Generic Setup

SuperMemory works with any MCP-compatible app.

For local use (stdio):

{
  "mcpServers": {
    "supermemory": {
      "command": "npx",
      "args": ["@nicepkg/supermemory"],
      "env": {
        "OPENAI_API_KEY": "sk-..."
      }
    }
  }
}

For cloud use (HTTP):

  • Server URL: https://mcp.supermemory.13afoundry.com/mcp
  • Transport: Streamable HTTP
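For clients without a built-in MCP setting, the connection starts with a JSON-RPC `initialize` request POSTed to the server URL; that is the Streamable HTTP handshake. The sketch below only shows the wire shape and does not send anything. In practice you should use an MCP SDK rather than raw fetch, and the `protocolVersion` and `clientInfo` values shown are example placeholders.

```typescript
// Sketch of the first Streamable HTTP exchange: a JSON-RPC `initialize`
// request POSTed to the server URL. Real clients should use an MCP SDK;
// this just illustrates the message shape. Values are placeholders.

const SERVER_URL = "https://mcp.supermemory.13afoundry.com/mcp";

const initialize = {
  jsonrpc: "2.0",
  id: 1,
  method: "initialize",
  params: {
    protocolVersion: "2025-03-26", // example version string
    capabilities: {},
    clientInfo: { name: "my-client", version: "0.1.0" },
  },
};

// How it would be sent (not executed in this sketch):
//   await fetch(SERVER_URL, {
//     method: "POST",
//     headers: {
//       "Content-Type": "application/json",
//       Accept: "application/json, text/event-stream",
//     },
//     body: JSON.stringify(initialize),
//   });

console.log(initialize.method); // → "initialize"
```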