Simple memory API for AI agents. Pay with USDC or Stripe.
30 seconds to first API call. No signup, no friction.
30 seconds to first API call: that's 60x faster than self-hosted solutions. No Docker. No git. No OpenAI keys. Just curl and go.
┌────────────────┐
│   Your Agent   │
│ (Any language) │
└────────┬───────┘
         │ HTTP PUT/GET/DELETE
         ▼
┌────────────────┐
│  AgentMem API  │ ← 107ms avg latency
│  (Cloudflare)  │ ← 99.9% uptime
└────────┬───────┘
         │ Store/Retrieve
         ▼
┌────────────────┐
│ Cloudflare KV  │ ← Global edge network
│  (Your data)   │ ← Encrypted at rest
└────────────────┘
💡 This is a real API call. Connect wallet or get API key for your own namespace.
AgentMem is for context that spans sessions. Use these types to organize your memory.
Everything your AI needs. Nothing it doesn't.
Three endpoints. PUT, GET, DELETE. No SDK required. Just HTTP.
Pay with USDC on Base. No signup, no API key. Your wallet is your identity.
Cloudflare's network. Sub-50ms latency worldwide.
Built for AI agents. Works with OpenClaw, AutoGPT, LangChain, CrewAI.
Monitor storage and operations via /v1/status. No surprise bills.
Export all your data as JSON via GET /v1/export. Switch providers anytime.
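For example, a nightly portability snapshot takes a few lines of Python; a sketch, assuming the export endpoint returns one JSON document (exact shape may differ):

import json
import requests

resp = requests.get(
    "https://api.agentmem.io/v1/export",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    timeout=30,
)
resp.raise_for_status()

# Keep a local snapshot you can re-import anywhere
with open("agentmem-export.json", "w") as f:
    json.dump(resp.json(), f, indent=2)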
Other memory solutions have 6 layers, 5 dependencies, and PhD-level setup.
We have 3 endpoints. That's it.
AgentMem vs Self-Hosted Memory Solutions
The simplicity moat: We're the ONLY memory solution with zero dependencies, one-line setup, and crypto-native payments.
Three lines of code. Zero setup.
┌─────────┐  PUT /kv/:key   ┌──────────┐  Encrypted  ┌──────────┐
│  Agent  │────────────────▶│ AgentMem │────────────▶│ Storage  │
└─────────┘                 └──────────┘             └──────────┘
     │                           ▲
     │       GET /kv/:key        │
     └───────────────────────────┘
curl -X PUT https://api.agentmem.io/v1/kv/user_prefs \
-H "Authorization: Bearer YOUR_API_KEY" \
-d "dark_mode"
curl https://api.agentmem.io/v1/kv/user_prefs \
-H "Authorization: Bearer YOUR_API_KEY"
curl -X DELETE https://api.agentmem.io/v1/kv/user_prefs \
-H "Authorization: Bearer YOUR_API_KEY"
How memory should work in agent systems
Session memory is ephemeral. Daily logs are operational. AgentMem is your source of truth.
Example workflow:
1. Session memory holds the live conversation (ephemeral).
2. Daily logs capture operational notes (daily/2026-02-05.md).
3. AgentMem stores durable preferences (PUT /kv/user_prefs) and decisions (PUT /kv/decisions/2026-02-05-01).
💡 Inspired by: OpenClaw-Mem's three-layer architecture (ephemeral → operational → durable)
Built for production. Measured, not promised.
GET /v1/health for service status
GET /v1/analytics for usage metrics
AgentMem adds minimal overhead to your token budget. Here's the math:
💡 Pro tip: Use short keys and store only essential data to minimize token usage
Know your constraints before you hit them
Rate-limited responses include a Retry-After header so your agent knows when to try again.
Your agent should call AgentMem when the user says...
Get all your agent's context in a single API call. Perfect for session startup.
curl https://api.agentmem.io/v1/bootstrap \
-H "Authorization: Bearer YOUR_API_KEY"
# Returns: identity + all memories + stats in ONE call
These are common misuses. Avoid them to keep your memory fast and clean.
Don't dump full conversation logs into AgentMem. They bloat context windows and rarely get retrieved.
PUT chat:transcript:20260203 = "User: Hi\nAgent: Hello\n..."
PUT user:decision:plan = "Chose Pro plan on 2026-02-03"
AgentMem is for knowledge that persists across sessions, not debugging logs or transient state.
PUT debug:api_call_1234 = "Status 200, latency 45ms"
PUT workflow:last_run = "2026-02-03T14:30:00Z"
Never store API keys, passwords, or tokens. Use a proper secrets vault (Doppler, 1Password, AWS Secrets Manager).
PUT api:openai_key = "sk-..."
PUT config:openai = {"model": "gpt-4", "max_tokens": 1000}
Don't store large files, images, or binary data. Use S3, Cloudflare R2, or similar object storage.
PUT file:avatar = "<5MB base64 blob>"
PUT user:avatar_url = "https://s3.amazonaws.com/..."
Don't overwrite the same key repeatedly without checking if it changed. Wastes API calls and burns your quota.
PUT user:theme = "dark" // every request, even if unchanged
if (theme !== lastTheme) PUT user:theme = theme
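A sketch of that guard in Python: track the last value written per key in a local dict and skip redundant PUTs (the helper and cache names are illustrative):

import requests

BASE = "https://api.agentmem.io/v1/kv"
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

_last_seen: dict[str, str] = {}  # local cache of values already written

def put_if_changed(key: str, value: str) -> bool:
    if _last_seen.get(key) == value:
        return False  # unchanged: skip the API call, save quota
    requests.put(f"{BASE}/{key}", headers=HEADERS, data=value).raise_for_status()
    _last_seen[key] = value
    return True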
Follow these patterns to get the most out of AgentMem
Namespace your keys to organize memories by type. Makes retrieval faster and prevents collisions.
user:pref:theme
decision:plan_choice:20260203
episodic:2026_02_03:event_name
semantic:topic:fact_name
procedural:workflow:step_name
Don't load all memories at startup. Fetch only what's relevant to the current task.
keys = GET /v1/keys; for k in keys: GET /v1/keys/{k}
1. GET /v1/bootstrap (identity + core context)
2. Only fetch user:* keys when user asks a question
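In Python, that startup flow is a few lines; a sketch, assuming GET /v1/keys?prefix= returns a JSON array of key names (fetch_on_demand is our own helper):

import requests

BASE = "https://api.agentmem.io/v1"
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

# Session start: one bootstrap call, nothing else
core = requests.get(f"{BASE}/bootstrap", headers=HEADERS).json()

def fetch_on_demand(prefix: str) -> dict[str, str]:
    # List keys under the prefix, then fetch only those values
    keys = requests.get(f"{BASE}/keys", headers=HEADERS, params={"prefix": prefix}).json()
    return {k: requests.get(f"{BASE}/{k}", headers=HEADERS).text for k in keys}

# Later, only when the user actually asks about preferences:
prefs = fetch_on_demand("user:pref:")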
Use key prefixes to signal importance. Helps agents prioritize what to load first.
critical:rule:no_personal_data
high:decision:architecture_choice
normal:fact:topic_name
low:temp:cache_entry
Load memories in order of importance: critical rules first, contextual facts second, nice-to-haves last.
1. GET critical:* (always load these)
2. GET high:decision:* (if user asks about decisions)
3. GET user:pref:* (when user interacts)
4. Skip low:* unless explicitly needed
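As code, the tiers become guarded loads; a sketch under the same prefix-listing assumption, with both boolean signals as placeholders for your own routing logic:

import requests

BASE = "https://api.agentmem.io/v1"
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

def load_prefix(prefix: str) -> dict[str, str]:
    keys = requests.get(f"{BASE}/keys", headers=HEADERS, params={"prefix": prefix}).json()
    return {k: requests.get(f"{BASE}/{k}", headers=HEADERS).text for k in keys}

user_asked_about_decisions = False  # placeholder: wire to your own intent detection
user_is_interacting = True          # placeholder signal

context = load_prefix("critical:")                 # 1. always load
if user_asked_about_decisions:
    context.update(load_prefix("high:decision:"))  # 2. on demand
if user_is_interacting:
    context.update(load_prefix("user:pref:"))      # 3. on interaction
# 4. low:* is skipped unless explicitly needed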
Periodically merge related memories, archive outdated ones, and delete duplicates to keep your store clean.
⢠Merge duplicate facts into one canonical key
⢠DELETE episodic:* older than 30 days (if not needed)
⢠Archive low:* to external storage if unused
Not everything belongs in long-term memory. Here's what to skip:
When to call AgentMem during your agent's lifecycle
1. GET /v1/bootstrap → Load identity + critical context
2. GET /v1/keys?prefix=critical: → Load never-forget rules
3. GET /v1/keys?prefix=user:pref: → Load user preferences
Why: Initialize with essential context so the agent doesn't ask for preferences every time.
User says "Remember this" â PUT /v1/user:note:{timestamp}
User says "I prefer X" â PUT /v1/user:pref:{topic}
User asks "What did I..." â GET /v1/keys?prefix=episodic:
User says "Forget that" â DELETE /v1/{key}
Why: Store facts in real-time so they're available in future sessions.
PUT /v1/decision:{topic}:{date} → "User chose plan B over A"
PUT /v1/lesson:{topic} → "Approach X worked better than Y"
PUT /v1/episodic:{date}:{task} → "Completed migration successfully"
Why: Capture decisions and learnings for future reference ("Why did we pick this?").
PUT /v1/mistake:{topic} → "API call failed because X"
PUT /v1/lesson:{topic} → "Always check Y before doing Z"
Why: Don't repeat mistakes. Learn from failures.
1. Extract durable facts from conversation → PUT /v1/semantic:{topic}
2. Record any lessons learned → PUT /v1/lesson:{topic}
3. Update entity information (if tracking) → PUT /v1/entity:{name}
Why: Consolidate ephemeral chat into durable memories.
## Memory Protocol (AgentMem)
On session start:
1. GET /v1/bootstrap (identity + core context)
2. GET /v1/keys?prefix=critical: (never-forget rules)
3. GET /v1/keys?prefix=user:pref: (user preferences)
During conversation:
- User says "remember this" → PUT immediately
- User says "I prefer X" → PUT as user:pref:{topic}
- User asks "what did I..." → GET episodic:*
On session end:
1. Extract durable facts → PUT semantic:{topic}
2. Record lessons learned → PUT lesson:{topic}
3. Consolidate decisions → PUT decision:{topic}
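Here's the protocol wired into session hooks; a Python sketch, assuming the bootstrap and prefix-listing endpoints behave as shown earlier:

import requests

BASE = "https://api.agentmem.io/v1"
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

def session_start() -> dict:
    # 1. bootstrap, then 2-3. never-forget rules and preferences
    ctx = requests.get(f"{BASE}/bootstrap", headers=HEADERS).json()
    for prefix in ("critical:", "user:pref:"):
        keys = requests.get(f"{BASE}/keys", headers=HEADERS, params={"prefix": prefix}).json()
        ctx.update({k: requests.get(f"{BASE}/{k}", headers=HEADERS).text for k in keys})
    return ctx

def session_end(facts: dict[str, str]) -> None:
    # facts maps fully-qualified keys (semantic:*, lesson:*, decision:*) to values
    for key, value in facts.items():
        requests.put(f"{BASE}/{key}", headers=HEADERS, data=value)

ctx = session_start()
# ... conversation runs, PUTs fire on triggers ...
session_end({"lesson:deployment": "Always run migrations before code"})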
One-liner convenience wrappers for common operations
Download and install the amem helper script for easier CLI usage:
curl -o ~/bin/amem https://agentmem.io/scripts/amem && chmod +x ~/bin/amem
💡 Set your API key: export AGENTMEM_API_KEY="your-key"
amem set "user_name" "Alice"
amem set "user:pref:theme" "dark"
amem get "user_name"   # Output: Alice
amem search "user_*"
amem search "user:pref:*"
amem delete "user_name"
amem bootstrap   # Returns: identity, critical:*, user:pref:*
#!/bin/bash
# AgentMem Helper Script
# Usage: amem [set|get|search|delete|bootstrap] [args...]
API_KEY="${AGENTMEM_API_KEY:-}"
BASE_URL="https://api.agentmem.io/v1"
if [ -z "$API_KEY" ]; then
echo "Error: AGENTMEM_API_KEY not set"
exit 1
fi
case "$1" in
set)
curl -X PUT "$BASE_URL/$2" \
-H "Authorization: Bearer $API_KEY" \
-H "Content-Type: text/plain" \
-d "$3"
;;
get)
curl -X GET "$BASE_URL/$2" \
-H "Authorization: Bearer $API_KEY"
;;
search)
curl -X GET "$BASE_URL/keys?prefix=$2" \
-H "Authorization: Bearer $API_KEY"
;;
delete)
curl -X DELETE "$BASE_URL/$2" \
-H "Authorization: Bearer $API_KEY"
;;
bootstrap)
curl -X GET "$BASE_URL/bootstrap" \
-H "Authorization: Bearer $API_KEY"
;;
*)
echo "Usage: amem [set|get|search|delete|bootstrap] [args...]"
exit 1
;;
esac
Trigger words and patterns that signal memory operations
| User Says | Action |
|---|---|
| "Remember this" | PUT /v1/kv/note-{timestamp} |
| "Don't forget X" | PUT /v1/kv/important-{topic} |
| "I prefer..." | PUT /v1/kv/user-prefs |
| "Store this config" | PUT /v1/kv/config |
| "What did I..." | GET /v1/kv/{topic} |
| "Forget about X" | DELETE /v1/kv/{topic} |
General Rule:
If it's worth remembering for >1 hour, store it in AgentMem. Use session memory for temporary state.
How to retrieve memories efficiently without bloating your context window
Don't dump all memories into context. Retrieve selectively; load only what's needed for the current task. Fewer tokens = faster responses + lower costs.
GET /v1/keys?prefix=user:pref: → Returns ["user:pref:theme", "user:pref:language"]
Why: List all keys in a namespace first. Pick the 1-3 most relevant keys before fetching values.
Tip: Use hierarchical prefixes (e.g., user:pref:ui:theme) to narrow down quickly.
❌ Bad: GET all keys and dump into context
✅ Good: GET user:pref:theme → "dark" (only 4 bytes)
Why: Most tasks only need 1-3 memories. Load the minimum required data.
Tip: Store compact summaries under stable keys (e.g., decision:{topic}), not the entire reasoning chain.
On session start: GET /v1/bootstrap → cache identity + core prefs
During session: Refer to cached values, don't re-fetch
Why: Reduce API calls. Load critical context once at session start, reuse throughout.
Tip: Use /v1/bootstrap to load identity + all critical context in ONE call.
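One way to implement session-scoped caching in Python, using functools.lru_cache so only the first call hits the API:

from functools import lru_cache
import requests

BASE = "https://api.agentmem.io/v1"
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

@lru_cache(maxsize=1)
def bootstrap() -> dict:
    # First call hits the API; every later call in this session is free
    return requests.get(f"{BASE}/bootstrap", headers=HEADERS).json()

identity = bootstrap()   # network call
identity = bootstrap()   # cached, no API call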
❌ Bad: Fetch episodic:* and scan all events
✅ Good: "Do you remember when you last...?" → targeted GET
Why: If you don't know what to retrieve, ask the user instead of scanning everything.
DELETE /v1/session:2026-01-01 → Remove old session data after 30 days
Why: Dead keys clutter your namespace. Periodically delete stale data (old sessions, expired cache, etc.).
Tip: Use dated key prefixes (e.g., cache:{date}) so you can DELETE by prefix when they expire.
Make AgentMem easier to use from the command line
For easier usage, create ~/bin/agentmem:
#!/bin/bash
# AgentMem CLI Helper
KEY="${AGENTMEM_KEY:?Set AGENTMEM_KEY env var}"
ACTION=$1
KEYNAME=$2
shift 2
case $ACTION in
put)
echo "$*" | curl -X PUT "https://api.agentmem.io/v1/kv/$KEYNAME" \
-H "Authorization: Bearer $KEY" \
-H "Content-Type: application/text" \
--data-binary @-
;;
get)
curl -s "https://api.agentmem.io/v1/kv/$KEYNAME" \
-H "Authorization: Bearer $KEY"
;;
delete)
curl -X DELETE "https://api.agentmem.io/v1/kv/$KEYNAME" \
-H "Authorization: Bearer $KEY"
;;
*)
echo "Usage: agentmem [put|get|delete] <key> [value]"
exit 1
;;
esac
Install:
chmod +x ~/bin/agentmem
export AGENTMEM_KEY="your_api_key_here"
Usage:
agentmem put user-prefs "dark mode preferred"
agentmem get user-prefs
agentmem delete user-prefs
Natural language patterns that should trigger memory storage
Silent operation: When users say these phrases, store immediately and continue; don't announce "I'll remember that!" Just do it.
PUT /v1/user:pref:communication → {"style": "direct, no fluff"}
PUT /v1/user:identity → {"name": "Alex", "timezone": "PST"}
PUT /v1/user:note:2026-02-05 → {"text": "API keys in 1Password"}
PUT /v1/decision:tech-stack → {"choice": "PostgreSQL", "reason": "team expertise"}
PUT /v1/lesson:deployment → {"learned": "Always run migrations before code"}
DELETE /v1/{key} (use prefix search to find the right key)
import re
import time
import requests
TRIGGER_PATTERNS = {
'preference': r'I (prefer|like|want|hate)',
'identity': r'My (name|timezone|email) is',
'explicit': r'(remember|don\'t forget|note that)',
'decision': r'(we decided|I chose|let\'s go with)',
'lesson': r'(worked better|next time|what failed)',
'delete': r'(forget that|delete|remove|clear)'
}
def detect_trigger(user_message):
for category, pattern in TRIGGER_PATTERNS.items():
if re.search(pattern, user_message, re.IGNORECASE):
return category
return None
def auto_capture(user_message, api_key):
trigger = detect_trigger(user_message)
if not trigger:
return # No trigger, skip
# Extract key/value (simplified - use NER in production)
key = f"{trigger}:{int(time.time())}"
value = {"text": user_message, "category": trigger}
# Store silently
requests.put(
f"https://api.agentmem.io/v1/{key}",
headers={"Authorization": f"Bearer {api_key}"},
json=value
)
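The 'delete' trigger is the one case that needs a lookup first. A sketch, assuming GET /v1/keys?prefix= returns a JSON list of matching key names:

import requests

BASE = "https://api.agentmem.io/v1"

def forget(prefix: str, api_key: str) -> int:
    headers = {"Authorization": f"Bearer {api_key}"}
    keys = requests.get(f"{BASE}/keys", headers=headers, params={"prefix": prefix}).json()
    for key in keys:
        requests.delete(f"{BASE}/{key}", headers=headers)
    return len(keys)

# "Forget my theme preference" -> forget("user:pref:theme", API_KEY)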
In production, route each capture into the proper namespace (user:pref:, decision:, etc.) instead of the flat {category}:{timestamp} keys auto_capture uses above.
Real-world use cases from the agent economy
Remember user preferences, past decisions, and ongoing tasks. Your agent gets smarter over time, not goldfish-brained every conversation.
user:pref:communication_style
user:decision:tech_stack
user:task:project_status
Share context across specialized agents. One agent learns, all agents benefit. Perfect for agent swarms and delegation workflows.
team:learnings:deployment_patterns
team:rules:code_standards
team:facts:api_endpoints
Customer support agents that remember prior tickets, preferences, and history. No more "let me look that up": instant context retrieval.
customer:123:history
customer:123:preferences
customer:123:last_interaction
Pause and resume complex workflows. Your agent can checkpoint progress, survive restarts, and pick up where it left off.
workflow:deploy:step
workflow:deploy:state
workflow:deploy:rollback_point
All use cases share one pattern: Your agent stores context while working, retrieves it when needed. Simple.
AgentMem complements your current setup; it doesn't replace it
Use AgentMem for: Structured key-value data (preferences, config, session state)
Use MEMORY.md for: Long-form notes, journal entries, human-readable logs
Use AgentMem for: Simple facts, configuration, exact-match lookups
Use Vector DBs for: Semantic search, similarity matching, large corpora
Use AgentMem for: Live data that changes frequently (session state, prefs)
Use Git for: Versioned history, audit trails, collaborative editing
Use AgentMem for: Persistent storage that survives restarts
Use Redis for: High-speed cache, pub/sub, ephemeral data (<1 hour)
Rule of Thumb:
If it needs to survive restarts → AgentMem
If it's <1 hour lifespan → Session memory
AgentMem is key-value only. No SQL, no schemas, no migrations, no setup. Just PUT and GET. Perfect for agent memory that doesn't need relational queries.
Writes fail with a 402 error. Your existing data stays accessible. Upgrade or delete old keys to continue writing.
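In code, that failure mode is easy to handle gracefully; a sketch that logs and degrades to read-only instead of crashing on 402:

import requests

BASE = "https://api.agentmem.io/v1"
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

def safe_put(key: str, value: str) -> bool:
    r = requests.put(f"{BASE}/{key}", headers=HEADERS, data=value)
    if r.status_code == 402:
        # Quota exhausted: existing data is still readable
        print(f"Write blocked for {key}: upgrade plan or delete old keys")
        return False
    r.raise_for_status()
    return True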
Yes! Change your plan anytime via the Stripe portal or crypto payment. Prorated billing for paid plans.
Yes. Encrypted at rest (AES-256) and in transit (TLS 1.3). Only you have access to your keys.
99.9% uptime target. Built on Cloudflare Workers (globally distributed, edge-deployed worldwide).
Yes! GET /v1/export returns all your keys as JSON. You can switch providers anytime.
Pay with USDC on Base (Ethereum L2). Connect your wallet, approve the payment, and your API key activates instantly. No email, no signup.
For Stripe users: Log in via your email link. For crypto users: Your wallet address is your account; check the blockchain for your payment tx, then contact support.
Honest about what we don't do (yet)
Vote on features: Open an issue on GitHub or email support@agentmem.io. We ship based on user demand.
Learn from others' mistakes
{"error": "503 Service Unavailable"}{"github_token": "ghp_abc123..."}Rule of thumb: If it would still be useful next week, it's worth storing. If it's only relevant right now, skip it.
See how much you save with on-demand memory
The math: Traditional memory loads 3,500 tokens of context (6% relevant, 94% noise). AgentMem lets you fetch only the 200 tokens you need on-demand. That's 17.5× more efficient.
Understand where AgentMem fits in your agent's memory architecture
Use GET /v1/export to download your data anytime (GDPR-compliant).
Key principle: Working memory is for thinking. AgentMem is for remembering. Local backup is for never losing data.
Structure your memories for powerful retrieval
AgentMem is key-value only; there's no built-in tagging. But you can use key prefixes and JSON metadata to create your own tagging system.
# Organize by type
user:preferences:language
user:preferences:timezone
project:agentmem:api_key
project:agentmem:last_deploy
# Query by prefix
GET /v1/keys?prefix=user:preferences:
→ Returns all user preferences
GET /v1/keys?prefix=project:
→ Returns all project-related memories
{
"key": "decision:2026-02-05:use-agentmem",
"value": {
"type": "decision",
"priority": "high",
"category": "architecture",
"decision": "Use AgentMem for persistent memory",
"reason": "Simple API, cloud-native, pay-as-you-go",
"updated": "2026-02-05T03:58:00Z",
"tags": ["memory", "architecture", "cloud"]
}
}
Since AgentMem doesn't have built-in search (yet), fetch keys by prefix and filter client-side:
# Fetch all memory keys, then load each JSON value and filter client-side
import requests

BASE = "https://api.agentmem.io/v1"
headers = {"Authorization": "Bearer YOUR_API_KEY"}

keys = requests.get(f"{BASE}/keys", headers=headers, params={"prefix": "memory:"}).json()
memories = [requests.get(f"{BASE}/{k}", headers=headers).json() for k in keys]

# Filter by tag in JSON value
high_priority = [m for m in memories if m.get("priority") == "high"]

# Filter by date
recent = [m for m in memories if m.get("updated", "") > "2026-02-01"]
Your data, your control
Use DELETE /v1/keys?prefix=* to wipe all data. Export first with GET /v1/export.
Security roadmap: SOC 2 compliance (Q2 2026), client-side encryption (Q3 2026), multi-region replication (Q4 2026). Vote on priorities via email.
AgentMem is cloud-first by design: here's why that matters
Access your memories from anywhere: laptop, phone, cloud server. Local-only solutions lock you to one machine.
30 seconds from signup to first API call. No Docker, no config files, no dependency hell.
Your data is replicated across Cloudflare's global network. No manual backups, no data loss from disk failures.
10,000 free keys, then $5/mo for 1M. Local solutions need dedicated servers, even when idle.
We handle updates, scaling, monitoring. You focus on building agents, not managing infrastructure.
HTTPS encryption, GDPR compliance, no training on your data. Get privacy without running your own infrastructure.
When to choose local: If you have strict data residency requirements, or need complete air-gap isolation, local-first solutions like elite-longterm-memory or openclaw-mem are better fits. AgentMem prioritizes speed + convenience + multi-device access over maximum control.
Pay with card or crypto: your choice
For getting started
For serious agents
Beyond limits
No commitment
100,000 credits
500,000 credits
Common errors and how to fix them, fast
Still stuck? All API errors include "hint" and "docs" fields. Check the error response first; it usually tells you how to fix it.
{"error": "Invalid API key", "hint": "Check Authorization header format"}
Cause: Missing or incorrect API key.
curl -H "Authorization: Bearer YOUR_API_KEY" \
https://api.agentmem.io/v1/test-key
{"error": "Key not found", "hint": "Check key name for typos"}
Cause: Key doesn't exist, or you're GETting before PUTting.
Fix: Use GET /v1/keys?prefix=your:prefix to list all existing keys.
{"error": "Rate limit exceeded", "hint": "Wait 60s or upgrade plan"}
Cause: Too many API calls in a short time window.
Fix: Batch reads with GET /v1/keys?prefix=* instead of individual GETs.
{"error": "Value exceeds max size (1MB)", "hint": "Split into smaller chunks"}
Cause: Single value exceeds 1MB limit.
// Split large values into chunks
PUT /v1/data:part1 → {...}
PUT /v1/data:part2 → {...}
PUT /v1/data:index → ["data:part1", "data:part2"]
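A runnable sketch of that chunking pattern; the 900 KB chunk size is a safety margin under the 1 MB limit, and the :partN/:index layout follows the pseudocode above:

import json
import requests

BASE = "https://api.agentmem.io/v1"
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}
CHUNK = 900_000  # stay safely under the 1 MB per-value limit

def put_chunked(base_key: str, value: str) -> None:
    parts = [value[i:i + CHUNK] for i in range(0, len(value), CHUNK)]
    part_keys = []
    for n, part in enumerate(parts, start=1):
        key = f"{base_key}:part{n}"
        requests.put(f"{BASE}/{key}", headers=HEADERS, data=part).raise_for_status()
        part_keys.append(key)
    # Index key lets readers reassemble the parts in order
    requests.put(f"{BASE}/{base_key}:index", headers=HEADERS,
                 data=json.dumps(part_keys)).raise_for_status()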
Error: ETIMEDOUT / Read timed out
Cause: Network issue or API temporarily unavailable.
# Retry with exponential backoff (url and headers as defined earlier)
for attempt in range(3):
    try:
        r = requests.get(url, headers=headers, timeout=10)
        break
    except requests.exceptions.RequestException:
        time.sleep(2 ** attempt)  # waits 1s, 2s, 4s
Test your API connection and key validity:
curl -H "Authorization: Bearer YOUR_API_KEY" \
https://api.agentmem.io/v1/health
{"status": "ok", "latency_ms": 12, "tier": "free"}
Still having issues? Email support@agentmem.io with your API key (last 4 chars) and error details. We respond within 24h.
Real numbers. Not marketing fluff.
Your data. Your wallet. Your control.
All data encrypted with your API key. We can't read your memories â only you can.
Cloud-hosted by design. Your agent has memory anywhere: laptop, phone, server.
Export all your data anytime (GET /v1/export). Delete on demand. Full transparency.
Pay with USDC: no email, no signup forms, no KYC. Your wallet is your account.
Questions about security? security@agentmem.io
AgentMem saves tokens by fetching memories on-demand instead of loading everything at startup
Where AgentMem fits in your agent's memory architecture
GET /v1/bootstrap → loads identity + core context into working memory
Fetch specific memories (GET /v1/kv/:key) when the user asks about a topic
Store new facts (PUT /v1/kv/:key) immediately to AgentMem
Migration in 3 steps. No data loss.
Use your current DB's export tool. Save as JSON or CSV.
Use the bulk upload script or loop through your data:
# Bash example
for key in $(jq -r '.[] | .key' data.json); do
value=$(jq -r --arg k "$key" '.[] | select(.key==$k) | .value' data.json)
curl -X PUT "https://api.agentmem.io/v1/kv/$key" \
-H "Authorization: Bearer YOUR_KEY" \
-d "$value"
done
Replace your DB client with AgentMem SDK (npm install agentmem). Drop-in replacement for most use cases.
Need migration help? support@agentmem.io (we respond fast)
Cron jobs, heartbeats, and framework examples
Set up a monthly cleanup job to prune old temporary keys and keep your namespace lean:
cron action=add job='{
"name": "memory-cleanup",
"schedule": { "kind": "cron", "expr": "0 4 1 * *" },
"payload": {
"kind": "agentTurn",
"message": "Clean up AgentMem: 1) GET /v1/keys?prefix=temp: 2) DELETE keys older than 30d 3) Log completion"
},
"sessionTarget": "isolated"
}'
Runs monthly at 4 AM, removes stale temp:* keys, keeps your memory fresh.
Load critical memories during heartbeat polls to keep context fresh:
# In HEARTBEAT.md

## Memory Sync (every 30 min)
1. GET /v1/bootstrap (identity + core context)
2. If user:* keys updated_at > last_heartbeat:
   - Fetch new user:pref:* keys
   - Update local context
3. Log sync completion
Keeps your agent in sync with user preferences without explicit fetches.
from langchain.memory import ConversationBufferMemory
import requests

API_KEY = "your_api_key"
BASE = "https://api.agentmem.io/v1"

class AgentMemMemory(ConversationBufferMemory):
    def save_context(self, inputs, outputs):
        super().save_context(inputs, outputs)  # keep the in-session buffer
        requests.put(
            f"{BASE}/chat:history",
            headers={"Authorization": f"Bearer {API_KEY}"},
            data=f"{inputs}\n{outputs}",
        )

    def load_memory_variables(self, inputs):
        r = requests.get(
            f"{BASE}/chat:history",
            headers={"Authorization": f"Bearer {API_KEY}"},
        )
        return {"history": r.text}
# In AutoGPT config.yaml:
memory:
  type: "agentmem"
  api_key: "your_api_key"
  base_url: "https://api.agentmem.io/v1"

# Agent will store goals, plans, learnings:
# - goals:current
# - plan:steps
# - learnings:mistakes
# - learnings:successes
# Install skill: clawdhub install agentmem
# In AGENTS.md:
## Memory Persistence
Before responding, GET /v1/bootstrap for:
- user:pref:* (preferences)
- user:decision:* (past decisions)
- critical:rule:* (never-forget rules)
After learning something:
PUT /v1/learnings:{topic} "what I learned"
# Bootstrap (get identity + core context)
curl -H "Authorization: Bearer $KEY" \
  https://api.agentmem.io/v1/bootstrap

# Store a preference
curl -X PUT \
  -H "Authorization: Bearer $KEY" \
  -H "Content-Type: text/plain" \
  -d "dark mode" \
  https://api.agentmem.io/v1/user:pref:theme
Need a full integration guide? Open an issue on GitHub and we'll write it.
Switching from local files, LanceDB, or other memory systems
If you currently store memories in MEMORY.md or daily log files:
# 1. Extract key facts from MEMORY.md
grep "^- " MEMORY.md > facts.txt
# 2. Upload to AgentMem with structured keys
while read -r line; do
  key="semantic:$(echo "$line" | md5sum | cut -c1-8)"
curl -X PUT "https://api.agentmem.io/v1/$key" \
-H "Authorization: Bearer $API_KEY" \
-d "$line"
done < facts.txt
# 3. Keep MEMORY.md as canonical source (for now)
# 4. Gradually shift to AgentMem as primary
Keep local backups for 30 days while you validate AgentMem works for your workflow.
If you use vector search (LanceDB, ChromaDB, Pinecone):
# Python migration script
import lancedb
import requests
db = lancedb.connect("~/.agent-memory/lance")
table = db.open_table("memories")
API_KEY = "your_api_key"
BASE = "https://api.agentmem.io/v1"
# Export all vectors to AgentMem
for row in table.to_pandas().itertuples():
key = f"semantic:{row.topic}:{row.id}"
requests.put(f"{BASE}/{key}",
headers={"Authorization": f"Bearer {API_KEY}"},
data=row.text
)
print("Migration complete! Test AgentMem, then archive LanceDB.")
AgentMem doesn't have vector search yet (coming Q2 2026). For now, use prefix-based keys and bootstrap endpoint for retrieval. If you need vector search, keep LanceDB running in parallel until we ship it.
If you use Git-backed memory stores (episodic, semantic, procedural):
# Sync episodic memories
for file in memory/episodes/*.md; do
  date=$(basename "$file" .md)
  curl -X PUT "https://api.agentmem.io/v1/episodic:$date" \
    -H "Authorization: Bearer $API_KEY" \
    --data-binary "@$file"
done
# Sync semantic graph
for entity in memory/graph/entities/*.md; do
  name=$(basename "$entity" .md)
  curl -X PUT "https://api.agentmem.io/v1/entity:$name" \
    -H "Authorization: Bearer $API_KEY" \
    --data-binary "@$entity"
done
# Sync procedural workflows
for workflow in memory/procedures/*.md; do
  name=$(basename "$workflow" .md)
  curl -X PUT "https://api.agentmem.io/v1/workflow:$name" \
    -H "Authorization: Bearer $API_KEY" \
    --data-binary "@$workflow"
done
You don't have to migrate everything. Use AgentMem for what it's best at:
Best practice: Use AgentMem as your "hot" memory (frequently accessed), and keep local files as your "cold" archive (rarely accessed, but permanent).
Common issues and quick fixes
If you get a 401 error, check your Authorization header format:
Authorization: Bearer your_api_key_here
Common mistake: Missing "Bearer " prefix or extra whitespace.
Keys must exist before you can GET them. Use PUT to create:
curl -X PUT https://api.agentmem.io/v1/my-key \
-H "Authorization: Bearer $KEY" \
-d '{"value": "..."}'
Current limit: 10 requests/second per API key. Add a small delay between requests:
time.sleep(0.1) # Python
await new Promise(r => setTimeout(r, 100)) // JS
Common causes:
Content-Type: application/json # for JSON
Content-Type: text/plain # for text
If making requests from a browser:
Verify the API is responding and check your account status:
curl https://api.agentmem.io/v1/bootstrap -H "Authorization: Bearer YOUR_KEY"
Returns account limits, usage stats, and operational status.
If bulk migration fails:
Get started in 30 seconds. Try the playground above â no signup needed.