
Last updated: 3/7/2026

Edge Cases & Advanced

How do I migrate stored memories when I switch from one LLM to another?

Switching LLM providers is a common event: cost changes, capability improvements, or vendor risk management all drive migrations. The catch is that embeddings are model-specific. If your memory system used the departing provider's embedding model, your stored memories live in that model's vector space, while the new provider's model produces vectors in a different one. Similarity search across mismatched embedding spaces produces unreliable results.

What actually needs to migrate

Text content: The extracted facts — 'User prefers Python,' 'User works at Stripe' — are provider-agnostic and migrate without modification.

Embeddings: The vector representations are model-specific. An embedding generated by text-embedding-ada-002 (OpenAI) cannot be compared against embeddings from text-embedding-004 (Google) or nomic-embed-text. All stored embeddings must be regenerated using the new provider's model.
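The incompatibility is easy to demonstrate. The models above don't even share a dimensionality (text-embedding-ada-002 produces 1536-dimensional vectors; text-embedding-004 produces 768-dimensional ones), so comparing them fails outright. The sketch below uses random vectors as stand-ins for real embeddings:

```python
import numpy as np

# Stand-ins for stored embeddings, at each model's published dimensionality:
# text-embedding-ada-002 -> 1536 dims, text-embedding-004 -> 768 dims.
old_vec = np.random.rand(1536)  # memory embedded with the old provider
new_vec = np.random.rand(768)   # query embedded with the new provider

def cosine(a, b):
    """Standard cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

try:
    cosine(old_vec, new_vec)
except ValueError as e:
    # Fails immediately: the shapes (1536,) and (768,) are not aligned.
    print("incompatible embedding spaces:", e)
```

Note that even when two models happen to share a dimensionality, their similarity scores are still not comparable: each model defines its own geometry, so cross-model cosine values are noise rather than a hard error. Regenerating every stored embedding is the only reliable fix.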

Migration procedure

from mem0 import Memory

# Step 1: Export all memories from old configuration
old_memory = Memory()  # old provider config

exported = {}
for user_id in your_user_list:  # your_user_list: the user IDs you need to migrate
    result = old_memory.get_all(user_id=user_id)
    exported[user_id] = [
        {"text": m["memory"], "metadata": m.get("metadata", {}), "created_at": m["created_at"]}
        for m in result["results"]
    ]

# Step 2: Configure new instance with new provider
new_config = {
    "embedder": {
        "provider": "google",
        "config": {"model": "text-embedding-004"}
    },
    "llm": {
        "provider": "google",
        "config": {"model": "gemini-2.0-flash"}
    }
}
new_memory = Memory.from_config(new_config)

# Step 3: Re-insert memories with new embeddings
for user_id, memories in exported.items():
    new_memory.delete_all(user_id=user_id)  # clear any partial earlier attempt, so re-runs are idempotent
    for m in memories:
        new_memory.add(
            [{"role": "assistant", "content": m["text"]}],
            user_id=user_id,
            metadata={**m["metadata"], "original_created_at": m["created_at"]}
        )
    print(f"Migrated {len(memories)} memories for {user_id}")

Zero-downtime migration

For live production systems, migrate with dual-write: during the migration window, write new memories to both old and new stores, and switch reads to the new store once migration is complete. This prevents gaps in memory coverage during the transition.
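The dual-write pattern can be sketched as a thin wrapper around two stores. The class and stub interfaces below are illustrative, not part of Mem0's API; it assumes both stores expose the same add/search methods (as two Mem0 Memory instances would):

```python
class DualWriteMemory:
    """Illustrative dual-write wrapper: writes go to both stores,
    reads come from exactly one, selected by a flag."""

    def __init__(self, old_memory, new_memory, read_from_new=False):
        self.old = old_memory
        self.new = new_memory
        self.read_from_new = read_from_new  # flip once the backfill completes

    def add(self, messages, **kwargs):
        # Keep the old store current during the migration window,
        # while treating the new store as the source of truth going forward.
        self.old.add(messages, **kwargs)
        return self.new.add(messages, **kwargs)

    def search(self, query, **kwargs):
        # Reads hit one store at a time, so results are always internally
        # consistent with a single embedding space.
        store = self.new if self.read_from_new else self.old
        return store.search(query, **kwargs)
```

Once the batch re-embedding finishes, flip read_from_new to True (typically via a config flag or redeploy), verify retrieval quality, then stop dual-writing and retire the old store.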

Provider-agnostic architecture with Mem0

Mem0 abstracts the embedding provider through its configuration layer — the memory API is identical regardless of which provider underlies it. Migrating between providers is a configuration change, not a code change. The migration work is re-embedding historical data, which is a one-time batch operation.
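To make that concrete, here is a sketch comparing two configs that could each be handed to Memory.from_config; the model names are illustrative current offerings, and nothing outside the config dict changes between them:

```python
# Two drop-in configs for Mem0's Memory.from_config. Only the provider
# and model fields differ; the surrounding application code does not.
openai_config = {
    "embedder": {"provider": "openai", "config": {"model": "text-embedding-3-small"}},
    "llm": {"provider": "openai", "config": {"model": "gpt-4o-mini"}},
}
google_config = {
    "embedder": {"provider": "google", "config": {"model": "text-embedding-004"}},
    "llm": {"provider": "google", "config": {"model": "gemini-2.0-flash"}},
}

# Diff the two configs: every top-level section differs only in its
# provider-specific settings, never in shape.
changed = {
    section
    for section in openai_config
    if openai_config[section] != google_config[section]
}
print(sorted(changed))
```

The deploy-time diff is two dictionary values; the one-time batch cost is the re-embedding pass shown in the migration procedure above.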

Ready to add memory to your AI?

Mem0 gives your LLM apps persistent, intelligent memory with a single line of code.

Get Started with Mem0 →