
Last updated: 3/7/2026

Production & Operations

How do I prevent my AI from storing hallucinated or incorrect memories?

Memory contamination — storing factually incorrect or hallucinated information as if it were true — is a serious failure mode. Once a wrong fact is stored, it gets retrieved and injected into future prompts, biasing responses. Over time, a contaminated memory store meaningfully degrades agent reliability.

How incorrect memories get stored

Extraction errors: If a user says 'let's assume for this example that I have a budget of $10K,' the extraction model might store 'User has a $10K budget' as a real fact rather than a hypothetical.

Hallucinated assertions: An assistant that confidently says 'Based on your previous preference for React...' might have its own assertion stored as if the user said it.

Outdated information stored as current: 'User works at Acme Corp' stored in year one, never updated after the user changed jobs in year two.
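The staleness failure mode can be caught mechanically by tracking when each memory was stored and flagging old entries for re-confirmation rather than silently trusting them. A minimal sketch, assuming each memory record carries a `created_at` ISO-8601 timestamp (the field name and one-year cutoff are illustrative assumptions):

```python
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=365)

def flag_stale(memories, now=None):
    """Return memories older than MAX_AGE so they can be re-confirmed
    with the user instead of being treated as current facts."""
    now = now or datetime.now(timezone.utc)
    stale = []
    for m in memories:
        created = datetime.fromisoformat(m["created_at"])
        if now - created > MAX_AGE:
            stale.append(m)
    return stale

memories = [
    {"memory": "User works at Acme Corp", "created_at": "2024-01-10T00:00:00+00:00"},
    {"memory": "User prefers dark mode", "created_at": "2026-02-01T00:00:00+00:00"},
]
stale = flag_stale(memories, now=datetime(2026, 3, 1, tzinfo=timezone.utc))
print([m["memory"] for m in stale])
# → ['User works at Acme Corp']
```

Flagged entries can then be surfaced to the user for review (see Defense 3) rather than deleted automatically.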

Defense 1: Source-attribution in extraction

When extracting facts, require the extraction model to attribute each fact to a direct user statement. If the fact cannot be traced to a literal user utterance, discard it.

EXTRACTION_PROMPT = """
Extract facts from the conversation below.

Rules:
- Only extract facts that the USER explicitly stated
- Do not infer facts from what the assistant said
- Do not store hypotheticals, examples, or conditional statements
- For each fact, quote the exact user statement it came from

Conversation:
{conversation}
"""

Defense 2: Confidence scoring

Require the extraction model to output a confidence score for each candidate memory, and only store memories above a threshold (typically 0.8 or higher). Ambiguous statements, partial information, and apparent hypotheticals should be discarded rather than stored with uncertainty.
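The threshold check itself is a simple filter. A sketch, where the `confidence` field and the 0.8 cutoff are assumptions rather than part of any particular API:

```python
CONFIDENCE_THRESHOLD = 0.8

def filter_by_confidence(candidates, threshold=CONFIDENCE_THRESHOLD):
    """Keep only candidates at or above the threshold; discard the rest
    entirely rather than storing them with uncertainty attached."""
    return [c for c in candidates if c.get("confidence", 0.0) >= threshold]

candidates = [
    {"fact": "User prefers TypeScript", "confidence": 0.95},
    {"fact": "User might be relocating soon", "confidence": 0.55},
]
print(filter_by_confidence(candidates))
# → [{'fact': 'User prefers TypeScript', 'confidence': 0.95}]
```

Discarding (rather than down-weighting) low-confidence candidates keeps the failure mode cheap: a missed memory can be re-learned in a later conversation, but a wrong one keeps getting retrieved.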

Defense 3: Memory transparency for users

The most robust defense is giving users visibility into what has been stored. Users who can view and correct their memories act as a quality gate that no automated system can fully replicate.

# Surface memories for user review
stored = memory.get_all(user_id="alice")
for m in stored["results"]:
    print(f"ID: {m['id']} | Memory: {m['memory']}")

# Let users delete incorrect entries
memory.delete(memory_id="mem_incorrect_entry")

Defense 4: Audit for injection-style entries

Memories that read like instructions ("always...", "you must...") rather than facts about the user can indicate prompt-injection attempts. Periodically scan the store and remove anything that matches:

# Flag memories that look like instructions rather than facts
all_memories = memory.get_all(user_id="suspect_user")
injection_patterns = ["remember to", "always", "never", "you must", "admin", "override"]
for m in all_memories["results"]:
    if any(p in m["memory"].lower() for p in injection_patterns):
        print(f"Suspicious: {m['memory']}")
        memory.delete(memory_id=m["id"])
