Weaviate • vector database (@weaviate_io)

Why would we build a memory product when memory.md already exists?

AI Summary
  • Engram is designed to extend AI memory, preserving the reasoning behind conclusions, rejected alternatives, and other content that doesn't belong permanently in the built-in MEMORY.md.
  • Memories are structured around semantic topics covering communication style, domain knowledge, tool preferences, and workflow, improving the AI's context awareness and personalized output.
  • The system manages memory use through strategic trigger points, such as preloading context at session start, ensuring the LLM has relevant memories before processing information.

Outline

Jump to any section.

  1. Poses the question of why an additional memory product is needed.

  2. Explains the limitations of MEMORY.md: it cannot hold all the reasoning chains and evolving context.

  3. Details how Engram organizes memory around semantic topics and how it operates.

  4. Lists the memory types Engram handles: communication style, domain knowledge, tool preferences, and workflow.

  5. Describes how Engram uses trigger points to load relevant memories at different stages of a session.

  6. Emphasizes that the system, not the LLM's default behavior, must decide when memory tools are used.

Mind Map

See how the topics relate at a glance.
  • Engram memory system
    • Built-in MEMORY.md limitations
      • About 200 lines of stable facts
    • Engram core features
      • Semantic-topic memory
      • Session-lifecycle triggers
      • Memory types: communication / domain / tools / workflow
    • Practical insight
      • Need for deterministic trigger mechanisms

Highlights

Key lines worth saving and sharing.

#Weaviate #Engram #AI Database #Memory Management #Semantic Memory

**MEMORY.md remembers your conclusions.**

**Engram remembers why you made them.**



Why would we build a memory product when memory.md already exists?

**MEMORY.md remembers your conclusions. Engram remembers why you made them.**

Claude Code's built-in MEMORY.md holds about 200 lines of stable facts: conclusions, preferences, and decisions that don't change session to session. It loads automatically with zero overhead. But it can't hold everything that led to those conclusions. The reasoning chains, the rejected alternatives, the session where the framing shifted: none of that fits in 200 lines, and none of it belongs there permanently. That's where Engram comes in.

Engram structures memory around semantic topics:
  • communication-style: output preferences, tone, what to avoid
  • domain-context: persistent role and product knowledge
  • tool-preferences: languages, frameworks, stack choices
  • workflow: how you prefer to work

The workflow runs on strategic triggers. At session start, Engram recalls with a broad query to prime context before the first question rather than starting cold. During the session, saves fire on significant moments: a decision made, a direction changed, a deliverable produced. Lightweight saves every few prompts act as insurance against session clearing. At session end, a full summary gets saved. Mid-session recall only triggers on specific needs: cross-project references, decision archaeology, or work resumed after a gap.

The key insight from building this: letting the LLM decide when to use memory tools doesn't work. Claude defaults to what's already loaded. The integration needs deterministic, infrastructure-level triggers that fire at specific points in the session lifecycle, injecting relevant memories before Claude sees each message.

Read the full blog post here: weaviate.io/blog/engram-in

Or sign up for Engram preview access today! weaviate.io/product-previe
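The topic structure described above can be sketched as a minimal store. The four topic names come from the post itself; everything else (`MemoryStore`, `save`, `recall`, and the substring match standing in for Weaviate's semantic search) is an illustrative assumption, not Engram's actual API.

```python
from dataclasses import dataclass, field

# Topic names taken from the post; the store itself is hypothetical.
TOPICS = ("communication-style", "domain-context", "tool-preferences", "workflow")

@dataclass
class MemoryStore:
    entries: dict = field(default_factory=lambda: {t: [] for t in TOPICS})

    def save(self, topic: str, note: str) -> None:
        # Every memory is filed under one of the semantic topics.
        if topic not in self.entries:
            raise ValueError(f"unknown topic: {topic}")
        self.entries[topic].append(note)

    def recall(self, query: str) -> list[str]:
        # Engram would use vector (semantic) search here; plain
        # substring matching stands in for it in this sketch.
        return [n for t in TOPICS for n in self.entries[t]
                if query.lower() in n.lower()]

store = MemoryStore()
store.save("tool-preferences", "Prefers TypeScript with strict mode")
store.save("workflow", "Wants decisions logged with the rejected alternatives")
print(store.recall("typescript"))  # → ['Prefers TypeScript with strict mode']
```

The point of the topic split is that a broad session-start query can pull from all four buckets at once, while a targeted mid-session query lands in just one.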

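The "deterministic, infrastructure-level triggers" idea can be sketched as lifecycle hooks that fire at fixed points in the session rather than leaving tool calls to the LLM. All names here (`SessionLifecycle`, the hook methods, `checkpoint_every`) are hypothetical; the sketch only assumes a `recall` and a `save` callable are wired in.

```python
class SessionLifecycle:
    """Hypothetical session harness: the infrastructure, not the LLM,
    decides when memory is recalled or saved."""

    def __init__(self, recall, save, checkpoint_every=5):
        self.recall = recall              # broad query -> list of memories
        self.save = save                  # note -> persisted
        self.checkpoint_every = checkpoint_every
        self.prompt_count = 0

    def on_session_start(self, project: str) -> list[str]:
        # Prime context with a broad recall before the first question.
        return self.recall(f"context for {project}")

    def on_prompt(self, prompt: str, significant: bool) -> None:
        self.prompt_count += 1
        if significant:
            # Decision made / direction changed / deliverable produced.
            self.save(f"decision: {prompt}")
        elif self.prompt_count % self.checkpoint_every == 0:
            # Lightweight insurance against session clearing.
            self.save(f"checkpoint after prompt {self.prompt_count}")

    def on_session_end(self, summary: str) -> None:
        self.save(f"session summary: {summary}")

saved = []
session = SessionLifecycle(recall=lambda q: [f"recalled: {q}"], save=saved.append)
primed = session.on_session_start("engram-demo")
session.on_prompt("switch the frontend to SvelteKit", significant=True)
for p in ("tweak css", "rename vars", "fix test", "update docs"):
    session.on_prompt(p, significant=False)
session.on_session_end("frontend migration planned")
```

Because every hook fires at a fixed point, memory injection happens before the model sees each message, matching the post's claim that Claude otherwise defaults to whatever is already loaded.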
