# switching model providers is easy; switching harnesses is less so

Canonical URL: https://www.traeai.com/articles/a7399dd0-9f8b-48a9-a963-6dde3b62bbd3
Original source: https://x.com/hwchase17/status/2050470473310572849
Source name: Harrison Chase (@hwchase17)
Content type: tweet
Language: Chinese
Score: 7.8
Reading time: 2 minutes
Published: 2026-05-02T07:00:31+00:00
Tags: LLM, AI infrastructure, LangChain, vendor lock-in, open standards

## Summary

Harrison Chase argues that while switching large-model providers is easy, switching the AI application-layer "harness" (such as LangChain) is much harder; vendors are using proprietary harnesses to lock users in, and open standards are urgently needed.

## Key Takeaways

- Model providers are easy to swap, but migrating off a harness carries high cost and the ecosystem is fragmented
- Vendors treat the harness as a user lock-in mechanism, not merely a resource-management tool
- Promoting open, interchangeable harness standards is the key path to avoiding vendor lock-in

## Outline

- Core argument: contrasts the replaceability of model providers with that of harnesses to expose the real nature of lock-in risk.
- Harness lock-in mechanisms: vendors reinforce lock-in through proprietary APIs, abstraction-layer binding, and ecosystem dependencies.
- The case for open harnesses: proposes harness design principles of standardization, decoupling, and interoperability.
- Practical challenges: mainstream frameworks (such as LangChain) are open source, yet in practice their ecosystem inertia forms a barrier.

## Highlights

- > switching model providers is easy — switching harnesses is less so (opening sentence of the original)
- > model providers want to lock you in via harness (second sentence of the original)
- > we need open harnesses! (closing sentence of the original)
- > They could accomplish that by just enforcing limits on the actual resource usage (which they already do) (reply by Kenton Varda, questioning the resource-management justification)

## Citation Guidance

When citing this item, prefer the canonical traeai article URL for the AI-readable summary and include the original source URL when discussing the underlying source material.