---
title: "A 7-million parameter model outperforming models a thousand times its size on tasks like ARC Prize. ..."
source_name: "Y Combinator(@ycombinator)"
original_url: "https://x.com/ycombinator/status/2050224443461718118"
canonical_url: "https://www.traeai.com/articles/d3e3e98d-c897-448c-b6f8-d9ddc1ca4ae8"
content_type: "tweet"
language: "Chinese"
score: 8.7
tags: ["AI","recursive reasoning","HRM","TRM","ARC Prize"]
published_at: "2026-05-01T14:42:53+00:00"
created_at: "2026-05-02T12:31:36.130425+00:00"
---

# A 7-million parameter model outperforming models a thousand times its size on tasks like ARC Prize. ...

Canonical URL: https://www.traeai.com/articles/d3e3e98d-c897-448c-b6f8-d9ddc1ca4ae8
Original source: https://x.com/ycombinator/status/2050224443461718118

## Summary

YC breaks down two families of recursive AI models, HRM and TRM: small models with only 7 million parameters outperform models a thousand times their size on reasoning tasks like ARC Prize. The key is recursion at inference time, which scales compute depth.

## Key Takeaways

- The recursive mechanism dynamically increases compute depth at inference time, breaking through the fixed-context, shallow-reasoning bottleneck of standard LLMs.
- HRM achieves multi-step reasoning by iteratively calling a base model in an outer loop, while TRM relies on a fixed-point recursive structure that converges during training.
- Small recursive models significantly outperform non-recursive models of the same size on symbolic reasoning tasks such as ARC Prize, supporting the claim that compute depth matters more than parameter count.
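The fixed-point idea in the takeaways can be illustrated with a toy sketch. This is not the actual HRM or TRM code; the `step` function, initial state, and tolerance here are all hypothetical stand-ins. The point is only that a tiny, fixed-size update rule, applied recursively at inference time until it converges, yields an effective compute depth that grows with iteration count rather than with parameter count:

```python
def step(z, x):
    """One recursive refinement step (toy): pull the latent state z
    toward a solution determined by the input x. The map is contractive,
    so iterating it converges to a fixed point."""
    return 0.5 * z + 0.5 * x

def recursive_infer(x, max_iters=100, tol=1e-9):
    """Apply the same small model step repeatedly until the latent
    state stops changing (a fixed point), and report how many steps
    of 'compute depth' were spent."""
    z = 0.0  # initial latent state
    for i in range(max_iters):
        z_next = step(z, x)
        if abs(z_next - z) < tol:  # converged: z is (numerically) a fixed point
            return z_next, i + 1
        z = z_next
    return z, max_iters

answer, depth = recursive_infer(3.0)
print(answer, depth)  # answer converges to 3.0 after a few dozen steps
```

The contrast with a standard LLM forward pass is that here the number of refinement steps (and hence the reasoning depth) is chosen at inference time, independent of how many parameters `step` has.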

## Content

A 7-million parameter model outperforming models a thousand times its size on tasks like ARC Prize. That's what recursive reasoning unlocks. In this episode of Decoded, YC's @agupta and @FrancoisChauba1 break down two recent papers on recursive AI models, HRMs and TRMs, that are achieving state-of-the-art results with a fraction of the parameters of today's largest models. They explain why standard LLMs hit a fundamental ceiling on certain reasoning tasks, how recursion at inference time gives small models the compute depth to break through it, and what happens when you combine these ideas with the power of large-scale foundation models.

- [00:35](https://x.com/ycombinator/status/2050224443461718118?t=35) - Model Foundations
- [01:15](https://x.com/ycombinator/status/2050224443461718118?t=75) - RNN Limits and LLM Contrast
- [02:36](https://x.com/ycombinator/status/2050224443461718118?t=156) - Reasoning Limits and Sorting Analogy
- [04:22](https://x.com/ycombinator/status/2050224443461718118?t=262) - HRM Paper Introduction
- [05:25](https://x.com/ycombinator/status/2050224443461718118?t=325) - HRM Architecture and Intuition
- [07:36](https://x.com/ycombinator/status/2050224443461718118?t=456) - HRM Results and Outer Loop
- [09:46](https://x.com/ycombinator/status/2050224443461718118?t=586) - TRM Paper Overview
- [11:20](https://x.com/ycombinator/status/2050224443461718118?t=680) - TRM Training and Fixed Point
- [13:30](https://x.com/ycombinator/status/2050224443461718118?t=810) - Detailed HRM Summary
- [20:46](https://x.com/ycombinator/status/2050224443461718118?t=1246) - Comparing HRM and TRM
- [34:45](https://x.com/ycombinator/status/2050224443461718118?t=2085) - Future Outlook
