---
title: "Fireside chat at Sequoia Ascent 2026 from a ~week ago. Some highlights:\n\nThe first theme I tried to ..."
source_name: "Andrej Karpathy(@karpathy)"
original_url: "https://x.com/karpathy/status/2049903821095354523"
canonical_url: "https://www.traeai.com/articles/b97196e4-28ad-409a-98c3-5848b95d72be"
content_type: "tweet"
language: "中文"
score: 8.5
tags: ["LLMs","人工智能","Fireside Chat","Sequoia Ascent"]
published_at: "2026-04-30T17:28:50+00:00"
created_at: "2026-05-01T01:34:05.15149+00:00"
---

# Fireside chat at Sequoia Ascent 2026 from a ~week ago. Some highlights:

The first theme I tried to ...

Canonical URL: https://www.traeai.com/articles/b97196e4-28ad-409a-98c3-5848b95d72be
Original source: https://x.com/karpathy/status/2049903821095354523

## Summary

In a fireside chat at Sequoia Ascent 2026, Karpathy argued that LLMs open genuinely new frontiers rather than merely accelerating what already exists: apps like menugen that need no classical code, install .md skills replacing install .sh scripts, and LLM knowledge bases that compute over unstructured data.

## Key Takeaways

- LLMs open new application areas: menugen produces its output natively, with no traditional coding.
- Install .md skills replace install .sh scripts, letting the LLM intelligently adapt installation to the target environment.
- LLM knowledge bases compute over unstructured data, enabling functionality that was previously impossible.

## Content


The first theme I tried to push on is that LLMs are about a lot more than just speeding up what existed before (e.g. coding). Three examples of new horizons:

1. menugen: an app that can be fully engulfed by LLMs, with no classical code needed: input an image, output an image, and an LLM can natively do the thing.
2. Install .md skills instead of install .sh scripts. Why create a complex Software 1.0 bash script for e.g. installing a piece of software if you can write the installation out in words and say "just show this to your LLM"? The LLM is an advanced interpreter of English and can intelligently target installation to your setup, debug everything inline, etc.
3. LLM knowledge bases as an example of something that was *impossible* with classical code, because it is computation over unstructured data (knowledge) from arbitrary sources and in arbitrary formats, including simply text articles etc.

I pushed on these because in every new paradigm change, the obvious things are always in the realm of speeding up or somehow improving what existed, but here we have examples of functionality that either suddenly perhaps shouldn't even exist (1, 2) or was fundamentally not possible before (3).

The second (ongoing) theme is trying to explain the pattern of jaggedness in LLMs: how it can be true that a single artifact will simultaneously 1) coherently refactor a 100,000-line code base *and* 2) tell you to walk to the car wash to wash your car. I previously wrote about the source of this as having to do with the verifiability of a domain; here I expand on this as also having to do with economics, because revenue/TAM dictates what the frontier labs choose to package into training data distributions during RL. You're either in the data distribution (on the rails of the RL circuits) and flying, or you're off-roading in the jungle with a machete, in relative terms. Still not 100% satisfied with this, but it's an ongoing struggle to build an accurate model of LLM capabilities if you wish to practically take advantage of their power while avoiding their pitfalls, which brings me to...

The last theme is the agent-native economy: the decomposition of products and services into sensors, actuators and logic (split up across all of the 1.0/2.0/3.0 computing paradigms), how we can make information maximally legible to LLMs, some words on the quickly emerging practice of agentic engineering and its skill set, related hiring practices, etc., possibly even hints/dreams of fully neural computing handling the vast majority of computation with some help from (classical) CPU coprocessors.
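To make the second example concrete, here is a minimal sketch of what an install .md might look like. The tool name, repository, and steps are entirely hypothetical (not from the post); the point is that the file describes goals in words and leaves the environment-specific details to the LLM:

```markdown
# Installing exampletool (show this file to your LLM)

1. Check whether Python 3.10+ and git are on PATH; if not, install them
   using whatever package manager this system provides.
2. Clone the exampletool repository and install it in editable mode.
3. Run its test suite. If anything fails, read the error, fix the
   environment, and retry.
```

Unlike a bash script, nothing here hard-codes an OS, a shell, or a package manager; the LLM targets each step to the machine it actually finds.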
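The third example, LLM knowledge bases, ultimately reduces to assembling sources of arbitrary format into a prompt and letting the model do the "computation" over them. A minimal sketch in Python; the function name, prompt format, and sample documents are all assumptions for illustration, and the actual LLM call is deliberately left out:

```python
# Sketch of the "LLM knowledge base" idea: classical code only gathers
# and frames the unstructured sources; the reasoning over them would be
# done by an LLM (call omitted here). All names are hypothetical.

def build_kb_prompt(docs: list[str], question: str) -> str:
    """Concatenate arbitrary unstructured sources into one grounded prompt."""
    context = "\n\n---\n\n".join(docs)  # separator between heterogeneous sources
    return (
        "Answer using only the sources below.\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}"
    )

# Arbitrary formats: release notes, an email thread, a text article...
docs = [
    "Release notes (plain text): v2.1 drops Python 3.8 support.",
    "Email thread: Ops agreed to upgrade the fleet to Python 3.11 by June.",
]
prompt = build_kb_prompt(docs, "Which Python versions matter here?")
# `prompt` would then be sent to any chat-completion API.
```

No classical program could answer the question itself; the deterministic part only does the trivial plumbing, which is exactly the inversion the post describes.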

Quoted tweet from Stephanie Zhan (@stephzhan), Apr 29:

> @karpathy and I are back! At @sequoia AI Ascent 2026. And a lot has changed. Last year, he coined "vibe coding". This year, he's never felt more behind as a programmer. The big shift: vibe coding raised the floor. Agentic engineering raises the ceiling. We talk about what it

[Video: 29:49]
