Introducing TRIBE v2: A Predictive Foundation Model Trained to Understand How the Human Brain Processes Complex Stimuli
- TRIBE v2 is trained on fMRI data from more than 700 volunteers, at 70x the resolution of comparable models
- It makes zero-shot predictions of brain activity for new individuals, languages, and tasks, with no additional experiments required
- We're open-sourcing the model, code, and an interactive demo to advance research at the intersection of neuroscience and AI
Research
March 26, 2026
2 minute read
Takeaways
- We're introducing TRIBE v2, our next-generation model that acts as a digital twin of human neural activity. It predicts how the brain responds to almost any sight or sound with unprecedented speed and accuracy, at 70x the resolution of comparable models, enabling neuroscientists and clinical researchers to test theories without requiring human subjects.
- We're releasing the model, codebase, paper, and an interactive demo to help researchers push the boundaries of neuroscience, apply brain insights to build better AI systems, and use computational simulation to accelerate breakthroughs in the treatment of neurological disorders.
Understanding how the human brain processes the world around us is one of the greatest open challenges in neuroscience. Breakthroughs here could transform how we understand and treat neurological conditions affecting hundreds of millions of people — and improve AI systems by directly guiding their development from neuroscientific principles.
Today, we're announcing TRIBE v2: our first AI model of human brain responses to sights, sounds, and language. Building on our award-winning Algonauts 2025 model, which was trained on low-resolution fMRI recordings from four individuals, TRIBE v2 leverages a massive dataset of more than 700 healthy volunteers who were presented with a wide variety of media, including images, podcasts, videos, and text. It reliably predicts high-resolution fMRI brain activity, enabling zero-shot predictions for new subjects, languages, and tasks, and consistently outperforms standard modeling approaches. By creating a digital model of the human brain, researchers can rapidly test hypotheses about its underlying functions without needing human subjects for every experiment.
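Models like this build on the classic voxelwise encoding-model paradigm from computational neuroscience: extract features from a stimulus, then learn a mapping from those features to measured brain responses, which can then be applied to unseen stimuli. The sketch below illustrates that general idea with ridge regression on synthetic data; the function names, shapes, and data are our own illustrative assumptions, not Meta's actual API or method.

```python
# Illustrative voxelwise encoding model: map stimulus features to
# simulated fMRI voxel responses with ridge regression.
import numpy as np

def fit_encoding_model(features, responses, alpha=1.0):
    """Fit ridge weights W mapping features -> responses.

    features:  (n_timepoints, n_features) stimulus features
    responses: (n_timepoints, n_voxels) measured voxel activity
    """
    n_feat = features.shape[1]
    # Closed-form ridge solution: W = (X^T X + alpha*I)^-1 X^T Y
    gram = features.T @ features + alpha * np.eye(n_feat)
    return np.linalg.solve(gram, features.T @ responses)

def predict_responses(features, weights):
    """Predict voxel responses for new stimulus features."""
    return features @ weights

# Synthetic data standing in for real stimulus embeddings and fMRI scans.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))                      # stimulus features
true_W = rng.normal(size=(16, 8))                   # ground-truth mapping
Y = X @ true_W + 0.01 * rng.normal(size=(200, 8))   # noisy voxel responses

W = fit_encoding_model(X, Y, alpha=0.1)
X_new = rng.normal(size=(10, 16))                   # unseen stimuli
Y_pred = predict_responses(X_new, W)
print(Y_pred.shape)  # → (10, 8)
```

In practice, a foundation model replaces the fixed linear map with a learned network over rich pretrained features, which is what enables generalization to new subjects and tasks; the linear version above is only the conceptual baseline such models are compared against.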
To accelerate the pace of neuroscience discovery and open up new avenues for clinical practice, we’re sharing a research paper, along with model weights and code, under a CC BY-NC license. We also invite everyone to explore TRIBE v2 on our demo website. By sharing this work, we hope to help accelerate neuroscience research that will unlock scientific and clinical breakthroughs for the greater good.
Explore the Demo | Read the Paper | Download the Code | Download the Model