---
title: "We’ve post trained a model on top of Qwen that achieves Pareto optimality on accuracy-cost curves. \n..."
source_name: "Aravind Srinivas(@AravSrinivas)"
original_url: "https://x.com/AravSrinivas/status/2047019688920756504"
canonical_url: "https://www.traeai.com/articles/79a15f5b-3d81-4d37-8350-34410cbe62a8"
content_type: "tweet"
language: "English"
score: 5
tags: []
published_at: "2026-04-22T18:28:20+00:00"
created_at: "2026-04-23T23:11:39.670915+00:00"
---

# We’ve post trained a model on top of Qwen that achieves Pareto optimality on accuracy-cost curves. 
...

Canonical URL: https://www.traeai.com/articles/79a15f5b-3d81-4d37-8350-34410cbe62a8
Original source: https://x.com/AravSrinivas/status/2047019688920756504

## Summary

traeai curates high-quality AI technical content for developers, researchers, and content teams, providing summaries, scoring, a trend radar, and one-click content generation.

## Key Takeaways

- Perplexity post-trained a model on top of Qwen that achieves Pareto optimality on accuracy-cost curves.
- The model handles search and tool calls simultaneously, unifying the tool-call router and summarization in a single model.
- It runs on Perplexity's own inference platform, already serves a significant share of daily traffic, and is more cost-efficient than GPT and Sonnet for production queries.

## Content


We’ve post-trained a model on top of Qwen that achieves Pareto optimality on accuracy-cost curves. Unlike our previous post-trained models, this model has been trained to be good at search and tool calls simultaneously, allowing us to unify the tool-call router and summarization together in one model. The resulting model performs better than GPT and Sonnet in terms of cost efficiency for serving daily Perplexity queries in production. The production model runs on our own inference platform. We’re already serving a significant chunk of our daily traffic with this model and intend to have it serve all default traffic pretty soon. More research to follow soon on models we’re training and deploying for Comet and Computer.
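As a concrete illustration of what "Pareto optimality on accuracy-cost curves" means here: a model is Pareto-optimal if no other candidate is at least as accurate at lower cost (or more accurate at the same cost). The sketch below is not Perplexity's code, and the model names and numbers are made up; it only demonstrates how a Pareto frontier over (cost, accuracy) points is computed.

```python
def pareto_frontier(models):
    """Return the models not dominated on (cost, accuracy).

    `models` is a list of (name, cost, accuracy) tuples. A model is
    dominated if some other model is no worse on both axes and strictly
    better on at least one (lower cost or higher accuracy).
    """
    frontier = []
    for name, cost, acc in models:
        dominated = any(
            c <= cost and a >= acc and (c < cost or a > acc)
            for _, c, a in models
        )
        if not dominated:
            frontier.append((name, cost, acc))
    return frontier

# Hypothetical candidates: (name, cost per 1k queries, accuracy).
models = [
    ("model-A", 1.0, 0.90),  # cheap and strong: on the frontier
    ("model-B", 4.0, 0.92),  # pricier but more accurate: on the frontier
    ("model-C", 3.0, 0.88),  # dominated by model-A (costlier, less accurate)
]
print(pareto_frontier(models))  # model-A and model-B survive
```

A model "on the frontier" in this sense is the claim the tweet makes: for its accuracy, nothing served is cheaper, and for its cost, nothing served is more accurate.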

Quoted tweet: Perplexity (@perplexity_ai), Apr 22

We've published new research on how we post-train models for accurate search-augmented answers. Our SFT + RL pipeline improves search, citation quality, instruction following, and efficiency. With Qwen models, we match or beat GPT models on factuality at a lower cost.

[Attached image](https://x.com/perplexity_ai/status/2047016400292839808/photo/1)
