Aravind Srinivas (@AravSrinivas)


We’ve post-trained a model on top of Qwen that achieves Pareto optimality on accuracy-cost curves. Unlike our previous post-trained models, this model has been trained to be good at search and tool calls simultaneously, allowing us to unify the tool-call router and summarization in one model. The resulting model is more cost-efficient than GPT and Sonnet for serving daily Perplexity queries in production. The production model runs on our own inference platform. We’re already serving a significant chunk of our daily traffic with this model and intend to have it serve all default traffic soon. More research to follow on the models we’re training and deploying for Comet and Computer.
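The "unified router and summarizer" idea means one model both decides when to call a tool (e.g. search) and writes the final answer from the results, instead of using two separate models. A minimal sketch of such a serving loop, where `MockUnifiedModel`, `Reply`, and `handle_query` are all illustrative names and not Perplexity's actual API:

```python
import json
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ToolCall:
    name: str
    args: dict

@dataclass
class Reply:
    text: str = ""
    tool_call: Optional[ToolCall] = None

class MockUnifiedModel:
    """Stand-in for a single post-trained model that both routes tool calls
    and summarizes retrieved results (illustrative only)."""
    def generate(self, messages, tools):
        # No tool result in context yet -> route the query to search;
        # otherwise -> summarize the retrieved snippets into an answer.
        has_tool_result = any(m["role"] == "tool" for m in messages)
        if not has_tool_result and "search" in tools:
            return Reply(tool_call=ToolCall("search",
                                            {"query": messages[0]["content"]}))
        snippets = [m["content"] for m in messages if m["role"] == "tool"]
        return Reply(text="Answer based on: " + "; ".join(snippets))

def handle_query(model, query: str, tools: dict) -> str:
    """One model handles both routing and answering in a single loop."""
    messages = [{"role": "user", "content": query}]
    while True:
        reply = model.generate(messages, tools=tools)
        if reply.tool_call is None:
            return reply.text  # model chose to answer directly
        result = tools[reply.tool_call.name](**reply.tool_call.args)
        messages.append({"role": "tool", "content": json.dumps(result)})
```

With two separate models, the router's output would have to be handed off to a summarizer; collapsing both roles into one model removes that handoff, which is where the cost-efficiency claim comes from.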

Quote


Perplexity

@perplexity_ai

Apr 22

We've published new research on how we post-train models for accurate search-augmented answers. Our SFT + RL pipeline improves search, citation quality, instruction following, and efficiency. With Qwen models, we match or beat GPT models on factuality at a lower cost.
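An RL stage like the one described needs a scalar reward for answer quality. A toy example of one such signal, citation coverage; the function name and the `[n]`-citation convention are assumptions for illustration, not Perplexity's published reward:

```python
import re

def citation_reward(answer: str) -> float:
    """Toy reward: fraction of sentences carrying a [n]-style citation.
    A real RL pipeline would combine signals like this with factuality
    and instruction-following checks."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", answer.strip()) if s]
    if not sentences:
        return 0.0
    cited = sum(1 for s in sentences if re.search(r"\[\d+\]", s))
    return cited / len(sentences)
```

A cited and an uncited sentence would score 0.5 under this metric, nudging the policy toward attaching sources to every claim.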

![Image 2: Image](https://x.com/perplexity_ai/status/2047016400292839808/photo/1)