Jeff Dean (@JeffDean)

Google Translate is turning 20! 🎉. There are 20 fun facts and tips in the thread below. Translate ...

AI Summary
  • In 2006, Google Translate first launched, using the largest 5-gram language model of its time.
  • In 2016, Google Translate moved from statistical machine translation to a deep-neural-network approach.
  • Most recently, Google Translate has further improved translation quality with Gemini models.

Outline

The core structure organized after an AI read-through.

  1. Jeff Dean marks Google Translate's 20th anniversary and shares some fun facts and tips.

  2. In 2006, Google Translate first launched, using a large-scale 5-gram language model.

  3. In 2016, Google Translate moved to a deep-neural-network approach, using Sequence-to-Sequence models and TPUs.

  4. Most recently, Google Translate has further improved translation quality with Gemini models.


#Google #MachineTranslation #DeepLearning



Google Translate is turning 20! 🎉 There are 20 fun facts and tips in the thread below. Translate is one of my favorite Google products because it brings us all closer together! I've been involved with a couple of things over the years.

The first was our deployment of the initial system in 2006, which provided a huge leap forward in quality because it used a much larger 5-gram language model trained on trillions of words of text (indeed, probably the first trillion-token language model training in the world; the paper has some nice graphs showing scaling-law-like quality improvement from scaling to more data/compute). See "Large Language Models in Machine Translation", Thorsten Brants, Ashok C. Popat, Peng Xu, Franz J. Och, and Jeffrey Dean, aclanthology.org/D07-1090/

The second major collaboration was in 2016, when we moved Translate over from a statistical machine translation approach to using deep neural networks. This approach relied on two key innovations. The first was Google's work on Sequence-to-Sequence models (arxiv.org/abs/1409.3215). The second was our development of TPUs, custom chips that improved the performance of inference for deep neural networks by 30-80X over existing CPUs and GPUs of the day (and reduced latency by 15-30X). This made launching compute-intensive language model services like Translate feasible for hundreds of millions of users. See "In-Datacenter Performance Analysis of a Tensor Processing Unit", Norman P. Jouppi et al., arxiv.org/abs/1704.04760

GNMT paper: "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation", Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V.
Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean, arxiv.org/abs/1609.08144

Most recently, we have advanced Translate further using Gemini models. Each of these advances relied on research that delivered major quality leaps over the status-quo translation approaches, bringing better quality and connectedness to all of our Translate users! 🎉
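The 2007 Brants et al. paper cited above introduced "Stupid Backoff", a deliberately simple scoring scheme that made trillion-token 5-gram models practical: use the raw relative frequency of the longest matching n-gram, and back off to shorter contexts with a fixed penalty (α = 0.4 in the paper) instead of computing normalized smoothed probabilities. A minimal sketch of the idea — the toy corpus and function names here are illustrative, not Google's production code:

```python
from collections import Counter

ALPHA = 0.4  # fixed backoff factor from Brants et al. (2007)

def train(corpus, order=5):
    """Count all n-grams up to `order` in a list of token lists."""
    counts = Counter()
    total = 0
    for sent in corpus:
        total += len(sent)
        for i in range(len(sent)):
            for n in range(1, order + 1):
                if i + n <= len(sent):
                    counts[tuple(sent[i:i + n])] += 1
    return counts, total

def stupid_backoff(counts, total, context, word):
    """Score `word` given `context`; a relative frequency with backoff,
    not a normalized probability (hence 'score', S, not P, in the paper)."""
    for start in range(len(context) + 1):
        ctx = tuple(context[start:])       # progressively shorter context
        ngram = ctx + (word,)
        if counts[ngram] > 0:
            denom = counts[ctx] if ctx else total  # unigram base case
            return (ALPHA ** start) * counts[ngram] / denom
    return 0.0  # word never seen at all

# Toy demo: "cat" always follows "the" in this corpus
counts, total = train([["the", "cat", "sat"], ["the", "cat", "ran"]], order=3)
score = stupid_backoff(counts, total, ("the",), "cat")  # 2/2 = 1.0
```

The appeal at trillion-token scale is that counting and a fixed α parallelize trivially (the paper builds the model with MapReduce), whereas classical smoothing like Kneser-Ney requires extra statistics per n-gram order.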

Quoted post: Google (@Google), Apr 28

![A collage featuring the text "20 years of Google Translate" alongside people using phones and a birthday cake. Illustrations show global communication in various languages near landmarks like the Eiffel Tower.](https://x.com/Google/status/2049194440951013600/photo/1)
