---
title: "Speeding Up AI: Bringing Google Colossus to PyTorch via GCSFS and Rapid Bucket"
source_name: "Google Developers Blog"
original_url: "https://developers.googleblog.com/speeding-up-ai-bringing-google-colossus-to-pytorch-via-gcsfs-and-rapid-bucket/"
canonical_url: "https://www.traeai.com/articles/4d2d6483-fe40-4931-ac0d-0385a07688df"
content_type: "article"
language: "en"
score: 9.2
tags: ["PyTorch","GCS","Colossus","fsspec","AI Infrastructure"]
published_at: null
created_at: "2026-04-30T02:48:34.840021+00:00"
---

# Speeding Up AI: Bringing Google Colossus to PyTorch via GCSFS and Rapid Bucket

Canonical URL: https://www.traeai.com/articles/4d2d6483-fe40-4931-ac0d-0385a07688df
Original source: https://developers.googleblog.com/speeding-up-ai-bringing-google-colossus-to-pytorch-via-gcsfs-and-rapid-bucket/

## Summary

Google has integrated the capabilities of its underlying Colossus file system with fsspec/gcsfs via bidirectional gRPC streaming, giving PyTorch a 23% training speedup on GCS Rapid Buckets with no code changes.

## Key Takeaways

- Rapid Bucket replaces REST with bidirectional gRPC, bringing Colossus's low latency (<1ms) and high throughput (15+ TiB/s) to the PyTorch ecosystem
- gcsfs now auto-detects the Rapid Bucket type, so PyTorch users benefit transparently while keeping their existing fsspec.open() calls
- Three optimizations (zonal co-location, direct connectivity, stateful streaming) eliminate cross-zone, connection, and authentication overhead, significantly raising GPU utilization

## Content

Published: April 29, 2026

Today, we are announcing a major performance boost for AI/ML workloads using the PyTorch ecosystem on Google Cloud. By integrating Rapid Storage, powered by [Google’s Colossus](https://cloud.google.com/blog/products/storage-data-transfer/a-peek-behind-colossus-googles-file-system?e=48754805) storage architecture, directly with **PyTorch** via the industry-standard `fsspec` interface, we are enabling researchers and developers to keep their GPUs busier than ever before.

## **The challenge: Keeping GPUs fed**

As model sizes grow, data loading and checkpointing often become the primary bottlenecks in training. Preparing data to train models involves fetching and processing terabytes to petabytes of data from remote storage systems such as object storage. Standard REST-based storage access can struggle to meet the extreme throughput and low-latency requirements of modern distributed training, wasting valuable GPU resources.

## **Rapid Bucket: Rapid Storage via bi-di gRPC**

Our new [**Rapid Bucket**](https://docs.cloud.google.com/storage/docs/rapid/rapid-bucket) solution provides high-performance object storage in dedicated zonal buckets. By bypassing legacy REST APIs and utilizing persistent gRPC bidirectional streams, we’ve brought the power of Colossus, the stateful file-system protocols that power YouTube and Google Search, directly to the PyTorch ecosystem.

### **Key performance metrics of Rapid Storage**

*   **Extreme Throughput:** **15+ TiB/s** aggregate throughput.
*   **Ultra-Low Latency:** <1ms for random reads and append writes.
*   **High QPS:** Rapid Bucket provides 20M+ QPS.

## **fsspec: PyTorch’s Pythonic file interface**

`fsspec` is the pervasive Pythonic interface for file systems in the PyTorch ecosystem. It is already used for:

*   **Data preparation:** Dask, Pandas, Hugging Face Datasets, Ray Data
*   **Checkpoints:** PyTorch Lightning, Torch.dist, Weights & Biases
*   **Inference:** vLLM


fsspec has backend implementations for many different storage systems, all exposed through a single layer, which eliminates the need to write backend-specific code. By integrating Rapid Storage with `gcsfs` (the Google Cloud Storage implementation of fsspec), developers can leverage the speed gains provided by Rapid with a simple `fsspec.open()` call — no complex code rewrites required.
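To illustrate the single-interface idea, here is a minimal sketch: the same `fsspec.open()` call works across backends, with only the URL scheme changing. The demo uses fsspec's in-memory backend so it is self-contained; with `gcsfs` installed, swapping the URL for a `gs://` path (bucket name hypothetical) is the only change needed.

```python
import fsspec

# Write through the generic fsspec interface; "memory://" selects the
# in-memory backend. With gcsfs installed, a "gs://bucket/path" URL
# (bucket name hypothetical) routes to Google Cloud Storage instead.
with fsspec.open("memory://demo/data.txt", "wb") as f:
    f.write(b"hello")

# Read it back through the same interface.
with fsspec.open("memory://demo/data.txt", "rb") as f:
    print(f.read())  # b'hello'
```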

## Under the hood: Leveraging Colossus

To achieve a performance boost with Rapid Buckets, we optimized the entire data path:

1.   **Stateful gRPC-based streaming:** gRPC bidirectional streaming keeps the connection alive, minimizing per-operation overhead such as connection setup, authentication, and metadata exchange, and enabling efficient, stateful data exchange for multiple reads or appends within a single object.
2.   **Direct path:** Google Cloud Storage (GCS) Rapid Bucket uses [direct connectivity](https://docs.cloud.google.com/storage/docs/direct-connectivity) for its gRPC bidirectional streaming APIs (BidiReadObject, BidiWriteObject), connecting clients directly to the underlying Colossus files for maximum performance. Non-Rapid traffic to GCS typically traverses more network hops, so read/write latencies over Rapid are significantly lower. For more details, see [how Rapid Storage works internally](https://cloud.google.com/blog/products/storage-data-transfer/how-the-colossus-stateful-protocol-benefits-rapid-storage?e=48754805).
3.   **Zonal co-location:** By placing storage in the same zone as your compute (e.g., `us-central1-a`), we eliminate cross-zone latency. Before Rapid Buckets, data in a regional bucket and compute (accelerators) could sit in different zones, so data access incurred cross-zone latency.
4.   **No-op user migration:** We preserved the existing `fsspec` API while upgrading internal traffic from HTTP to bidirectional gRPC for Rapid Buckets. By adding bucket-type auto-detection to gcsfs, PyTorch and other `fsspec` clients transparently utilize Rapid with zero manual configuration.
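The benefit of the first optimization, stateful streaming, can be sketched with a toy cost model (all numbers here are hypothetical, purely illustrative, and not measured values): a per-request protocol pays the setup cost on every operation, while a persistent stream pays it once.

```python
# Toy cost model: per-request protocols pay setup (connect + auth + metadata)
# on every operation; a persistent bidirectional stream pays it once.
SETUP_MS = 5.0   # hypothetical one-time connection/auth/metadata cost
READ_MS = 0.5    # hypothetical cost to serve one read

def per_request_total(n_reads: int) -> float:
    """Total time when every read re-establishes the connection."""
    return n_reads * (SETUP_MS + READ_MS)

def stateful_stream_total(n_reads: int) -> float:
    """Total time when one persistent stream serves all reads."""
    return SETUP_MS + n_reads * READ_MS

print(per_request_total(1000))      # 5500.0
print(stateful_stream_total(1000))  # 505.0
```

Under these toy numbers, the per-operation overhead is amortized to near zero as the number of reads on a stream grows, which is why stateful streaming matters most for workloads issuing many small reads or appends.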

## Results

A dataset of 134M rows totaling around 451GB was loaded onto 16 GKE nodes, each containing eight A4 GPUs. Training was conducted over 100 steps, with a checkpoint after every 25 steps using PyTorch Lightning. We benchmarked total training time, including data-loading time, and observed a **performance gain of 23% using Rapid Bucket compared with a standard regional bucket.**
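For context, the checkpoint cadence used in this benchmark (100 steps, a checkpoint every 25) can be sketched as follows. This is not the benchmark code: it uses fsspec's in-memory backend and a trivial state dict so it is self-contained, where a real run would write a `gs://` Rapid Bucket path through the same interface.

```python
import pickle
import fsspec

# Hypothetical training-loop skeleton: 100 steps, checkpoint every 25.
state = {"step": 0}
checkpointed_steps = []
for step in range(1, 101):
    state["step"] = step  # stand-in for a real optimization step
    if step % 25 == 0:
        # A real run would target e.g. a "gs://" path in a Rapid Bucket.
        with fsspec.open(f"memory://ckpts/ckpt-{step}.pkl", "wb") as f:
            pickle.dump(state, f)
        checkpointed_steps.append(step)

print(checkpointed_steps)  # [25, 50, 75, 100]
```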


Microbenchmarking — that is, measuring the performance of a building block like I/O or resource usage — confirms these gains. Throughput improved by 4.8x for reads (both sequential and random) and 2.8x for writes. These tests used 16MB IO sizes across 48 processes. You can find more details at [GCSFS-performance-benchmarks](https://github.com/fsspec/gcsfs/blob/main/docs/source/rapid_storage_support.rst#performance-benchmarks).

## Get started

Getting started with [GCSFS](https://github.com/fsspec/gcsfs) on Rapid Bucket is easy. Your existing code and scripts remain the same. You just need to change the bucket to a [Rapid Bucket](https://docs.cloud.google.com/storage/docs/rapid/rapid-bucket) to take advantage of the performance boost.

**To install:**

Rapid Bucket integration is available starting with version [2026.3.0](https://github.com/fsspec/gcsfs/releases/tag/2026.3.0).

```
pip install gcsfs
```
**Code sample to read/write from GCS Rapid:**

```
import gcsfs

# Initialize the filesystem
fs = gcsfs.GCSFileSystem()

# Writing to a Rapid bucket
with fs.open('my-zonal-rapid-bucket/data/checkpoint.pt', 'wb') as f:
    f.write(b"model data...")

# Appending to an existing object (Native Rapid feature)
with fs.open('my-zonal-rapid-bucket/data/checkpoint.pt', 'ab') as f:
    f.write(b"appended data...")
```
