About Google Cloud Storage
Object storage at exabyte scale, built and run by Google Cloud.
Google Cloud Storage is the object storage service that Google made generally available in 2010 as part of Google Cloud Platform. It stores objects inside buckets, each object addressed by a unique name within its bucket, and the design target is straightforward: store any amount of unstructured data, reach it from anywhere, pay for what you use. Google publishes an annual durability target of eleven nines (99.999999999%) across every storage class, and an availability SLA for Standard storage that ranges from 99.95% on multi-region and dual-region buckets down to 99.9% on single-region buckets.
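A quick way to read the eleven-nines figure is as a per-object annual loss probability. The sketch below is back-of-envelope arithmetic only (the one-billion-object fleet size is an illustrative assumption, not a Google number):

```python
# Back-of-envelope reading of the eleven-nines annual durability target.
durability = 0.99999999999          # Google's published annual target
p_loss = 1 - durability             # per-object annual loss probability, ~1e-11
objects = 1_000_000_000             # hypothetical fleet of a billion objects
expected_losses = objects * p_loss  # linearity of expectation

print(f"per-object annual loss probability: {p_loss:.0e}")
print(f"expected annual losses across {objects:,} objects: {expected_losses:.2f}")
```

In other words, even a billion-object fleet expects on the order of one-hundredth of an object lost per year, which is why durability rarely drives class selection; cost and access frequency do.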
Around the core PUT and GET surface sits a stack of features that matter for analytics.

Four storage classes cover the access-frequency spectrum: Standard for hot data; Nearline, with a 30-day minimum storage duration, for roughly monthly access; Coldline, at 90 days, for quarterly access; and Archive, at 365 days, for yearly access or compliance retention.

Three location types control placement: multi-regions such as the EU multi-region, dual-region pairs, and single regions, including europe-west1 in St. Ghislain, Belgium and europe-west4 in the Netherlands.

Data management and governance round out the surface. Object Lifecycle Management moves objects between classes or deletes them automatically based on age, prefix, version count, or custom-time metadata; soft delete is on by default with a seven-day retention window; Object Versioning, retention policies, and Bucket Lock provide WORM compliance; and VPC Service Controls, IAM, customer-managed encryption keys, and uniform bucket-level access handle access governance.

BigQuery reads GCS buckets directly through external and BigLake tables, with BigLake adding access delegation and metadata caching so the warehouse layer does not need separate bucket permissions, and BigQuery Omni extends the same query surface to data that still lives in AWS S3 or Azure Blob Storage.
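The lifecycle rules and storage-class minimums above compose naturally into a single aging policy. A minimal sketch, assuming the standard lifecycle JSON shape that tools like `gcloud storage buckets update --lifecycle-file=...` accept, with transition ages chosen to match each class's minimum storage duration (the version-cleanup threshold of 3 is an illustrative choice):

```python
import json

# Sketch of an Object Lifecycle Management configuration: each rule pairs
# an action with a condition. Ages mirror the class minimums from the text
# (Nearline 30d, Coldline 90d, Archive 365d).
lifecycle = {
    "rule": [
        # After 30 days, demote to Nearline (monthly-access tier).
        {"action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
         "condition": {"age": 30}},
        # After 90 days, demote to Coldline (quarterly-access tier).
        {"action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
         "condition": {"age": 90}},
        # After 365 days, demote to Archive (yearly access / retention).
        {"action": {"type": "SetStorageClass", "storageClass": "ARCHIVE"},
         "condition": {"age": 365}},
        # With Object Versioning on, drop noncurrent versions once three
        # newer versions exist (threshold is an illustrative assumption).
        {"action": {"type": "Delete"},
         "condition": {"isLive": False, "numNewerVersions": 3}},
    ]
}

print(json.dumps(lifecycle, indent=2))
```

Transitioning before a class's minimum storage duration elapses does not fail, but early deletion or re-transition incurs charges for the remainder of the minimum, so aligning rule ages with the 30/90/365-day floors keeps the policy cost-clean.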