About AWS S3
Object storage at exabyte scale, built and run by AWS.
Amazon S3 is the object storage service AWS launched in 2006 and has run continuously since. It stores objects in buckets, each object addressed by a key, and the design target is simple: store any amount of data, retrieve it from anywhere on the internet, and pay only for what you use. AWS publishes a durability design target of eleven nines (99.999999999%) and a default availability target of 99.99% for S3 Standard, with data replicated across multiple devices in multiple Availability Zones within a region.
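The bucket-plus-key addressing above maps directly onto S3's virtual-hosted-style URLs. A minimal sketch, assuming a hypothetical bucket, key, and region (none of these name real endpoints):

```python
from urllib.parse import quote

def object_url(bucket: str, key: str, region: str = "us-east-1") -> str:
    """Build the virtual-hosted-style URL for an S3 object.

    Illustrative helper: the bucket, key, and region values passed in
    below are assumptions, not real resources. The key is
    percent-encoded, but "/" is left alone so key prefixes still read
    as path segments.
    """
    return f"https://{bucket}.s3.{region}.amazonaws.com/{quote(key, safe='/')}"

# Any amount of data, addressed by bucket + key:
print(object_url("analytics-lake", "raw/2024/events 01.json", region="eu-west-1"))
# -> https://analytics-lake.s3.eu-west-1.amazonaws.com/raw/2024/events%2001.json
```

Keys are flat strings rather than a real directory tree; the `/`-separated prefixes are a naming convention that tools like the S3 console render as folders.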
The service today holds hundreds of exabytes of customer data and handles more than 200 million requests per second on average, according to the AWS S3 product page. Around the core PUT and GET surface sits a stack of features that matter for analytics: storage classes ranging from S3 Standard for hot data through Intelligent-Tiering, Standard-IA, Glacier Instant Retrieval, Glacier Flexible Retrieval, and Glacier Deep Archive for colder tiers; Express One Zone for single-digit-millisecond latency; lifecycle rules that move objects between classes automatically; versioning and Object Lock for recovery and WORM compliance; replication across regions and accounts; and IAM, bucket policies, Block Public Access, and SSE encryption for governance. S3 Tables, the managed Apache Iceberg surface AWS added more recently, lets Athena, Redshift, EMR, Snowflake, Spark, Trino, and DuckDB read the same lakehouse tables through the Iceberg REST Catalog without each engine maintaining its own copy.
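The lifecycle rules mentioned above are expressed as a rules document attached to the bucket. A sketch of one rule that tiers a log prefix down through colder classes and eventually expires it; the bucket name, prefix, and day counts are illustrative assumptions, and in practice the dict would be sent with boto3's `put_bucket_lifecycle_configuration` call:

```python
# Sketch of an S3 lifecycle configuration (all values illustrative).
# With boto3 this dict would be applied as:
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="analytics-lake",          # hypothetical bucket
#       LifecycleConfiguration=lifecycle,
#   )
lifecycle = {
    "Rules": [
        {
            "ID": "tier-and-expire-logs",      # hypothetical rule name
            "Filter": {"Prefix": "logs/"},     # rule applies only to this key prefix
            "Status": "Enabled",
            "Transitions": [
                # Objects move down the class ladder as they age:
                {"Days": 30, "StorageClass": "STANDARD_IA"},  # infrequent access
                {"Days": 90, "StorageClass": "GLACIER"},      # cold archive
            ],
            "Expiration": {"Days": 365},       # delete a year after creation
        }
    ]
}

rule = lifecycle["Rules"][0]
print(rule["Transitions"][1]["StorageClass"])  # -> GLACIER
```

Once attached, S3 evaluates the rules itself; no client-side scheduler is needed to move or delete the objects.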