AWS scales to billions, trillions, quadrillions on Prime Day
July 7, 2021
Every once in a while, the folks at AWS share metrics that illustrate the sheer scale of AWS cloud databases and other infrastructure. On Prime Day, one of Amazon's busiest times of the year, the usage numbers tell a story of extreme scalability.
Prime Day is a misnomer: the event actually ran for 66 hours, beginning on June 21, so the following stats represent nearly three days of workload. But they're impressive no matter how you slice them.
Here are the data points shared by AWS evangelist Jeff Barr in a blog post.
- Amazon Fulfillment Technologies used 3,715 instances of a PostgreSQL-compatible version of Amazon's Aurora database to process 233 billion transactions, store 1,595 terabytes of data, and transfer 615 terabytes. (For comparison, Amazon used 1,900 Aurora instances to process 148 billion transactions in 2019.)
- Amazon's DynamoDB handled trillions of API calls, peaking at 89.2 million requests per second. DynamoDB, a NoSQL database, underpins Amazon's website, fulfillment centers, and Alexa virtual assistant. (For a sense of what a single one of those calls looks like, see the sketch after this list.)
- For data storage, AWS added 159 petabytes of EBS block storage in advance of Prime Day. Each day, EBS handled 11.1 trillion requests and transferred 614 petabytes of data.
- Other stats: Amazon's CloudFront content delivery network handled more than 600 billion HTTP requests, while its Simple Queue Service peaked at 47.7 million messages per second.
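To put those request counts in concrete terms, here is a minimal Python sketch of a single DynamoDB write and a single SQS send using boto3, the AWS SDK for Python. The region, table name, queue URL, and item shape are hypothetical illustrations, not details from Barr's post.

```python
import boto3

# Hypothetical clients; region, table, and queue names are assumptions.
dynamodb = boto3.client("dynamodb", region_name="us-east-1")
sqs = boto3.client("sqs", region_name="us-east-1")

# One DynamoDB write: a single API call of the kind counted in the
# trillions on Prime Day. Items use typed attribute values.
dynamodb.put_item(
    TableName="orders",  # hypothetical table
    Item={
        "order_id": {"S": "1234-5678"},
        "status": {"S": "PENDING"},
    },
)

# One SQS send: a single message of the kind that peaked at
# 47.7 million per second.
sqs.send_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/orders",  # hypothetical queue
    MessageBody='{"order_id": "1234-5678", "event": "created"}',
)
```

Each of these calls is trivial on its own; the remarkable part is sustaining tens of millions of them per second.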
I share these eye-opening metrics because they illustrate two of the major advantages of cloud databases: scale and elasticity.
I also discussed database scalability in a recent Cloud Wars Live podcast.
The bottom line is that database scalability will become increasingly important to all companies as their data estates grow over time from terabytes to petabytes to (someday) exabytes. There's much to be learned from how AWS does it.
John Foley, Editor