Monday, December 9, 2024

Six exabytes in production, powered by multidimensional scalability

Scality RING now manages a groundbreaking six exabytes of customer storage capacity, equivalent to 6,000 petabytes. This achievement represents the usable capacity provided to our customers, which is distinct from the much larger raw disk capacity.

To estimate the raw capacity behind that number, we sum the shipped disk capacities of the storage servers managed by RING, aggregated across our hardware partners, including HPE, Cisco, SuperMicro, Lenovo, Dell, Gigabyte and other popular vendors in countries and regions such as Japan and Southeast Asia. Given RING’s erasure coding (EC) and replication data protection mechanisms, raw capacity averages 1.5 to 2 times usable capacity, which puts an estimated 10 exabytes of raw storage capacity on the servers managed by RING.
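As a quick sanity check, the arithmetic can be reproduced in a few lines of Python; the 1.5 to 2x overhead range comes from the paragraph above, and rounding to roughly 10 EB is the article’s own estimate:

```python
# Back-of-the-envelope conversion from usable to raw capacity.
usable_eb = 6.0                          # usable capacity managed by RING, in exabytes
overhead_low, overhead_high = 1.5, 2.0   # typical EC/replication raw-to-usable overhead

raw_low = usable_eb * overhead_low       # 9.0 EB
raw_high = usable_eb * overhead_high     # 12.0 EB
print(f"Estimated raw capacity: {raw_low:.0f}-{raw_high:.0f} EB (roughly 10 EB)")
```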

Scality RING use cases and client examples

For years, RING has been deployed as the storage infrastructure for large cloud service providers and enterprise private cloud environments, supporting some truly massive S3 object and file system workloads. 

A few examples include: 

  • A U.S. service provider cloud: A single RING currently stores 300 billion objects and is growing. This massive-scale RING is also deployed across three data centers in a “geo-stretched” model to provide site failure tolerance for ultra-high availability.
  • European bank private cloud: A single RING serves over 1,200 concurrent applications within this enterprise cloud environment. That many applications hitting the system creates a highly unpredictable, demanding mixed workload that only an exceptionally flexible storage architecture can manage.
  • A travel services provider ingests one petabyte of new data per day into RING while analytics workloads simultaneously access the same data.
  • A U.S. bank’s 100-plus petabyte data lake: RING enables concurrent ingest of hundreds of terabytes of new data at each of two data centers, while being accessed by machine learning and analytics applications and while asynchronously replicating data between the two sites. This enables an active/active disaster recovery solution.
  • A large media publisher in Asia manages nearly 30 petabytes of videos on RING’s scale-out file system (SOFS) for streaming access by hundreds of thousands of online subscribers. To support this level of concurrency, access is provided through over 80 file system endpoints on a common pool of data.

Understanding RING’s special sauce: Multidimensional scalability in a single system

What fundamentally makes this possible is RING’s distributed, scale-out storage architecture. Scale-out storage systems come in many forms and varieties, and nearly every vendor claims the ability to scale storage capacity.

To make our solution flexible and enable our customers to deploy solutions such as those described above, Scality RING takes scalability to new levels by enabling nine dimensions of scale-out in a single system:

  1. Scalability of capacity – from hundreds of terabytes to exabytes, accomplished by adding disks, servers, racks and data centers, all while remaining 100% online and available.
  2. Scalability of metadata (i.e. the namespace catalog and index) – independent of capacity.
  3. Scalability of S3 objects – the number of objects per S3 bucket can grow to hundreds of billions.
  4. Scalability of S3 buckets – the number of S3 buckets can grow from one to hundreds of millions.
  5. Scalability of authentication requests – from thousands to millions of S3 authenticated transactions per second.
  6. Latency at scale – consistent response times maintained even at massive scale.
  7. Scalability of throughput – from gigabytes per second to terabytes per second for streaming data in and out.
  8. Independent scalability of storage compute and capacity – through a decoupled (disaggregated) architecture, the system can scale S3 storage compute performance resources independent from capacity as needed for the workload.
  9. Scalability of logging, metrics reporting and monitoring – in a large deployment, RING can manage logs that may reach petabyte scale over time and generate massive volumes of monitoring metrics and key performance indicators.

It takes many years of working closely with customers to understand the many facets of scaling that their workloads require. While some of this is predictable ahead of time, many demands on data storage can only be understood at the time of deployment.

For example, some new cloud-native applications create a billion objects in a single bucket; others create a bucket per user for a cloud service designed to serve millions of users. Neither approach fits known best practices for application and storage interactions, and yet both occur routinely. In a multi-tenant cloud, delivering data storage for hundreds to thousands of simultaneous workloads produces system resource spikes for fundamental tasks such as authenticating user requests, maintaining utilization metrics for user accounts, and tracking and sending health status alerts and key performance indicators for systems monitoring.
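To illustrate, here is a minimal sketch of the bucket-per-user pattern against an S3-compatible endpoint using standard boto3 calls. The endpoint URL and the per-user naming scheme are hypothetical, not RING-specific:

```python
# A minimal sketch (assumed names, not Scality code) of the "bucket per user"
# pattern against an S3-compatible endpoint.
import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.example.internal")  # placeholder URL

def put_user_object(user_id: str, key: str, body: bytes) -> None:
    bucket = f"user-{user_id}"  # hypothetical per-user naming scheme
    try:
        s3.create_bucket(Bucket=bucket)
    except s3.exceptions.BucketAlreadyOwnedByYou:
        pass  # bucket already provisioned for this user
    s3.put_object(Bucket=bucket, Key=key, Body=body)
```

With millions of users, this pattern stresses the number of buckets rather than the number of objects per bucket, which is exactly the kind of independent scaling dimension listed above.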

Exabyte-scale deployments: Setting the trend for next-generation AI data lakes

We also now see a growing number of single-system, exabyte-scale deployments for large cloud services and AI data lakes. We believe RING is the one storage system that can handle this level of complexity and scale, thanks to many aspects of the architecture, including:

  • Super-efficient disk usage from disk-based containers that avoid the fragmentation caused by small files, allowing RING to run at near-full capacity while operating far more effectively than other systems.
  • Aggressive use of flash for both metadata and internal system indexes to shield the disk drives from a high percentage of I/O operations.
  • Ability to adapt data protection mechanisms where they are most efficient — multi-copy replication for small files and erasure coding for large files, with a tunable threshold size.
  • Variable parity levels in erasure coding to optimize for data size distributions and deployment topologies (single site and multi-site). For example, RING can be deployed across three data centers in a synchronous “geo-stretched” model to provide full data center failure tolerance at much less than 2x space overhead, whereas many other systems would require three full copies of data, one per site, for this level of resiliency and high availability. A back-of-the-envelope comparison follows below.
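To make the overhead comparison concrete, here is a minimal sketch of the space arithmetic. The EC(6+3) layout with three chunks per site is an illustrative assumption, not RING’s actual parity configuration:

```python
# Space overhead: raw bytes stored per usable byte, for each protection scheme.
def replication_overhead(copies: int) -> float:
    return float(copies)

def ec_overhead(data_chunks: int, parity_chunks: int) -> float:
    return (data_chunks + parity_chunks) / data_chunks

# Three full copies, one per data center, tolerate a full site failure:
print(replication_overhead(3))  # 3.0x

# Assumed geo-stretched EC(6+3), three chunks per site: losing one site
# removes 3 chunks and the remaining 6 still reconstruct the data.
print(ec_overhead(6, 3))        # 1.5x, well under 2x for the same site tolerance
```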

All of these capabilities have been refined over years of working with the world’s largest enterprises, cloud service providers and government agencies, building storage infrastructure on Scality RING to solve their truly massive-scale data challenges.

Want to learn more about how Scality RING can transform your data management? Contact us today.
