Wednesday, May 13, 2026

Scality ADI: Why enterprise AI needs a new data infrastructure operating model

Today we announced Scality ADI (Autonomous Data Infrastructure), and I want to use this post to go beyond the press release and explain what we built and why.

Here is the short version: AI, cyber resilience, and sovereignty are converging at the data layer, and the storage architectures most enterprises rely on were not designed for that convergence. 

Scality ADI is our answer to that challenge. It provides a new operating model that autonomously aligns the right storage media, performance, and protection to each workload — from GPU-speed flash to deep archive — all on a disaggregated architecture, under one platform and one namespace.

The phrase “new operating model” deserves some unpacking. So let me walk through the problem as we see it, what Scality ADI is in concrete technical terms, and why we think infrastructure teams managing data at multi-petabyte to exabyte scale need a new operating model.

AI is not one workload

The storage conversation has shifted significantly. A few years ago, AI meant model training, which required large sequential IO to read massive datasets. That was a throughput problem (and a well-understood one). Today the picture is fundamentally different.

Training is still there, but it now coexists with inference, retrieval-augmented generation (RAG), video search, distributed inference, KV cache, multimodal pipelines, and long-term data retention for governance and auditability. Each of these workloads places different demands on storage throughput, latency, concurrency, protection, and cost. 

As mentioned above, a model training run needs extreme sequential bandwidth. A RAG application needs low-latency random reads against small data objects. A KV cache needs sub-millisecond access on a hot working set. A regulatory archive needs immutable retention at the lowest possible cost and power draw.

Solving all of that with an all-flash storage system alone or solving each one with a separate platform (which happens all the time) is where most enterprises get stuck. And that is the gap Scality ADI is designed to close.

The silo problem is getting worse, not better

Most large enterprises have accumulated their storage infrastructure the way cities accumulate roads: one project at a time, each solving an immediate need, each leaving behind an operational footprint that somebody must maintain.

The result is a growing collection of platforms: one for high-performance workloads, another for general-purpose file and object, another for backup targets, another for archive. Each has its own management interface, upgrade cycle, protection policy, cost model, and operational team. When data needs to move between them, which is an ongoing requirement in an AI-driven enterprise, the movement itself becomes a whole new project.

AI makes this worse because data does not stay in one temperature state. A dataset may begin as active training data on high-performance flash, become a governed reference set on capacity-optimized storage, later feed a RAG or inference pipeline, and eventually move into long-term retention on tape or cold cloud. If each phase lives on a different platform, you are not managing a data lifecycle. You are managing a data migration backlog that will become a costly burden.

And you are doing it with the same team. Storage headcount is not growing at the rate data is. Infrastructure teams are being asked to scale capacity, scale performance, strengthen protection, and demonstrate compliance, but without scaling the people who keep it all running.

Why we built something different

Enterprise buyers are being asked to deliver AI performance, cyber resilience, sovereignty, and efficiency at a scale the old storage model was never built to handle. We have heard a version of this from customers consistently over the last two years: the architecture itself is breaking, not because their infrastructure is old, but because the demands on it have shifted in ways no one originally designed for. 

The scale we operate at is not theoretical. Scality manages over 12 exabytes of customer data in production today, with our largest deployments at exabyte scale and failure domains (single RINGs) of 250PB usable capacity. Over six trillion objects sit on Scality infrastructure across customers in 70 countries, including 10 of the top 20 global telcos and CSPs, 7 of the 15 largest banks, and government agencies running hundreds of petabytes of sensitive workloads. Our customer satisfaction, as measured by a Net Promoter Score (NPS) of 85 in 2025, is industry-leading.

That position gave us a choice. We could have responded the way most of our industry has, with another point storage product: a faster one, a cheaper one, a more secure one. We chose differently. 

What enterprises need is not simply a better storage product. They need a new operating model for data infrastructure. One that aligns the right performance, protection, and economics to each stage of the data lifecycle. One that brings AI performance, cyber resilience, and sovereign control into a single story rather than three separate platforms. One that scales without forcing customers through another disruptive refresh cycle.

That is the platform that we built.

Introducing Scality ADI

That platform is Scality ADI: a single, integrated software appliance built on Scality’s proven RING foundation, designed to deliver this operating model across the full data lifecycle, from GPU-speed flash to deep archive.

What Scality ADI actually is

Let me be precise about the architecture, because Scality ADI is not a rebrand.

  • Scality ADI is the platform layer that adds what Scality RING and ScalityOS alone do not provide: AI-assisted, autonomous operations and intelligence through Scality Guardian, AI-extensible operations through MCP-enabled workflows, cross-temperature media flexibility from NVMe flash to tape and cloud, policy-driven lifecycle management, and outcome-based service commitments.
  • Scality Guardian is the operational intelligence engine inside Scality ADI. Scality Guardian observes system state and surfaces workload-aligned insights across predictive maintenance, platform health, power consumption, and cyberthreat detection. It’s trained on Scality’s own operational cases, with recommendations grounded in real-world infrastructure patterns, not generic AI responses. Scality Guardian is read-only by design: it informs decisions, it does not execute them.
  • Scality RING is our distributed object storage engine, built on a disaggregated architecture since its inception. It has been in production for well over a decade at multi-petabyte to exabyte scale across some of the most demanding environments in the world. Scality RING is fully proven, and it remains the core of what we do.
  • ScalityOS is the hardened, standardized runtime that delivers a consistent appliance-like operational experience across nodes, sites, and software versions. It is what makes the solution deployable and manageable as a production platform rather than a collection of components.
  • Outcome-based customer experience: Scality ADI is offered through a new outcome-based commercial model, with SLA commitments aligned to the results we believe our customers care about most, including availability, throughput, protection posture, and service guarantees. Our customer service and support team is dedicated to a relentless pursuit of customer satisfaction. Our premium Scale Care Services provide an extra level of personalized support for enterprises with mission-critical workloads.

Using an automotive analogy, think of Scality RING as the engine, ScalityOS as the chassis, Scality Guardian as the dashboard and electronics, and Scality ADI as the complete vehicle — delivered through a customer experience aligned to business outcomes. 

Existing Scality RING customers carry forward their data, their investment, and their operational history. What changes is what the platform can do around them.

One namespace from GPU-speed data to deep archive

This is one of the most important things ADI delivers, and it is worth explaining clearly.

Scality ADI spans multiple storage media classes under a single operational model and a single namespace.

In practice, that means NVMe SSD for GPU-direct workloads at sub-5-microsecond latency, QLC and nearline flash for high-throughput (TB/s) data preparation, HDD for capacity-optimized workloads like RAG and AI data lakes, and tape or cold cloud storage for long-term archive and retention.

Policy-driven lifecycle management determines where data sits and when it moves. 

The policies are defined by the customer, not by opaque automation. These policies reflect the actual requirements of each workload: performance needs, protection posture, cost targets, power profile. 
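To make the idea concrete, here is a minimal sketch of how a customer-defined placement policy could pair workload requirements with a media tier. The tier names, latency and cost figures, and the `Policy`/`place` model are illustrative assumptions for this post, not Scality ADI’s actual policy language:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical media tiers with illustrative latency and relative cost figures.
TIERS = {
    "nvme": {"latency_us": 5,      "relative_cost": 10.0},
    "qlc":  {"latency_us": 100,    "relative_cost": 4.0},
    "hdd":  {"latency_us": 10_000, "relative_cost": 1.0},
    "tape": {"latency_us": None,   "relative_cost": 0.1},  # offline access
}

@dataclass
class Policy:
    name: str
    max_latency_us: Optional[int]  # None = no latency requirement
    days_since_access: int         # move data colder after this age

def place(policy: Policy, age_days: int) -> str:
    """Pick the cheapest tier that still meets the policy's latency need."""
    if age_days >= policy.days_since_access or policy.max_latency_us is None:
        return "tape"  # retention phase: cheapest, lowest-power tier
    candidates = [
        (tier, spec["relative_cost"]) for tier, spec in TIERS.items()
        if spec["latency_us"] is not None
        and spec["latency_us"] <= policy.max_latency_us
    ]
    return min(candidates, key=lambda c: c[1])[0]

training = Policy("gpu-training", max_latency_us=5, days_since_access=30)
rag      = Policy("rag-corpus",   max_latency_us=10_000, days_since_access=365)

print(place(training, age_days=3))  # nvme: active training set stays on flash
print(place(rag, age_days=90))      # hdd: RAG corpus on capacity-optimized media
print(place(rag, age_days=400))     # tape: past the retention threshold
```

The point of the sketch is the shape of the decision, not the numbers: the customer states requirements per workload, and placement follows from them deterministically and auditably.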

So, to be clear: Scality ADI is not a full self-driving vehicle (to extend the analogy above), but instead provides an explainable, auditable operating model with autonomous execution guided by customer-defined policies.

The goal is not to run everything on the fastest media. The goal is to place data where it delivers the right combination of performance, protection, economics, and energy efficiency for the workload it serves. 

For AI data in particular, that placement changes over time. A dataset that requires extreme throughput during training does not need the same storage profile six months later when it is serving as a reference corpus for a RAG pipeline or sitting in governed retention. Scality ADI manages that lifecycle without forcing teams to stitch together separate platforms or move data manually between silos.

Autonomous operations with humans in control

Autonomous infrastructure is a phrase the industry has used loosely, and it is worth saying plainly what we mean by it.

In Scality ADI, autonomous means operational intelligence with bounded, policy-governed execution. Not a black box. Not self-driving. 

The architecture has three separate parts. Scality Guardian is the AI-assisted intelligence layer. As described above, it observes system state and surfaces workload-aligned insights across predictive maintenance, platform health, power consumption, and cyberthreat detection, with recommendations grounded in Scality’s own operational cases rather than generic AI responses. By design, Guardian is read-only: it informs decisions, it does not execute them.

Agentic operability is where action happens. Through MCP (Model Context Protocol), customers connect their own AI tools into Scality ADI’s operational workflows. That AI can trigger account creation, respond to alerts, manage quotas, or execute any task an operator could perform in the UI, governed by the customer’s own policies.

The boundary holds it together. Whether a human or the customer’s AI is driving, every action executes within auditable policy bounds. Nothing happens without authorization.
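A toy sketch of that boundary, using hypothetical role and action names (this is not Scality ADI’s API): whether the caller is a human operator or the customer’s AI agent, every request passes the same policy check and leaves an audit record, including when it is denied.

```python
import datetime

# Illustrative policy: which actions each role may execute. Names are invented.
POLICY = {
    "ai-agent": {"create_account", "set_quota", "acknowledge_alert"},
    "operator": {"create_account", "set_quota", "acknowledge_alert",
                 "delete_account"},
}

AUDIT_LOG: list = []

def execute(actor: str, role: str, action: str) -> bool:
    """Run an action only if the actor's role permits it; always audit."""
    allowed = action in POLICY.get(role, set())
    AUDIT_LOG.append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "actor": actor, "role": role, "action": action, "allowed": allowed,
    })
    return allowed  # the action body would run here only when allowed

print(execute("ops-copilot", "ai-agent", "set_quota"))       # True
print(execute("ops-copilot", "ai-agent", "delete_account"))  # False: out of bounds
print(len(AUDIT_LOG))                                        # both calls audited
```

The design choice worth noticing is that the denial path is recorded just like the success path: the audit trail is what makes the autonomy explainable after the fact.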

For sovereign, regulated, or mission-critical infrastructure, that separation between insight, action, and governance is not a nuance. It is the difference between a platform they can trust and one they cannot deploy.

Keeping GPUs productive requires a different data path

There is growing recognition that AI performance is not only a GPU problem. 

When GPUs are idle because data is not arriving fast enough, the bottleneck is in the storage and network path, not in the compute tier. As AI workloads evolve from batch training to real-time inference and agentic workflows, that data path becomes increasingly critical.

Scality ADI addresses this issue directly. The GPU-Direct tier delivers S3 over RDMA with sub-5-microsecond latency, designed for workloads like training, distributed inference, and KV cache where every microsecond of data access delay translates into underutilized GPU cycles. At the same time, the platform’s multi-terabyte-per-second aggregate throughput supports the large-scale data movement that training and data preparation pipelines demand.

What makes this architecturally different from a purpose-built parallel file system or an all-flash appliance is that Scality ADI does not force you to choose between AI performance and everything else.

The same platform that serves your hottest GPU workloads also manages your warm data lakes, your governed archives, and your long-term retention, all under one set of policies, one operational model, and one protection framework.

Protection has to be provable

Cyber resilience has moved from a backup-team concern to a board-level question. Insurers want to know whether protection is provable. Regulators want to see audit trails. And in the AI era, the same infrastructure that feeds training pipelines and inference workloads also holds the data that must remain immutable, recoverable, and auditable under the most hostile conditions.

Scality ADI incorporates CORE5 cyber resilience principles at every tier of the platform as an intrinsic architectural property: object-level immutability, MFA authentication, access control, encryption in transit and at rest, distributed erasure coding, metadata protection, multi-site replication, and a hardened operating system underneath. The protection model applies whether data sits on flash, HDD, or tape, and whether it is actively serving an AI workload or sitting in long-term governed retention.

Sovereignty fits here, too. For organizations that must maintain control over where data resides, who can access it, and how the infrastructure itself operates, Scality ADI’s software-defined, on-premises deployment model and open-code “inspectability” provide the transparency that sovereign and regulated environments require. Human-in-the-loop autonomy reinforces this: the customer controls the policies, approves the actions, and retains the audit trail.

Power is now an infrastructure design constraint

This is a newer part of the enterprise data infrastructure conversation, and it is becoming urgent. Power availability is a hard limit on how much AI capacity an organization can deploy. Cooling budgets are finite. And the energy cost of storing and moving data is no longer something infrastructure teams can treat as an externality.

Scality ADI includes real-time power telemetry at the system, node, and workload levels. That means infrastructure teams can see the actual power consumption associated with their data placement decisions and can use that visibility to make informed tradeoffs between performance, capacity, and energy efficiency.

Cross-temperature media placement makes this practical. Data that does not need to be on flash does not need to consume flash-level power. Moving cold data to tape or cloud archive is not just a cost optimization; it is an energy optimization. As data centers face tighter power envelopes, the ability to connect workload requirements to actual power constraints becomes a genuine operational advantage, not a sustainability branding exercise.
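A back-of-envelope calculation shows why this matters at scale. The watts-per-terabyte figures below are rough illustrative assumptions, not Scality measurements, but the order-of-magnitude gap between tiers is the point:

```python
# Illustrative steady-state power draw per tier, in watts per TB stored.
# These numbers are assumptions for the sketch, not vendor measurements.
WATTS_PER_TB = {"nvme_flash": 1.2, "hdd": 0.6, "tape_library": 0.05}

def power_kw(placement: dict) -> float:
    """Total steady-state power (kW) for capacity (TB) placed per tier."""
    return sum(WATTS_PER_TB[tier] * tb for tier, tb in placement.items()) / 1000

# A 10 PB estate: everything on flash vs. lifecycle-tiered placement.
all_flash = {"nvme_flash": 10_000}
tiered    = {"nvme_flash": 1_000, "hdd": 6_000, "tape_library": 3_000}

print(power_kw(all_flash))  # 12.0 kW
print(power_kw(tiered))     # ~4.95 kW, under half the all-flash draw
```

With made-up but plausible per-tier figures, tiering the same 10 PB cuts steady-state storage power by more than half, which is exactly the tradeoff that per-workload power telemetry lets teams quantify rather than guess.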

Outcomes, not just capacity

Enterprise infrastructure teams are not measured on how many petabytes they deploy. They are measured on whether applications perform, data is protected, systems stay available, and operations remain efficient within budget.

Scality ADI introduces outcome-based service commitments aligned to those operational realities. Instead of selling capacity and leaving the customer to figure out whether it meets their actual requirements, ADI ties service levels to the metrics that matter in production: availability, throughput, protection posture, power consumption, and operational efficiency.

For IT leaders, this shifts the vendor relationship from a hardware transaction to an infrastructure partnership. The platform’s accountability is aligned to the outcomes the customer’s organization actually needs to deliver.

Built for the next decade of enterprise data

The forces reshaping enterprise data infrastructure are not temporary. AI workloads will continue to diversify and grow. Cyber resilience expectations will continue to tighten. Sovereignty requirements will continue to expand. Power will remain a hard constraint. And operational teams will continue to be asked to do more with the same headcount.

Scality ADI is our answer to that reality — not just another faster storage product, but a new operating model for aligning performance, protection, economics, and control across the full enterprise data lifecycle, at multi-petabyte to exabyte scale.

If that is the challenge your organization is facing, we built this for you.

Explore Scality ADI at www.scality.com/adi

Additional Scality ADI resources:

Scality ADI solution overview

1 min Scality ADI explainer video

Scality ADI press release