Let me let you in on a little secret: data really can be in two places at once. Or three. Or even more. And you can do this without specialized WAN-acceleration technologies or overlay/virtualization layers on top of your storage. How is this possible? First, you need software that can treat distributed hardware and bits as one logical entity. Second, you need to present those bits to customers in protocols they (or their computers) understand, like NFS or SMB. Third, you need to protect all the bits and tolerate problems like site failures. Finally, you need to tolerate the moderate latencies (10-50 ms) of metro networks. Interested? Sound like the holy grail?

Well, we're not the first to talk about such a solution. The problem is that previous attempts have done too little or, even worse, tried to solve too much, like supporting remote non-linear editing directly from a centralized or virtualized storage environment. Such attempts have been overloaded with expectations and requirements, and have been buried under their own weight.

We call our approach a stretched RING: a Scality RING deployed across two or more sites that behaves as one logical entity. Multiple clients at multiple sites can access the same data, and multiple clients can read the same files simultaneously, in parallel. This enables key workflows in industries like Media and Entertainment (we broadly call this use case Active Archive). For example, an artist in one location can work on a video and then save it to the RING via SMB. Moments later, an editor in a second location can access the video via NFS, copy it to their local workstation, and continue the process; a rough sketch of this hand-off appears in the code below. Later in the workflow, the same repository can even serve edge servers via HTTP. Behind the scenes, we store the data, slice and protect it, and distribute it across the system.

This "global nearline" approach eliminates the classic (and risky) routine of carrying around and then shipping tapes and portable drives. It also reduces the need for point-to-point transfer technologies that add cost to storage systems and still require manual oversight. For industries that are constantly creating, editing, and moving large files, this saves weeks of effort and helps meet the ever-tightening production windows required for everything from M&E to Life Sciences.

Mindful of those previous attempts, we focus on the things we do very well rather than trying to solve the entire production storage challenge. As mentioned, we support many readers of the same directory and file. We can stripe data across many spindles, scale out to many connectors, and support hundreds of thousands of connections, a boon to content distributors and useful in the post-production workflow. We are smart enough to accept a byte-range change to an object and upload just that change, rather than downloading and re-uploading the entire object, which saves transfer time on large files (also sketched below). On the downside, we currently serialize multiple writers strictly across multiple connectors. But this behavior doesn't affect the "follow the sun" scenario, and it will be relaxed in some exciting upcoming releases.

The bottom line is that we enable a critical shared-storage workflow and cap the size of the local SAN, while recognizing the continued need for the low-latency experience that local storage provides. One solution can't do it all, but it also doesn't have to!
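To make the shared-namespace idea concrete, here is a minimal sketch in Python of the "follow the sun" hand-off described above. The mount points, paths, and helper names are illustrative assumptions, not actual Scality interfaces: it simply assumes site A has an SMB connector mounted locally and site B an NFS connector, both backed by the same stretched RING.

```python
# Illustrative sketch: one file, two sites, two protocols.
# Paths are hypothetical; they assume an SMB connector mounted at site A
# and an NFS connector mounted at site B, both backed by the same stretched RING.
import shutil

SMB_MOUNT_SITE_A = "/mnt/ring-smb"   # e.g. mounted via CIFS at the artist's site
NFS_MOUNT_SITE_B = "/mnt/ring-nfs"   # e.g. mounted via NFS at the editor's site


def artist_saves_cut(local_render: str) -> str:
    """Site A: the artist drops a finished render onto the RING over SMB."""
    dest = f"{SMB_MOUNT_SITE_A}/projects/promo/cut_v1.mov"
    shutil.copyfile(local_render, dest)
    return "projects/promo/cut_v1.mov"   # the path both sites share


def editor_pulls_cut(ring_path: str, workstation_copy: str) -> None:
    """Site B: moments later, the editor reads the same object over NFS."""
    shutil.copyfile(f"{NFS_MOUNT_SITE_B}/{ring_path}", workstation_copy)


if __name__ == "__main__":
    shared = artist_saves_cut("/tmp/render_output.mov")
    editor_pulls_cut(shared, "/tmp/editing_workspace/cut_v1.mov")
```

The point of the sketch is that neither side needs WAN acceleration or a manual transfer step: each site just reads and writes its own local mount of the same logical namespace.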
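The byte-range update mentioned above can be illustrated the same way. This is an assumption-laden sketch rather than Scality's actual API: it patches a small range of a file through a file-protocol mount instead of rewriting the whole object, which is the behavior that saves transfer time on large assets.

```python
# Illustrative sketch of a byte-range update through a file-protocol connector.
# The path is hypothetical; the point is that only the changed range is written,
# not the entire multi-gigabyte object.
def patch_byte_range(path: str, offset: int, new_bytes: bytes) -> None:
    """Overwrite len(new_bytes) bytes at `offset` without rewriting the file."""
    with open(path, "r+b") as f:
        f.seek(offset)
        f.write(new_bytes)


# Example: rewrite a 64 KiB header region of a large media file on the NFS mount.
# patch_byte_range("/mnt/ring-nfs/projects/promo/cut_v1.mov", 0, b"\x00" * 65536)
```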
You too can have the same data in two, three, or more places at once. You can speed up workflows you've suffered through for years, improve storage reliability, and improve the service you provide to your customers. And the solution exists and works today, as Scality customers like RTL II can attest. It's not a panacea, but it is a bit like magic. 😉

Photo by pawel szvmanski on Unsplash