Getting to grips with SolidFire

We’ve had Nike MAGs, Pepsi Max and hoverboards; now we look to the data centre of the future.

I have been doing more and more with SolidFire over the last few months, and I’ve had somewhat of a revelation about it. Around this time last year, I thought there was too much overlap with the FAS wing of the portfolio for NetApp to be pursuing an acquisition. To the uninformed, this may look true on paper, but it is completely different in practice. The more I learn about SolidFire, the more I am impressed by the decisions NetApp has made and the direction they are heading.

Hopefully you are aware of all the great benefits of running a SolidFire cluster within your environment, but for those of you who aren’t, I’ll sum it up in one word: predictable. This predictability extends to every aspect of the architecture, including capacity, performance, overall health and healing, and scalability.

An initial 4 node SolidFire deployment

Let’s have a look at performance first. Starting with four nodes, you have 200K IOPS available. By adding more nodes to the cluster, you can grow predictably at 50K IOPS per node*. And that’s not even the best part. The real showstopper is SolidFire’s ability to provide precisely the IOPS each workload requires by assigning a policy to every volume you create. If you undertake this task via the GUI, it’s a set of three boxes in the bottom half of the volume creation wizard asking for the minimum, maximum, and burst IOPS for that volume. These three little text boxes are unobtrusive and easy to overlook, but they have a huge impact on what happens within your environment. By setting the minimum field, you are effectively guaranteeing the quality of service that volume receives. Think about it: guaranteed QOS, or gQOS if you like. That little g added to an acronym we have used for years is a small appendage with massive importance.

Volume Creation wizard
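If you’d rather drive this from a script than the GUI, the same three values can be supplied at volume creation time. Below is a minimal sketch against the Element JSON-RPC API; the management VIP, API version, credentials, account ID, volume size and IOPS figures are all placeholder assumptions, so treat it as an illustration rather than a copy-paste recipe.

```python
# Minimal sketch: creating a volume with per-volume QOS via the SolidFire
# Element JSON-RPC API. The MVIP address, API version, credentials, account ID
# and IOPS values below are placeholders for illustration only.
import requests

MVIP = "https://203.0.113.10/json-rpc/10.0"   # cluster management VIP (placeholder)
AUTH = ("admin", "password")                  # cluster admin credentials (placeholder)

payload = {
    "method": "CreateVolume",
    "params": {
        "name": "sql-data-01",                # placeholder volume name
        "accountID": 1,                       # tenant account that will own the volume
        "totalSize": 1 * 1024 ** 4,           # 1 TiB, expressed in bytes
        "enable512e": True,
        # The three little text boxes from the GUI, expressed as an API object:
        "qos": {
            "minIOPS": 1000,                  # guaranteed floor -- the "g" in gQOS
            "maxIOPS": 5000,                  # sustained ceiling
            "burstIOPS": 10000                # short-term burst allowance
        }
    },
    "id": 1
}

# verify=False only because lab clusters commonly use self-signed certificates.
resp = requests.post(MVIP, json=payload, auth=AUTH, verify=False)
resp.raise_for_status()
print(resp.json()["result"]["volumeID"])
```

The point to notice is that the performance policy travels with the volume from the moment it is created, not bolted on after a problem appears.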

Most other vendors in the IT industry will tell you that QOS is merely a Band-Aid, a reactive measure applied until you can fix whatever caused a workload to be starved or bullied. That approach requires manual intervention, not to mention the repercussions of letting things escalate to the point where they have already had a negative impact on the business.

We need to move away from this reactive methodology. Let’s start by lifting the term “quality of service” out of its drab connotations; give it a coiffured beard, skinny jeans, and a double macchiato. Let’s add a “g” to this ageing acronym and turn that hipster loose on the world. gQOS is the millennial in the workplace, delivering a twenty-first-century impact on tasks and procedures that have been stuck in a rut for years. When you hear someone use QOS, ask, “Don’t you mean gQOS?” Then walk away in disgust when they look at you blankly.

With SolidFire you can allocate performance independently of capacity, in real time, without impacting other workloads. What does this mean, you may ask? No more noisy neighbours influencing the rest of the system. gQOS addresses the problem of shared resources and lets you offer fool-proof SLAs back to the business, something sought both by enterprise organisations looking to undergo transformational change and by service providers with hundreds of customers on a single shared platform.

gQOS in action
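To make the “performance independent of capacity” claim concrete, here is a similarly hedged sketch that reshapes an existing volume’s QOS on the fly with ModifyVolume, leaving its size untouched; again, the endpoint, credentials, volume ID and IOPS values are placeholders of my own choosing.

```python
# Minimal sketch: dialling a volume's performance up or down on the fly with
# ModifyVolume, without changing its capacity. Endpoint, credentials and the
# volume ID are placeholders.
import requests

MVIP = "https://203.0.113.10/json-rpc/10.0"   # cluster management VIP (placeholder)
AUTH = ("admin", "password")                  # placeholder credentials

payload = {
    "method": "ModifyVolume",
    "params": {
        "volumeID": 42,                       # the volume to reshape (placeholder)
        "qos": {
            "minIOPS": 3000,                  # raise the guaranteed floor, e.g. for month-end reporting
            "maxIOPS": 8000,
            "burstIOPS": 12000
        }
    },
    "id": 1
}

resp = requests.post(MVIP, json=payload, auth=AUTH, verify=False)
resp.raise_for_status()
```

Because only the QOS object changes, the adjustment takes effect without migrating data or disturbing the other volumes sharing the cluster, which is exactly the noisy-neighbour protection described above.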

So let’s start positively promoting gQOS, because if it isn’t guaranteed, can we really call it quality? If I were in the tagline-writing business, this area of the NetApp portfolio would read something like “SolidFire: Predictability Guaranteed.”

*The SF19210 adds 100K IOPS per node.

Grays Sports Almanac image courtesy of Firebox.com