Reflecting on VMworld EMEA 2017

Back in September I found myself on a Sunday morning flight from London to Barcelona to explore one of the largest technical conferences held this side of the pond: VMworld Europe. The conference kicked off on Monday with a partner day and a general session where CEO Pat Gelsinger (@PGelsinger) said two things that really stood out for me. Number one, he thanked the audience for their continued support of VMware products and asked that we “go all in” with the ever-increasing portfolio. Number two, he said, “Today is the slowest day of technical innovation of the rest of your life”, and boy, he is not wrong.

Now I have been working with VMware for well over a decade, playing with ESX and GSX, and I picked up my first VCP certification on version 2.5. But with my focus on the storage industry and the products in that space, my attention to VMware had waned; and whilst I had heard mention of the new products and features VMware had developed, until I got to this conference I didn’t realise how vast and varied the portfolio had grown. Hearing Pat and co. bandy around the slogan, if you will, of “any application on any device on any cloud”, you get to see how much of a reality this is.

This message was really driven home during day two’s keynote, when Purnima Padmanabhan (@PPadmanabhan, VP Product Management, Cloud Management Business Unit) and Chris Wolf (@cswolf, VP and CTO, Global Field and Industry) used the case of a fictional pizza company, Elastic Sky Pizza, to show how a business that hadn’t adapted to changes in the marketplace was now circling the drain. At the point where we enter the story, a new CTO has just been appointed, and it’s their priority to turn the failing new website, app and ordering system around and make sure it is delivered on time and on budget.

This section of the keynote carries on for nearly an hour, and while it does feel a tad long in places, it is really interesting to see how the many different businesses within VMware have developed products that interact with each other to deliver a common goal.

AppDefense is one product that stood out. This is a serious piece of kit with the intelligence to understand how applications, and the data flowing through them, are intended to behave, and it reports anything that deviates outside those allowed parameters. This is a huge leap in proactive application security, allowing developers and security teams to work hand in hand to deploy robust applications. I feel that in time this will become part of a standard VMware environment, as I believe what AppDefense does in letting you understand exactly what is going on within your environment is the missing feedback loop you need to truly deliver an SDDC.
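To make that idea concrete, here is a conceptual sketch of intended-state monitoring. To be clear, this is not AppDefense’s actual API or implementation; the tiers, ports and manifest format are invented purely to illustrate the principle of alerting on deviation from a declared allowed state:

```python
# Conceptual illustration only -- not AppDefense's API or implementation.
# The idea: declare an application's intended behaviour up front, then
# flag any observed activity that falls outside those parameters.
ALLOWED_FLOWS = {
    ("web-tier", "app-tier", 8443),   # (source, destination, port)
    ("app-tier", "db-tier", 5432),
}

def check_flow(src: str, dst: str, port: int) -> None:
    """Report any flow that deviates from the declared intended state."""
    if (src, dst, port) not in ALLOWED_FLOWS:
        print(f"ALERT: unexpected flow {src} -> {dst}:{port}")

check_flow("web-tier", "app-tier", 8443)   # matches intent: silent
check_flow("app-tier", "evil-host", 4444)  # deviation: raises an alert
```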

We also got a look at VMware Cloud on AWS, using Elastic DRS, and also at HCX. These solutions look cool and I can’t wait to try them out when I have some free time. Another great highlight was Pivotal Container Service (PKS), which lets you run an enterprise-grade Kubernetes deployment on site, allowing controlled project deployment for your DevOps teams. Not only is it easy to deploy, it works hand in hand with NSX to build in security from day one.

With all of these technologies you get a sense of what VMware are trying to achieve with their portfolio, the ability to bridge the hybrid cloud, and you can see the direction the company is heading in over the next twelve months. If you have the chance, I recommend watching the recording on YouTube to truly appreciate the interaction and productivity you can achieve with the VMware ecosystem. But before you do, buckle your seat belt, as things move pretty fast.

If you would like to know a bit more about this, and some more of what happened in Barcelona, then please have a listen to the Reflecting on VMworld 2017 episode of Arrow Bandwidth, featuring both myself and Vince Payne.


A Tale of Two Cities

This year, I was fortunate enough to attend both NetApp Insight events, thanks to @SamMoulton and the @NetAppATeam; and whilst there are normally differences between the two conferences, it’s fair to say that this year they were polar opposites. The events team try extremely hard to make sure there is minimal difference between the two, but this year there were things outside their control. Yes, there were pretty much the same topics and breakout sessions with the same speakers, and there was an even representation of sponsors and technology partners at both events, yet things were different.

Without going into too much detail: on Sunday the 1st of October, the worst case of domestic terror within the US happened to coincide with the arrival date and location of NetApp Insight Las Vegas, and with the hotel where the majority of attendees were staying. This changed the overall mood and perception of the event. Las Vegas turned into a more sombre affair with a more perfunctory feel, which was only right given the events that occurred. Yet NetApp dug deep, showing skill, resolve and sincerity, and still delivered an excellent event to be proud of. I would also like to give a huge thank you to the LVPD, first responders, EMTs and everyone else who helped those caught up in this tragedy. During this tragic turn of events, true human nature and kindness shone through.


The NetApp Insight event that took place two weeks ago in Berlin was a completely different beast. It was our fourth year at the Messe Berlin & City Cube, and like any recurring venue, it has started to feel familiar, like an old friend we hadn’t spent much time with recently. Some in the partner community objected to the event being held in Berlin for the fourth year in a row. From my perspective, the high quality of the content delivered in breakout sessions is the main draw for delegates, and whilst it’s nice to visit a new city, you have to feel for those who work in the Americas (just a little bit), where pretty much every conference is held in Las Vegas. (Veeam ON, held in New Orleans this year and Chicago next year, being the major exception.) After four years, I feel I’m only scratching the surface of what Berlin has to offer. I’ll probably miss being there next year, but we are following the crowds to Barcelona, and we will be there in December 2018.

Having attended many NetApp Insight events over the years, it’s fair to say that on day one of Insight Berlin there was a different, more positive feel to the conference, one that has eluded it over the last five or so years. Those showing up were excited about what the four days would hold. The employees speaking or manning booths were eager to meet people and discuss the advancements made in the past 12 months: no longer driving home the message about speeds and feeds, but talking about the services and solutions that NetApp products bring to the table. And with over 250 services and solutions, that’s a massive number of ways to build a data fabric that fits you. It was great to see partners from across EMEA wanting to learn more about the Next Generation Data Centre (NGDC) portfolio and understand how best to adapt it to their individual customer requirements. I also sat in on a few sessions to top up on those I missed due to cancellations in Las Vegas, and it’s fair to say that everyone’s heads (mine included) were more in the game.


Berlin started with a drum roll, quite literally, with a curtain-raising act from the amazing Blue Devils Drum Line, all the way from California (check them out on YouTube). From @Henri_P_Richard we heard a strong, confident message about how to change the world with data. We learned how digital transformation is gaining momentum and empowering businesses as they move from survivors to thrivers in our data-centric era. Henri backed this up nicely by pointing out that NetApp is the fastest-growing of the top five total enterprise storage system vendors. It is also the fastest-growing All Flash Array vendor, the fastest-growing SAN vendor, and the world’s number 1 branded storage OS. But what really underlined these points was the earnings report that came out on Wednesday evening, which moved NetApp’s share price from $45 to $56 (as of writing), on the back of net revenues increasing 6% year on year and a raised outlook for the rest of the fiscal year.


There were several standout points made at Berlin. At the top of probably everyone’s list is the bold move NetApp are making into the HCI space and the excellent, untraditional tack they have taken to position this technology. Listening to NetApp’s detractors, I feel there are a lot of established first-generation HCI vendors that fear what NetApp will bring to the table. (You only have to look at the AFA marketplace and how NetApp have delivered 58% global year-on-year growth.) As a distributor, we applied for a demo unit to help promote this product, but due to the huge demand from customers, that box has had to slip down the delivery priority list, which goes to show that despite the FUD being bandied about, customers really value the message and benefits that NetApp HCI is bringing.


One of my highlights came during the day two general session, when Octavian Tanase and Jeff Baxter outlined the direction NetApp are heading in over the upcoming months. One of the many interesting technologies they demonstrated during this section of the keynote was Plexistor, which NetApp acquired for $32M in May 2017. With Plexistor, NetApp were able not only to increase throughput tenfold over AFF, from 300K IOPS to 3M IOPS, but also to demonstrate latency reduced more than seventy-fold, from 220µs to 3µs! That’s an order-of-magnitude (or greater) improvement on two separate axes.
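The arithmetic behind those stage numbers is easy to sanity-check; here is a quick back-of-the-envelope calculation using only the figures quoted above:

```python
# Sanity-check of the keynote figures quoted above.
baseline_iops, plexistor_iops = 300_000, 3_000_000
baseline_latency_us, plexistor_latency_us = 220, 3

print(plexistor_iops / baseline_iops)              # 10.0 -> tenfold throughput
print(baseline_latency_us / plexistor_latency_us)  # ~73.3 -> seventy-fold-plus latency cut
```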

This is, at the moment, a very niche technology that will benefit only a small part of the storage-buying population to begin with; but it does illustrate that NetApp are not only delivering some of the most advanced endpoints on the globe today, but also pushing well past the boundaries to stay at the forefront of data management. Working with storage class memory and NVMe to deliver what will become the norm for the next generation of storage appliances, NetApp are demonstrating a clear understanding of what the technology is capable of whilst blazing a trail that others desire to follow.

For those of you who missed the event (shame on you), NetApp UK are holding a one-day event for partners on the 12th of December at their main UK office (Rivermead); and for those with a NetApp SSO, you can now access all the great content recorded during both Insight events and download copies of the presentations to review at your leisure. When you have done all that, ask yourself, “How are you going to change the world with data?”

Time it’s on your side

We all seem short on time these days. We have conference calls and video chats to save us travel time when we can. We use TLAs (three-letter acronyms) whenever possible. We are forever on the hunt for the next “life hack” or “time saver”.

NetApp Insight is getting closer, and if you’re planning on attending, hopefully you’ve already started mapping out your schedule; but if you haven’t, fear not. As an IT professional, your time is extremely valuable. Time is precious to both you and your employer, and you want to get the most out of each day; but with Insight 2017 stacked with so many great sessions, how can you choose?

Whilst everyone’s interests are different, I thought I’d give my picks for the sessions I’m looking forward to at Insight Las Vegas. Whether you’re a first-timer or an old-guard Insight veteran, I hope this will help you be smart with your time, or, as the Stones put it, make sure “time is on your side.”

13145-1 – Data Privacy: Addressing the New Challenges Facing Businesses in GDPR, Data Privacy and Sovereignty – Sheila FitzPatrick. GDPR is a critical challenge that affects companies all over the world, not just in Europe. I have heard Sheila FitzPatrick speak on this topic several times, and every time I leave with some really useful information about how to help customers move towards legal compliance before the imminent deadline (May 25, 2018). This session will help you elevate the conversation around GDPR, with details about how to help your business avoid those hefty fines.

16365-2 – First-Generation HCI versus NetApp HCI: Tradeoffs, Gaps and Pitfalls – Gabriel Chapman. HCI is definitely going to be the hot topic at this year’s Insight, with SeekingAlpha highlighting NetApp as one of the likely winners in this space. Here we have an opportunity to hear from Gabe, who has spoken on the topic with great passion at Tech Field Day events in the past and has been working hard with the SolidFire team to craft this solution. This session will highlight the advantages of this solution over traditional HCI offerings and their limitations, as well as why it will appeal to those who see a benefit in next-generation infrastructure.

16594-2 – Accelerate Unstructured Data with FlexGroups: The Next Evolution of Scale-Out NAS – Justin Parisi. For those of you who haven’t heard the Tech ONTAP podcast (what a shame!), this session is presented by one of its hosts and will give you an idea of the great content it puts out. During the session, Justin Parisi looks at why FlexGroups are winning in the unstructured data space and how they improve upon the FlexVol. Just don’t ask him about SAN…

12708-2 – How NVMe and Storage-Class Memory Are Reshaping the Storage Industry – Jeff Baxter and Quinn Summers. These are two very knowledgeable presenters who deliver information-rich content, and I’m happy to see them giving a session together. This session looks at NVMe, where NetApp currently leads the field in capacity delivered to customers, and storage-class memory, and at how these technologies will affect data centre design and application deployments in the near future. For those wanting to stay at the forefront of technology advancements who were unable to get to the Flash Memory Summit, this is the session for you.

16700-2 – FabricPool in the Real World: Configurations and Best Practices – John Lantz. FabricPool was one of the key features of the 9.2 payload, and its announcement at last year’s Insight general session was a mic-drop moment. Now that the required ONTAP version is available, this is an excellent way to hear how best to put it into practice; and who better than John to delve into the core of this technology and its design considerations, and to walk you through deploying one of the more fascinating parts of the data fabric.

18342-1 – BOF: Ask the A-Team – Next Generation Data Centre – Mark Carlton. I would be remiss if I didn’t call this out as a session of note (and yes, centre IS spelt with an R-E). This is a “birds of a feather” session, which means it’s more of an open conversation or Q&A than a lecture. Hosted by Mark Carlton, with several members of the A-Team on hand to provide honest opinions, feedback and tales from the field with the Next Generation Data Centre, this session should leave you with a greater understanding of how to make the move to NGDC.

18442-2 – Simplify Sizing, Deployment and Management of End-User Computing with NetApp HCI – Chris Gebhardt. Another session covering this year’s H O T topic. In this breakout, Chris will go into what you need to know for a successful deployment of NetApp’s first-generation enterprise HCI offering. This is likely to be a popular session, so make sure you book early.

17349-2 – Converged Systems Advisor: Simplify Operations with Cloud-Based Lifecycle Management for FlexPod – Wyatt Bennett and Keith Barto. Emerging from a recent NetApp acquisition is this superb piece of software, which lets you graphically explore the configuration of a FlexPod against a CVD and make sure you are correctly configured. If you have anything to do with FlexPod, this is probably one of the more interesting developments in that area of the portfolio this year, and in this session you can hear from two of the people who have been building the product for several years and gain a better understanding of how it can benefit your deployments.

18509-2 – VMware Plugins Unify NetApp Plugins into a Single Appliance – Steven Cortez. With the recent update of the plugin for vSphere, here is your one-stop shop for a good look at what has changed, with Steven Cortez. Backup can seem like a beast of burden, but it needn’t be when you look at this offering and see what the new plugin provides, whether that be improvements over the old VSC dashboard, better VASA integration and SRA functionality, or even VVol support. In this session, Steven will cover the more popular workflows within the unified plugin.

17930-3 – Virtual Volumes Deep Dive with NetApp SolidFire – Andy Banta. Andy will be telling you why you want to flip the switch and move from traditional datastores to VVols, and all the benefits and loveliness that come with implementing a next-generation VM deployment. Some conference attendees may feel they know ONTAP like the back of their hand, but maybe this is the year to give SolidFire some serious focus, and this is one session that will show you why.

26420-2 – Hybrid Cloud Case Studies – Scott Gelb. Come to this session to hear Scott Gelb’s top reasons why you should embrace and implement a hybrid cloud strategy, to the benefit of your company and customers. Based on customer experience, this breakout will cover the considerations needed for a successful deployment and how to migrate your data to the cloud.

It’s also worth noting that whilst the sessions are the real meat on the bone of the conference (and you do get access to the content after the event), there’s lots more to do! The general sessions are always enlightening, and I look forward to what George Kurian will have to say. Then there’s the ability to give honest feedback directly to the PMs. Get your certs up to date (these have all been updated since Insight Berlin 2016) or spend some time in the hands-on labs. The DevOps café was also a hit last year. The list goes on and on.

The best advice I can give for attending is to do your homework and plan what you want to get out of the conference. Plan for lunch. Plan for some downtime during the day. Plan for a “working from home” day after the conference to get caught up, as you will no doubt be shattered. Maybe even plan to have a go at tumbling dice whilst in a casino. Plan for new friends and new faces, and most of all, plan to have a good time, because before you know it, you’ll be singing “It’s all over now.”


Setting Sail for Uncharted Waters

Today might be a big day in NetApp’s history. Not only is the company celebrating its 25th year, a third consecutive quarter of revenue growth, 140% year-on-year growth in the All Flash Array (AFA) market segment and the no. 2 position by revenue among AFA vendors (IDC); nor is it just celebrating its SAN market share growing 3.6x faster than its nearest competitor, over 6.4PB of NVMe shipped, or its SIX IT Brand Pulse awards for its scale-out file storage, FlexGroup. It’s starting the week with a product announcement.

The 6 IT Brand Pulse awards

And whilst there may be cake and balloons at the offices on East Java Drive and Kit Creek Road, the company will be focused on moving forward. Today it takes a step outside the storage and data management field it has dominated for two and a half decades, and into an area of the IT industry that has generated a lot of interest over the last couple of years and is still relatively new and unmapped: the hyper-converged infrastructure (HCI) market.

Now, some of you may be saying, one, that NetApp are quite late to the HCI game, and asking, two, what can they possibly bring? Remember, though, that NetApp were late off the blocks with an All Flash Array too, and look at the opening paragraph again to see just how well that’s now going. As for what they can bring to the game, please read on.

Some of you may remember the version of EVO:RAIL that NetApp brought out a couple of years ago and feel they should stick to storage products; but the difference between that and today’s launch is that this time NetApp have solely led the development of the product, rather than having to follow a blueprint VMware put together for a wide and varied list of hardware vendors.

First-generation HCI solutions were designed with simplicity of deploying virtualisation technologies in mind, yet with this approach, and a race to market, they created limitations on performance, flexibility and consolidation. They claimed to remove application silos by mixing workloads, but these limitations meant they ultimately failed at scale. These first-generation hardware offerings provided both compute and storage within the same chassis, which meant that resources were tied together and both had to be scaled in parallel whenever either ran low or was exhausted.

NetApp approach the HCI arena, and the limitations of current offerings, with the Next Generation Data Centre at the core. The four key aspects that make up this HCI solution are: guaranteed performance; flexibility and scale; automated infrastructure; and the NetApp Data Fabric. It provides secure, efficient, future-proof freedom of choice.

One of the things people love about SolidFire is its ability to scale with ease, and growth is a key feature of this HCI platform. With the ability to grow compute and storage independently, whatever your applications need, you can start small and scale online and on demand, with a range of configuration options to satisfy any enterprise environment. This in turn allows you to avoid overprovisioning compute (and incurring unnecessary licensing costs) or storage (ending up with excessive amounts of flash media), as you would when scaling traditional first-generation HCI solutions.

Out of the box, this solution uses the NetApp Deployment Engine (NDE) to eliminate the majority of the manual steps needed to commission the infrastructure correctly. Combined with an intuitive vCenter plugin and a fully programmable interface to complement the scalable architecture, it is truly a software-defined HCI solution.
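As a taste of that programmability, here is a minimal sketch of driving the Element software that underpins NetApp HCI through its JSON-RPC API. The address, API version and credentials shown are illustrative placeholders rather than anything from the launch material, so treat this as a shape-of-the-thing example only:

```python
import requests

# Minimal sketch of calling the Element JSON-RPC API that underpins
# NetApp HCI. The address, version and credentials are placeholders.
ENDPOINT = "https://10.0.0.100/json-rpc/9.0"

def api_call(method, params=None):
    """POST a single JSON-RPC request and return its 'result' payload."""
    resp = requests.post(
        ENDPOINT,
        json={"method": method, "params": params or {}, "id": 1},
        auth=("admin", "password"),
        verify=False,  # lab convenience only; use real certificates in production
    )
    resp.raise_for_status()
    return resp.json()["result"]

# Example: read back basic cluster details after NDE has done its work.
info = api_call("GetClusterInfo")
print(info["clusterInfo"]["name"])
```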

The all-important front bezel

There will be a lot of interest in this enterprise-scale hyper-converged infrastructure solution over the coming days and weeks. I applaud NetApp for making the move into uncharted territory, and I look forward to reading more about it ahead of its launch later in the year, as this solution, combined with NetApp’s Data Fabric, will honestly allow you to harness the power of the hybrid cloud.

Certification – More than just a tick in the box

Roughly six weeks ago I received an invitation to participate in an item development workshop to update the NetApp Certified Implementation Engineer – Data Protection Specialist exam. The premise was to bring the exam up to date, as a lot has changed within NetApp ONTAP and its associated protection methodologies in the last two years. To get a good idea of how much, simply look at the ONTAP 9.1 release notes: the data protection section alone talks about NVE, RAID-TEC and SnapLock, to mention just a few. So it was an honour to be invited to help update the exam, and something I was looking forward to.

Sign greeting my arrival at RTP building one

Just over a week ago it was time to undertake this workshop, and on a lovely sunny Sunday morning I boarded a plane to NetApp’s Research Triangle Park (RTP) campus in North Carolina, USA, where the workshop was to be held. The following morning at 9am sharp we started the week-long activity of modernising the exam to the most recent GA release of ONTAP. The workshop was run by an independent company who specialise in writing certification exams. Their job was to lead the workshop, making sure we kept to the exam blueprint, kept the questions at the right level of difficulty, and asked each question as directly as possible. One of the first things we covered was the difference between assessment and certification; for those of you who may be unaware, the difference is probably twofold. For an assessment, all the information required to pass is contained within structured course material (e.g. PowerPoint or PDF course notes), whilst a certification draws on many different sources, ranging from course notes to technical reports to documentation and even industry knowledge. The other main difference is that a certification needs to be able to stand up to any legal challenges to its content. So with that, we got down to work.

With all the changes going on in the portfolio, and even within ONTAP itself, it was great to get together with nine other individuals who shared a desire to update this exam, and to see not only how they felt ONTAP had changed over the last two years but also the use cases and deployment models being adopted for the technology. Over the next few days we reviewed the current question pool and then set to work writing new questions. These were then assessed as a group to see if they were relevant and of the right difficulty, to name just two of the measures we judged each question on. It was also good to see that the questions proposed for the exam were honest and fair, with no intent to trick candidates.


It was both a long and rewarding week, and I’m sure everyone in attendance learned something new. It also shone a light on the amount of work and effort NetApp put into constructing a certification exam, as they understand the benefit to the candidate of the hard work of preparing for and taking the exam, as well as the badge of honour the certification represents. I have often felt that obtaining a certification shows you have a desire to know and understand the technology in question, and that you have taken the time to learn its best practices. Certifications can help differentiate you within your company or when you apply for a new role. Just to make sure I’m up to date with what is going on, I usually take an exam during NetApp Insight, mainly for my own personal benefit, but it also helps reinforce the value I offer the team when I’m back at my day job.

Before we knew it, Friday afternoon had rolled around and we had completed all the tasks required of the workshop, which means that a few weeks from now the updated exam, NS0-512, will go live. A big thank you goes out to the NetApp Certification Program for inviting me to the workshop, and also to the nine other individuals and the independent test company I had the pleasure of working with for the week. I left ready to talk SnapMirror and MetroCluster with anyone who would listen. So if you get the opportunity to help write or update an exam, I would highly recommend it; and before you start contacting me for help, the answer is “yes, it’s on the exam.”

NOTE: For more information, listen to the upcoming Tech ONTAP Podcast Episode 78, NetApp Certifications: NCIE, featuring the NetApp A-Team.

25 hours at CLEUR

Last week I was lucky enough to get the chance to attend Cisco Live in Berlin. I have been to this venue before, but this was my first Cisco event, and I have to say I was impressed. Hosted at the Messe Berlin, it didn’t feel overly crowded, yet with over 12,000 people involved, the conference barely used a third of the 26 halls available there. My reason for attending was a FlexPod round-table hosted jointly by people from NetApp and Cisco. I was in attendance as the voice of Arrow ECS Europe and, as the UK distributor involved in the most FlexPods, I thought it was important not only to give my feedback at this event but also to hear the messaging coming directly from the vendors and pass it back to our reseller partners in the UK and to Arrow ECS.


Sadly no AAA

I attended the conference in both a virtual and a physical capacity: virtually, by reviewing the content available from the keynotes, and physically on an Explorer pass, which basically got me into everything except the breakout sessions. Being immersed in the Cisco community was a refreshing experience and one I would recommend. Even without attending sessions, there is a huge amount of information to gather, not just from Cisco but also from some of their strategic partners, including, but not limited to, Veeam, F5 and Citrix.

At the round-table it was great to hear about the rate of growth from a FlexPod perspective: a partnership just over five years old, now an over-$7 billion business and the number 1 integrated infrastructure. It was also great to see that they are not resting on their laurels, with a new CVD released that week covering how to deploy FlexPod Datacenter with Docker Datacenter for container management. With more in the pipeline narrowing the gap between private and hybrid clouds, I would have to say that this is a partnership with plenty left in the tank.


Ready for the DevOps community

I swung by the NetApp stand afterwards and heard about another exciting FlexPod project, the All Flash 3D FlexPod; anyone who attended the UK partner academy last June might recall a presentation on an older version of this project. We often talk about FlexPod being more than the sum of its constituent parts, and this is one case where that statement truly shines. Used in anything from the medical profession to 4K content creation to geological applications, this is a true monster, and I doubt we have really scratched the surface of the areas where this solution could be applicable. I would suggest checking here for more information.

It was also great to see that a MetroCluster FlexPod was running the event. By swinging by the NOC you could see the statistics in real time, like the 20GB (yes, gigabytes) of internet traffic flowing around the campus, supported by 968 access points (75 were added on Tuesday night to improve the experience); yet with everything going on, the AFF8060 was never really taxed, as could be seen on the Grafana dashboard.


Monitoring one half of the NOC AFF FlexPod

What did hit me whilst wandering around the many halls was the vast plethora of Cisco products and how far this company has evolved. I knew some of the different areas they cover, but there’s so much more than the routing and switching the business was born out of. I talked to many interesting people covering various business units, from IoT to digital finance services to business transformation. If I’d had more time, there’s so much more I would have liked to have done: sat down for a few hours to run through a self-paced lab, attended one of the many interesting-looking sessions in the DevOps area, or even given 20 minutes of my time to their charitable cause, Rise Against Hunger. One thing that struck me was that this is a company whose employees understand its vision; it’s like an eight-man coxed crew perfectly in time with one another, lifting their stroke rate above 32, still creating perfect puddles and not breaking a sweat. The slogan for the event was Your Time Is Now, and I have to say that we are definitely in a Cisco era.

NetApp plus Veeam

NetApp and Veeam have just announced a joint business special offer for EMEA, leveraging key aspects of each other’s portfolios. I’ve been a keen supporter of Veeam technology ever since they made people sit up and take note by winning the VMware 2010 Best in Show award, and I’m pleased to see it’s still turning heads today.

Determined not to just follow the masses and their backup applications, Veeam approach the important task of data protection with a refreshing outlook, delivering the aptly named Veeam Availability Suite. It provides a recovery time and point objective (RTPO) of less than 15 minutes for all applications and data, which should satisfy any business that requires 24/7 operations.

Five key capabilities of the Veeam Availability Suite

With their 9.5 release, Veeam have added agents for Windows and Linux, an Availability Console and an Availability Orchestrator, as well as a vast number of enhancements to the Explorers; this is a product that just keeps getting better.

The E2812 from NetApp is a SAN controller with a long heritage. Acquired from LSI in March 2011, the Engenio product line running the SANtricity operating system has sold over 1 million units, and this latest member of the family, announced in September 2016, started shipping shortly after. Running a new version of SANtricity, v11.30, this 2U array provides connectivity via FC, iSCSI and SAS to the LUNs hosted on its 12 internal NL-SAS drives, and can grow to 180 drives or 1,800TB. With over six nines of availability and Data Assurance (T10 PI) support, the E2800 series is designed for small to medium-sized businesses seeking new ways to manage data growth across a range of mixed traditional and third-platform workloads.

NetApp E2800

The combination of Veeam and the NetApp E-Series gives you an easy-to-use availability solution with the perfect staging area for backups. With a simple, fast and scalable storage architecture and a modern disaster recovery solution for your vSphere and Hyper-V environments, this partnership will help you confidently meet today’s always-on enterprise service level objectives.

With more information on the joint promotion of this already popular pairing of products due out over the coming days (including T&Cs), this is a special offer that is going to attract a lot of attention between now and the summer months. Please speak to your channel or account manager for more information on how to purchase Veeam Availability Suite, or Veeam Backup & Replication Enterprise or Enterprise Plus editions, together with NetApp E-Series, and get up to 10% off Veeam.

(For more information on the benefits of using Veeam and NetApp E-series please see: https://www.veeam.com/blog/tips-and-tricks-on-leveraging-netapp-e-series-arrays-in-veeam-availability-suite-designs.html )

The A700s – Killing it in the Storage Market

On the 31st of January, NetApp released a new box, the A700s, and it’s a game changer in more ways than you might know. On the same day, they also released its SPC-1 benchmark results, to show just how pearly white its teeth are.

Hardware

Let’s start with the physical. This All Flash Array is a 4U chassis containing both controllers and storage. NetApp have been producing integrated disk-and-controller systems for many years now, but these have always been aimed at the SMB market. The A700s is the first designed with the enterprise market in mind, and boy, what a first: it houses 24 internal SSDs, is expandable up to 216 drives per HA pair, and can scale out to 12 nodes for SAN or 24 for NAS. That gives you some serious expandability options.

Each controller has four on-board 40GbE ports, with four PCIe slots for a vast array of additional functionality, including more 40GbE, maybe 32Gb FC, or even expanding to my personal favourite, a MetroCluster.

Operating System

As expected, this system runs the world’s number one branded storage operating system, ONTAP, which provides a unified SAN and NAS architecture to meet the most demanding workloads, covering anything from consolidated virtualisation and enterprise applications to design and engineering workloads, all delivered from a truly scale-out architecture. ONTAP can manage up to 24 nodes as a single system with 13PB of all-flash storage.

SPC-1

The SPC-1 benchmark, to quote their website, “consists of a single workload designed to demonstrate the performance of a storage subsystem while performing the typical functions of business critical applications. Those applications are characterized by predominately random I/O operations and require both queries as well as update operations. Examples of those types of applications include OLTP, database operations, and mail server implementations.”

This benchmark is a way to allow customers to compare storage vendors. NetApp previously undertook this benchmark back in April 2015 with the FAS8080AE, and by way of comparing how things have changed in just under two years, I have put some of the more relevant results into a table. (For more detail, see here and check out the executive summary.)


SPC-1 Highlights

As you can see, the FAS8080AE (this was before NetApp were required to adopt Gartner’s naming scheme for it to qualify as an All Flash Array) performed admirably, giving us just over 685,000 IOPS at 1.2 milliseconds. That 8-node cluster was placed fifth in the top ten, and by the end of 2016 it had slowly been pushed out to 8th, which is still very impressive. The most recent results, published in January 2017, show the A700s delivering 2.4 million IOPS at roughly 0.7ms in a 12-node cluster. This huge number shows not only the improvements in hardware over the last couple of years but also the advances NetApp have made with the ONTAP operating system. Even if you don’t need to scale out to 12 nodes, an HA pair can deliver over 400,000 IOPS at under 0.3ms; and consider that you can stack it with 15.3TB SSDs, giving an effective capacity of a petabyte in a 4U enclosure delivering 650,000 IOPS in under a millisecond!

So what do these numbers actually mean? NetApp have reduced the physical footprint from two full racks (84RU) to just over half a rack (26RU) whilst upping the node count by 50%, and in doing so they have greatly increased the throughput. They’ve cut the rack space requirement to less than a third, nearly halved the latency and more than tripled the IOPS, and this isn’t the box pushed to its max.
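For anyone who wants to check my maths, the ratios fall straight out of the figures quoted above:

```python
# Ratios between the April 2015 FAS8080AE result and the January 2017
# A700s result, using only the figures quoted in this post.
fas8080 = {"rack_units": 84, "iops": 685_000, "latency_ms": 1.2, "nodes": 8}
a700s   = {"rack_units": 26, "iops": 2_400_000, "latency_ms": 0.7, "nodes": 12}

print(fas8080["rack_units"] / a700s["rack_units"])  # ~3.2 -> under a third of the rack space
print(a700s["iops"] / fas8080["iops"])              # ~3.5 -> more than triple the IOPS
print(fas8080["latency_ms"] / a700s["latency_ms"])  # ~1.7 -> nearly half the latency
print(a700s["nodes"] / fas8080["nodes"])            # 1.5 -> 50% more nodes
```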

What’s in a name?

The S in the name has been said to represent slim, though some have said it could well stand for sexy (and what a beast it is), or sport; but unlike comparisons made to hot hatchbacks, I would say this model is more the Aston Martin DB11 Intrepid Sport in Cinnabar Orange. This V12 monster turns heads everywhere it goes; people notice her before they even get the chance to set eyes on her. Dropping into 3rd place in the SPC-1 is no mean feat, but to do it with kit that occupies roughly half a rack is phenomenal!

The A700s is not trying to sneak around the corner; no, this AFA has all the capabilities we’ve come to love from ONTAP, whether that be the data efficiencies of dedupe and compaction, SnapMirror, FlexClone, per-volume encryption, secure multi-tenancy, or the ability to form part of your Data Fabric solution; the list goes on. Remember, this is an OS that keeps going from strength to strength, as we can see from the addition of FlexGroups providing massively scalable next-generation data containers (for more information see @NFSDudeAbides’ post here), and this is hardware that marries those advances in technology beautifully.

Conclusion

All I can say is that if you are in the storage market, this will have made you sit up and take note; and if you’re one of the many flash start-ups, this has probably got you scared. No matter how you slice it, this box delivers in all respects and is a deadly addition to any environment, just like old MacHeath’s jack-knife.

Rise of the NGA

In a previous blog, I talked about predictable performance and how it can have a huge impact on any business. In this blog, I’ll go into detail on another aspect of predictability within the SolidFire array.

But before we start, I’d like to address how we refer to SolidFire as a product. I’m not happy using the term array: SolidFire is so much more than a storage array, in the same way that a Ferrari 250 GT California is so much more than a car. As it’s designed for the next generation data centre, I think we should be referring to it as a “next generation array”, or NGA.


So let’s start by taking a look at the predictability of this NGA in terms of how it deals with failures.

One of the many reasons SolidFire does so well in the service provider space is that it can deal with a wide range of possible failure scenarios. It’s so good at this, in fact, that we even refer to it as a “self-healing” system. This means SolidFire can cope with disk and node failures, hardware upgrades and replacements, and software upgrades, ALL without downtime. The loss of a disk, controller or shelf initiates a fully automatic self-healing process, which, by the way, does not affect gQOS at all (for an explanation of gQOS, see my previous post).

For those of you who may be new to the technology: SolidFire, as a Tier 0 storage technology, does not use RAID (redundant array of independent disks) protection. Instead it uses something referred to as the SolidFire Helix.

Usually deployed as a “double helix”, this core part of the operating system provides cluster-wide, RAID-less data protection whilst avoiding single points of failure. If a failure does occur, the whole cluster “self-heals” and restores redundancy. What does that translate to in terms of exposure? Try less than 10 minutes for a drive failure and less than an hour for a node failure! Now that’s next generation.
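To picture how RAID-less protection can work, here is a toy sketch of the double-helix idea: two copies of every block, never on the same node, with survivors re-replicating after a failure. To be clear, this illustrates the principle only; it is not SolidFire’s actual placement algorithm:

```python
import random

# Toy illustration of Helix-style, RAID-less protection -- not
# SolidFire's actual algorithm. Every block is stored twice, and the
# two copies always live on different nodes.
NODES = ["node-1", "node-2", "node-3", "node-4"]

placement = {block: random.sample(NODES, 2) for block in range(8)}

def heal(failed_node):
    """Re-replicate every block that lost a copy, using the surviving copy."""
    for block, copies in placement.items():
        if failed_node in copies:
            survivor = next(c for c in copies if c != failed_node)
            spare_nodes = [n for n in NODES if n not in (survivor, failed_node)]
            placement[block] = [survivor, random.choice(spare_nodes)]

heal("node-2")  # after healing, every block is back to two copies on live nodes
```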

Another distinguishing feature of SolidFire is its proactive philosophy when it comes to support. Available to every customer, Active Support is divided into three key services:

•    24/7 support with immediate access to a level 3 support engineer

•    Secure Assist: remote assistance over a secure connection

•    Active IQ: real-time telemetric data and trending analysis SaaS


Active IQ is the main attraction in the Active Support toolset. It allows you to visualise problems and challenge areas at varying levels of granularity, giving you the ability to better anticipate outcomes and take proactive measures. Allowing you to model “what if…” scenarios and accurately envisage how to maximise your investment, Active IQ receives telemetry data at 10-second intervals and lets you perform performance modelling and historic trending with ease. You can also enable real-time, customisable alerts for what YOU want to know about. Just think of the icons on the above graphic as blades on a Swiss army knife, and you get to personalise the multi-tool.

Not only can the NGA guard against data corruption in the case of hardware failures and protect you during planned downtime and upgrades, it can also balance its workload around the cluster, help you plan for the future, reduce the risk of exposure during an outage, and automatically regain redundancy to provide data availability without impacting performance.

So when you look at it, the SolidFire NGA is more predictable than Nostradamus watching a Roland Emmerich film whilst listening to a metronome and waiting for the sun to rise. And it already knows the answer.

Getting to grips with SolidFire

We’ve had Nike MAGs, Pepsi Max and hoverboards; now we look to the data centre of the future.

I have been doing more and more with SolidFire over the last few months, and I’ve had somewhat of a revelation about it. Around this time last year, I thought there was too much overlap with the FAS wing of the portfolio for NetApp to be pursuing an acquisition. To the uninformed, this may look true on paper, but it is completely different in practice. The more I learn about SolidFire, the more I am impressed by the decisions NetApp has made and the direction they are heading.

Hopefully you are aware of all the great benefits of using a SolidFire cluster within your environment, but for those of you who aren’t, I’ll sum it up in one word: predictable. This predictability extends to all aspects of the architecture, including capacity, performance, overall health and healing, and scalability.

An initial 4-node SolidFire deployment

Let’s have a look at performance first. Starting with four nodes, you have 200K IOPS available. By adding more nodes to the cluster, you can grow predictably at 50K IOPS per node*. And that’s not even the best part. The real showstopper is SolidFire’s ability to provide precisely the IOPS your workload requires, by assigning a policy to each volume you create. If you undertake this task via the GUI, it’s a set of three boxes in the bottom half of the creation wizard asking what your minimum, maximum and burst requirements for the volume are. These three little text boxes are unobtrusive and easy to overlook, but they have a huge impact on what happens within your environment. By setting the minimum field, you are effectively guaranteeing the quality of service that volume gets. Think about it: “guaranteed QOS” (gQOS, if you like). That little g added to an acronym we have used for years is a small appendage of massive importance.

Volume Creation wizard
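That predictable growth is simple enough to put in a few lines. Here is a back-of-the-envelope sketch using only the figures above (200K IOPS from the first four nodes, 50K per node thereafter):

```python
# Back-of-the-envelope sketch of SolidFire's linear IOPS scaling, using
# the figures quoted above: 200K IOPS from the initial four nodes, then
# 50K per additional node (the SF19210 adds 100K, per the footnote).
def cluster_iops(total_nodes, per_node_iops=50_000):
    return 200_000 + (total_nodes - 4) * per_node_iops

for nodes in range(4, 9):
    print(f"{nodes} nodes: {cluster_iops(nodes):,} IOPS")
# 4 nodes: 200,000 IOPS ... 8 nodes: 400,000 IOPS
```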

Most other vendors in the IT industry will say that the use of QOS is merely a Band-Aid, a reactive measure, until you can fix the issue that has caused a workload to be starved or bullied. That requires manual intervention, not to mention the repercussions of letting things escalate to the point where they have already had a negative impact on the business.

We need to change this reactive methodology. Let’s start by lifting the term “quality of service” out of its drab connotations; give it a coiffured beard, skinny jeans and a double macchiato. Let’s add a “g” to this ageing acronym and turn that hipster loose on the world. gQOS is the millennial in the workplace, delivering a twenty-first-century impact on tasks and procedures that have been stuck in a rut for years. When you hear someone use QOS, ask, “Don’t you mean gQOS?” Then walk away in disgust when they look at you blankly.

With SolidFire you are able to allocate performance independently of capacity, in real time, without impacting other workloads. What does this mean, you may ask? No more noisy neighbours influencing the rest of the system. gQOS addresses the issue of shared resources and allows you to provide fool-proof SLAs back to the business, something sought both by enterprise organisations looking to undergo transformational change and by service providers with hundreds of customers on a single shared platform.

gQOS in action
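For the API-minded, here is a hedged sketch of what setting those three values looks like programmatically. The Element API’s CreateVolume method accepts a qos parameter with minimum, maximum and burst IOPS; the endpoint, credentials, account ID and sizes below are illustrative placeholders rather than recommendations:

```python
import requests

# Hedged sketch: creating a volume with a guaranteed QoS floor via the
# SolidFire Element JSON-RPC API. Endpoint, credentials, account ID and
# sizes are illustrative placeholders.
def api_call(method, params):
    resp = requests.post(
        "https://10.0.0.100/json-rpc/9.0",
        json={"method": method, "params": params, "id": 1},
        auth=("admin", "password"),
        verify=False,  # lab convenience only
    )
    resp.raise_for_status()
    return resp.json()["result"]

result = api_call("CreateVolume", {
    "name": "sql-data01",
    "accountID": 1,
    "totalSize": 1_000_000_000_000,  # ~1TB
    "enable512e": True,
    "qos": {
        "minIOPS": 5_000,     # the guarantee: the volume never drops below this
        "maxIOPS": 15_000,    # sustained ceiling
        "burstIOPS": 20_000,  # short-lived, credit-based burst allowance
    },
})
print(result["volumeID"])
```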

So let’s start positively promoting gQOS, because if it’s not guaranteed, can we really call it quality? If I were in the tagline-writing business, this area of the NetApp portfolio would read something like “SolidFire: Predictability Guaranteed.”

*The SF19210 adds 100K per node.

Grays Sports Almanac image courtesy of Firebox.com