Painting a Vanilla Sky

Expanding the NetApp Hybrid Cloud

During the first general session at NetApp Insight 2016 in Las Vegas, George Kurian, CEO (and a fascinating person to listen to), stated that “NetApp are the fastest growing SAN vendor and are also the fastest growing all-flash array vendor.” This is superb news for any hardware company, but for NetApp, this isn’t enough. He is currently leading the company’s transformation into one that serves you, the customer, in this new era of IT and reflects how you want to buy and consume IT. NetApp are addressing this with the Data Fabric.

If you need a better understanding of the Data Fabric, I would strongly suggest you look at this great two-part post from @TechStringy (part 1 here and part 2 here).

Back in 2001, Cameron Crowe released a film starring Tom Cruise called “Vanilla Sky.” In it, the main protagonist suffers a series of unfortunate events and, rather than face up to them, decides to have himself put into stasis until those problems can be resolved. Well, if managing data across varying cloud scenarios was his problem, then the announcements made by NetApp earlier this week would mean he could be brought back and stop avoiding the issues. So let’s take a look at some of what was announced:

NetApp Cloud Sync: This is a service offering that moves and continuously syncs data between on-premises storage and Amazon S3. For those of you who attended this year’s Insight in Las Vegas, this was the intriguing demo given by Joe CaraDonna illustrating how NASA interacts with the Mars rover Curiosity. Joe showed how information flows back to Earth via “JPL … the hub of mankind’s only intergalactic network,” all in an automated, validated, and predictably secure manner, and how great value can be realised from that data. Cloud Sync not only allows you to move huge amounts of data quickly into the cloud, it also gives you the ability to utilise the elastic compute of AWS, which is great if you are looking to carry out CPU-intensive workloads like MapReduce. If you are interested in what you have read or seen so far, head over to the Cloud Sync site, where you can sign up and take advantage of the 30-day free trial.
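Under the hood, the Cloud Sync target is an ordinary S3 bucket, so once a sync has completed you can point whatever AWS compute you like at the data. Here is a rough, hypothetical sketch using boto3 (the bucket name, prefix, and EMR settings are placeholders of mine, not part of the Cloud Sync service) that lists a few of the synced objects and spins up a small EMR cluster for MapReduce-style processing:

```python
import boto3

# Hypothetical names: replace with the bucket/prefix your Cloud Sync
# relationship actually writes to and your own EMR log location.
BUCKET = "my-cloudsync-target"
PREFIX = "rover-telemetry/"

s3 = boto3.client("s3")

# Peek at a sample of the objects Cloud Sync has landed in S3.
resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX, MaxKeys=10)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])

# Spin up a small EMR cluster next to the same bucket; submit the actual
# MapReduce work afterwards with add_job_flow_steps() or via Steps=[...].
emr = boto3.client("emr")
cluster = emr.run_job_flow(
    Name="cloudsync-analytics",
    ReleaseLabel="emr-5.0.0",
    Instances={
        "MasterInstanceType": "m4.large",
        "SlaveInstanceType": "m4.large",
        "InstanceCount": 3,
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    LogUri="s3://{}/emr-logs/".format(BUCKET),
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print("EMR cluster:", cluster["JobFlowId"])
```

The point is simply that once Cloud Sync has done the heavy lifting, the data sits in native S3 form and any AWS analytics service can get at it.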

Data Fabric Solution for Cloud Backup (ONTAP to AltaVault to Cloud): For those of you who saw the presentation at Insight 2015, this is the backing up of FAS via AltaVault using SnapCenter. This combination of portfolio items gives us the ability to provide end-to-end backups of NAS data while enabling single-file restores via the snapshot catalogue function. The service has a tonne of built-in policies to choose from; simply drag and drop items to get it configured. AltaVault can also now help with seeding your backup via an AWS Snowball device (or up to ten daisy-chained together as a single seeding target), so it has never been easier to get your data into the cloud and manage it there.
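As an aside, ordering the Snowball appliance itself is just an AWS import job; the AltaVault side of the seeding workflow is configured from the AltaVault UI. A hedged sketch with boto3 (every name, ARN, and address ID below is a placeholder) might look something like this:

```python
import boto3

# Illustrative only: this orders the Snowball appliance that the seed data
# would be copied onto. The AltaVault seeding itself is set up in the
# AltaVault UI. All names, ARNs, and IDs here are placeholders.
snowball = boto3.client("snowball")

job = snowball.create_job(
    JobType="IMPORT",
    Resources={
        "S3Resources": [
            {"BucketArn": "arn:aws:s3:::my-altavault-seed-bucket"}
        ]
    },
    Description="AltaVault cloud backup seed",
    AddressId="ADID-example",  # obtained beforehand via snowball.create_address(...)
    RoleARN="arn:aws:iam::123456789012:role/snowball-import-role",
    ShippingOption="SECOND_DAY",
    SnowballCapacityPreference="T80",
)
print("Snowball job created:", job["JobId"])
```

The idea being that the bulk of the seed travels by courier and only the incremental changes go over the wire afterwards.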

NetApp Cloud Control for Microsoft Office 365: This tool extends data protection, security, and compliance to your Office 365 environment to protect you from cyber-attacks and breaches in the cloud. It allows you to back up Exchange, SharePoint, and OneDrive for Business and vault a copy to another location, which could be an on-premises, nearby, or cloud environment, depending on your disaster recovery and business continuity policies. This is a great extension of the Data Fabric message, as we can now utilise FAS, ONTAP Cloud, AltaVault, and/or StorageGRID as backup targets for production environments running wherever you deem appropriate at that point in time.

NetApp Private Storage for Cloud: For customers that are after an OPEX model and see the previous NetApp Private Storage route as an inhibitor to this (because they need to source everything themselves), this is where NPS-as-a-Service comes into its own. It gives customers the ability to approach a single source and acquire everything they need to provide an NPS resource back to their company. NPS for Cloud is currently offered by Arrow ECS in the U.S. and is coming to Europe soon. The offering helps you create a mesh between storage systems and various clouds, giving you the ability to control where your data resides while providing the level of performance you want to the cloud compute of your choice.

ONTAP Cloud for Microsoft Azure: This is the second software-only data management IaaS offering for hyper-scalers to be added to the NetApp portfolio. ONTAP Cloud gives customers the ability to apply all that lovely data management functionality that has drawn people to NetApp FAS for years, layered on top of blob storage from your cloud provider. You get the great storage efficiencies and multi-protocol support with the ease of “drag and drop,” and you can manage replication to and from this software-defined storage appliance, with the ability to encrypt the data whilst it resides in the cloud. This service has a variety of use cases, from providing software development or production environments with storage controls to utilising it as a disaster recovery target.
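ONTAP Cloud itself is deployed and managed through NetApp tooling rather than code, but it really does sit on standard Azure blob storage, so you can peek at the underlying layer with the ordinary Azure SDK. A minimal sketch, assuming a hypothetical storage account and connection string (ONTAP Cloud owns and manages these blobs, so this is read-only curiosity, not administration):

```python
from azure.storage.blob import BlobServiceClient

# Placeholder connection string for the storage account that would back a
# hypothetical ONTAP Cloud instance. This only illustrates that ordinary
# blob storage sits underneath; ONTAP Cloud manages these blobs itself.
CONN_STR = (
    "DefaultEndpointsProtocol=https;AccountName=<account>;"
    "AccountKey=<key>;EndpointSuffix=core.windows.net"
)

svc = BlobServiceClient.from_connection_string(CONN_STR)

# Enumerate containers and a few blobs in each, just to see the raw layer.
for container in svc.list_containers():
    print("container:", container.name)
    client = svc.get_container_client(container.name)
    for blob in list(client.list_blobs())[:5]:
        print("  blob:", blob.name, blob.size)
```

Everything above that blob layer (the storage efficiencies, multi-protocol access, and replication) is what ONTAP Cloud adds.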

So if we look at an overview of the Data Fabric now, we can see the ability to move data around dependent on business requirements.

During his presentation at Insight 2016, George Kurian also said, “Every one of NetApp’s competitors is constructing the next data silo, or prison, from which data cannot escape.” Hopefully, by implementing the Data Fabric, NetApp customers can confidently build an IT business model which facilitates the flow of information within their organisation, so that it can grow and adapt to meet their ever-changing IT needs.

The Data Fabric is the data management architecture for the next era of IT, and NetApp intend to lead that era. With this recent enhancement of the Data Fabric and NetApp’s portfolio, there is no more need to be shouting “Tech Support!” Instead, we can all be Monet and paint a beautiful Vanilla Sky.

The NetApp, They Are A-Changin’


A lot of people criticise NetApp for not moving with the times. Some of the newer start-ups like to claim that NetApp is a legacy company, out of touch with today’s marketplace. Yet we all know the company has a rich and deep heritage spanning nearly a quarter of a century, with over 20 of those years spent on the NASDAQ, so they must be doing something right.

They also like to say NetApp are not in touch with today’s data centre requirements. I would question that. Today NetApp launches the start of a whole new line for the FAS and All Flash FAS side of the portfolio. They have announced three new FAS models: the FAS2600, the FAS8200, and the FAS9000, plus another two new models on the all-flash side. These systems are designed with the data centre of the future in mind, and these enterprise products again deliver an industry first (NetApp were the first to support 15.3TB SSD drives), along with next-generation networking in the form of 40GbE and 32Gb FC.

The FAS9000 is the new flagship of the line and introduces a new modular design similar to what we have seen Cisco adopt to great success in the UCS line. This system has 10 PCIe slots per controller which, when combined with either of the next-gen networking options previously mentioned, give HUGE amounts of bandwidth to either flash or NL-SAS drives. It also has a dedicated slot for NVMe SSD to help with read caching (aka Flash Cache) for those workloads that benefit from a read boost, and the NVRAM and controller modules can be swapped out separately, allowing for upgrades in the years to come. Here are some of the numbers associated with the FAS9000: it can scale up to 14PB per high-availability (HA) pair, or up to 172PB for a 24-node (12 HA pair) cluster in a NAS environment. Yes, that’s up to 172PB of flash storage managed as a single entity!

They also announced the arrival of the FAS8200, the new workhorse for enterprise workloads, delivering six nines or greater of availability. It carries 256GB of RAM (equivalent to today’s FAS8080, or 4x what’s found in a FAS8040) along with 1TB of NVMe M.2 Flash Cache as standard (which frees up a PCIe slot), and it can scale to 48TB of flash per HA pair when combined with Flash Pool technology. The FAS8200 also has 4x UTA2 and 2x 10GBase-T ports on board. This system is ready to go, and if you need to add 40GbE or 32Gb FC, the chassis will support the addition of those via expansion cards. This 3U chassis will support up to 4.8PB and can scale out to 57PB, meeting any multi-protocol or multi-application workload requirements.

Another new member of the FAS family is the FAS2600, which replaces the ever-popular FAS2500 series. In this market space, disk and controllers contained within the same chassis are prevalent, and the trend that started with the original FAS2000 (maybe even the good ole StoreVault) is still here today, with the FAS2600 offering similar options to the FAS2500 but now with SAS3 support. We have the FAS2620, which supports large form factor drives, whilst the FAS2650 supports the smaller variants. Something that is new to the FAS2000 series is the inclusion of Flash Cache, and the FAS2600 has received the gift of NVMe with 1TB standard per HA pair. Changes have also been made to the networking. No longer do we have dedicated 1GbE ports; they have been changed to 10GbE ports used for cluster interconnects (scaling up to 8 nodes in this range), which leaves all 4 UTA2 ports free for data connectivity. And if you still require 1GbE, it can be achieved via SFPs for these UTA2 ports (X6567-R6 for optical and X6568-R6 for RJ45).

NetApp, a company that, for some, may not be known for its flash portfolio yet has sold north of 575PB of the stuff, have also announced two new controllers for the All-Flash Array (AFA) space: the A300 and the A700. These systems are designed purely for flash media, and it shows, with the A300 supporting 256GB of RAM whilst the A700 runs with a terabyte of RAM (1024GB)! This huge jump will allow for a lot more processing from the 40Gb and 32Gb networks whilst still delivering microsecond response times. For this ultra-low latency, we are looking at products like the Brocade X6 director for FC or Cisco’s 3132Q-V for Ethernet to meet these ever-increasing demands.

These new systems will support the world’s number one storage OS, ONTAP, at version 9.1 and beyond, with this new release also announced today. ONTAP 9.1 in itself has some improvements over previous versions. We have seen some major boosts to performance, especially in the SME space, with the FAS2600 gaining a 200% performance improvement over the previous generation, while the FAS8200 and FAS9000 come in about 50% better than their predecessors. The new stellar performer in the AFA space is the A700; this new AFA has been reported to handle practically double the workload of an AFF8080 running an Oracle database, which is another huge leap in performance.

There are a couple of other nice new features in ONTAP 9.1 which I will mention here but won’t go into too much detail on. The first would be FlexGroups: a single namespace spanning multiple controllers and scaling all the way to 20PB or 400 billion files (think Infinite Volumes, but done a lot better). Then there’s cloud tiering, the ability of an AFA to utilise an S3 object store for its cold data; now that’s H. O. T. HOT! ONTAP 9.1 also brings us volume-level encryption, which will work with any type of drive and only encrypts the data that needs it. The Data Fabric also gets an upgrade with the inclusion of ONTAP Cloud for Azure, which has been a while behind the cloud version for AWS but is worth the wait. And finally, the enterprise products running ONTAP 9.1 gain the ability to scale to 12 nodes within a single SAN cluster (that’s the ability to add another 4 nodes).

On another note, NetApp did launch another new box just a couple of weeks ago: the E2800, sporting SANtricity OS 8.30, also available in AFA variants and delivering over 300,000 IOPS in a box designed for small and mid-sized businesses. Like the SolidFire side of the portfolio, it should not be overlooked if it meets all of your requirements.

So come gather ’round people, writers and critics alike, and take a good look. I think we can safely say that NetApp is a-keeping itself in the game and delivering platforms that go beyond tomorrow’s requirements.

But the big question everyone wants to know is, “What does it look like?” For the answer to that, you should be at NetApp Insight!