Painting a Vanilla Sky

Expanding the NetApp Hybrid Cloud

During the first general session at NetApp Insight 2016 in Las Vegas, George Kurian, CEO (and a fascinating person to listen to), stated that “NetApp are the fastest growing SAN vendor and are also the fastest growing all-flash array vendor.” This is superb news for any hardware company, but for NetApp, this isn’t enough. He is currently leading the company’s transformation into one that serves you, the customer, in this new era of IT, addressing how you want to buy and consume IT. NetApp’s answer is the Data Fabric.

If you need a better understanding of the Data Fabric, I would strongly suggest you look at this great two-part post from @TechStringy (part 1 here and part 2 here).

Back in 2001, Cameron Crowe released a film starring Tom Cruise called “Vanilla Sky.” In it, the main protagonist suffers a series of unfortunate events and, rather than face up to them, decides to have himself put in stasis until those problems can be resolved. Well, if managing data within varying cloud scenarios was his problem, then the announcements NetApp made earlier this week would mean he could be brought back to stop avoiding the issues. So let’s take a look at some of what was announced:

NetApp Cloud Sync: This is a service offering that moves and continuously syncs data between on-prem storage and S3 cloud storage. For those of you who attended this year’s Insight in Las Vegas, this was the intriguing demo given by Joe CaraDonna illustrating how NASA interacts with the Mars rover Curiosity. Joe showed how information flows back to Earth via “JPL … the hub of mankind’s only intergalactic network,” all in an automated, validated, and predictably secure manner, and how they can realise great value from that data. Cloud Sync not only allows you to move huge amounts of data quickly into the cloud, but it also gives you the ability to utilise the elastic compute of AWS, which is great if you are looking to carry out CPU-intensive workloads like MapReduce. If you are interested in what you have read or seen so far, head over to the site, where you can sign up now and take advantage of the 30-day free trial.
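The sync itself is configured and run from NetApp’s service, but you can sanity-check the results from the AWS side before pointing any compute at them. Here’s a minimal sketch using the AWS Tools for PowerShell (nothing Cloud Sync specific; the bucket name and prefix are hypothetical, and it assumes your AWS credentials and region are already configured):

```powershell
# Sketch: spot-check what Cloud Sync has landed in S3
# (bucket name and key prefix are hypothetical placeholders)
Import-Module AWSPowerShell

# List the objects the sync relationship has written so far
$objects = Get-S3Object -BucketName 'my-cloudsync-target' -KeyPrefix 'nfs-export-01/'

# Summarise count and size before pointing EMR/MapReduce at the data set
$totalGB = ($objects | Measure-Object -Property Size -Sum).Sum / 1GB
'{0} objects, {1:N1} GB synced' -f $objects.Count, $totalGB
```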

Data Fabric Solution for Cloud Backup (ONTAP to AltaVault to Cloud): For those of you who saw the presentation at Insight 2015, this is the backing up of FAS via AltaVault using SnapCenter. This interaction of portfolio items gives us the ability to provide end-to-end backups of NAS data while enabling single-file restores via the snapshot catalogue function. The service has a tonne of built-in policies to choose from; simply drag and drop items to get it configured. AltaVault also now has the ability to help with seeding of your backup via an AWS Snowball device (or up to ten daisy-chained together as a single seeding target), so it’s never been easier to get your data into the cloud and manage it there.

NetApp Cloud Control for Microsoft Office 365: This tool extends data protection, security, and compliance to your Office 365 environment to protect you from cyber-attacks and breaches in the cloud. It allows you to back up your Exchange, SharePoint, and OneDrive for Business data and vault a copy to another location, which could be an on-prem, nearby, or cloud environment, depending on your disaster recovery and business continuity policies. This is a great extension of the Data Fabric message, as we can now utilise FAS, ONTAP Cloud, AltaVault, and/or StorageGRID as backup targets for production environments running wherever you deem appropriate at that point in time.

NetApp Private Storage for Cloud: For customers that are after an OPEX model and see the previous NetApp Private Storage route as an inhibitor to this (as they would need to source everything themselves), this is where NPS-as-a-Service comes into its own. It gives customers the ability to approach a single source and acquire what they need to provide an NPS resource back to their company. A solution offering for NPS for Cloud is currently offered by Arrow ECS in the U.S. and is coming to Europe soon. This offering helps you create a mesh between storage systems and various clouds, giving you the ability to control where your data resides while providing the level of performance you want to the cloud compute of your choice.

ONTAP Cloud for Microsoft Azure: This is the second software-only data management IaaS offering for hyperscalers added to the NetApp portfolio. ONTAP Cloud gives customers the ability to apply all that lovely data management functionality that has drawn people to NetApp FAS for years, layered on top of blob storage from your cloud provider. You get the great storage efficiencies and multi-protocol support with the ease of “drag and drop,” and you can manage replication to and from this software-defined storage appliance, with the ability to encrypt the data whilst it resides in the cloud. This service has a variety of use cases, from providing software development or production with storage controls to utilising it as a disaster recovery entity.

So if we look at an overview of the Data Fabric now, we can see the ability to move data around dependent on business requirements.

During his presentation at Insight 2016, George Kurian also said, “Every one of NetApp’s competitors is constructing the next data silo, or prison, from which data cannot escape.” Hopefully, by implementing the Data Fabric, NetApp customers can confidently build an IT business model which facilitates the flow of information within their organisation so that it can grow and adapt to meet their ever-changing IT needs.

The Data Fabric is the data management architecture for the next era of IT, and NetApp intend to lead that era. With this recent enhancement of the Data Fabric and NetApp’s portfolio, there is no more need to be shouting “Tech Support!” Instead, we can all be Monet and paint a beautiful Vanilla Sky.


The NetApp, They Are A-Changin’


A lot of people criticise NetApp for not moving with the times. Some of the newer start-ups like to claim that NetApp is a legacy company not in touch with today’s marketplace. Yet we all know the company has a rich and deep heritage spanning nearly a quarter of a century, with over 20 of those years spent on the NASDAQ, so they must be doing something right.

They also like to say NetApp are not in touch with today’s data centre requirements. I would question that. Today NetApp launches the start of a whole new line for the FAS and All Flash FAS side of the portfolio. They have announced three new FAS models: the FAS2600, the FAS8200, and the FAS9000. And on the all-flash side, another two new models. These systems are designed with the data centre of the future in mind, and these enterprise products again deliver an industry first (NetApp were the first to support 15.3TB SSD drives), with next-generation networking in the form of 40GbE and 32Gb FC.

The FAS9000 is the new flagship of the line, and introduces a new modular design similar to what we have seen Cisco adopt to great success in the UCS line. This system has 10 PCIe slots per controller which, combined with either of the next-gen networking options previously mentioned, give HUGE amounts of bandwidth to both flash and NL-SAS drives. It also has a dedicated slot for NVMe SSD to help with read caching (aka Flash Cache) for those workloads that benefit from a read boost, and has the ability to swap out the NVRAM and controller modules separately, allowing for expansion upgrades in the years to come. Here are some of the numbers associated with the FAS9000: it can scale up to 14PB per high-availability (HA) pair, or up to 172PB for a 24-node (12 HA pair) cluster in a NAS environment. Yes, that’s up to 172PB of storage managed as a single entity!!

They also announced the arrival of the FAS8200, the new workhorse for enterprise workloads, delivering six 9s or greater of availability. It carries 256GB of RAM (equivalent to today’s FAS8080, or 4x what’s found in a FAS8040), with 1TB of NVMe M.2 Flash Cache as standard (which frees up a PCIe slot), and can scale to 48TB of flash per HA pair when combined with Flash Pool technology. The FAS8200 also has 4x UTA2 and 2x 10GBASE-T ports on board. This system is ready to go, and if you need to add 40GbE or 32Gb FC, this chassis will support the addition of those via cards. This 3U chassis will support up to 4.8PB and can scale out to 57PB, meeting any multi-protocol or multi-application workload requirements.

Another new member of the FAS family is the FAS2600, which replaces the ever-popular FAS2500 series. For this market space, disk and controllers contained within the same chassis are prevalent, and the trend that started with the original FAS2000 (maybe even the good ole StoreVault) is still here today, with the FAS2600 offering similar options to the FAS2500 but now with SAS3 support. We have the FAS2620, which supports large-form-factor drives, whilst the FAS2650 supports the smaller variants. Something that is new to the FAS2000 series is the inclusion of Flash Cache, and the FAS2600 has received the gift of NVMe with 1TB standard per HA pair. Changes to the networking have been made, too. No longer do we have dedicated GbE ports; instead, they have been changed to 10GbE ports used for cluster interconnects (this range scales up to 8 nodes), freeing all 4 UTA2 ports for data connectivity. And if you still require 1GbE, it can be achieved via SFPs for these UTA2 ports (X6567-R6 for optical and X6568-R6 for RJ45).

NetApp, a company that, for some, may not be known for its flash portfolio (yet has sold north of 575PB of the stuff), have also announced two new controllers for the All-Flash Array (AFA) space: the A300 and the A700. These systems are designed purely for flash media, and it shows, with the A300 supporting 256GB of RAM whilst the A700 runs with a terabyte of RAM (1024GB)! This huge jump will allow for a lot more processing from the 40GbE and 32Gb FC networks whilst still delivering microsecond response times. For this ultra-low latency, we are looking at products like the Brocade X6 director for FC or Cisco’s 3132Q-V for Ethernet to meet these ever-increasing demands.

These new systems will support the world’s number one storage OS, ONTAP, with version 9.1 also announced today. ONTAP 9.1 in itself has some improvements over previous versions. We have seen some major boosts to performance, especially in the SME space, with the FAS2600 gaining a 200% performance improvement over the previous generation and the FAS8200 and FAS9000 coming in about 50% better than their predecessors. The new stellar performer in the AFA space is the A700. This new AFA has been reported to handle practically double the workload of an AFF8080 running an Oracle database, which is another huge leap in performance.

There are a couple of other nice new features in ONTAP 9.1 which I will mention here, but won’t go into too much detail on. The first would be FlexGroups: a single namespace spanning multiple controllers, scaling all the way to 20PB or 400 billion files (think Infinite Volumes, but done a lot better). Then there’s cloud tiering: the ability of an AFA to utilise an S3 object store for its cold data; now that’s H. O. T. HOT! ONTAP 9.1 also brings us volume-level encryption, which will work with any type of drive and only encrypt the data that needs it. The Data Fabric also gets an upgrade with the inclusion of ONTAP Cloud for Azure, which has been a while behind the cloud version for AWS but is worth the wait. And finally, the Enterprise products running ONTAP 9.1 gain the ability to scale to 12 nodes within a single SAN cluster (that’s the ability to add another 4 nodes).
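To make the volume-level encryption point concrete, here’s a minimal sketch using the NetApp PowerShell Toolkit to pass the ONTAP 9.1 CLI through to the cluster; the cluster, SVM, and aggregate names are all hypothetical:

```powershell
# Sketch: create a volume with NetApp Volume Encryption (names are hypothetical)
Import-Module DataONTAP
Connect-NcController cluster1.example.com -Credential (Get-Credential)

# NVE assumes a key manager has been configured first
# ("security key-manager setup" at the cluster shell for the onboard option)
Invoke-NcSsh "volume create -vserver svm1 -volume secure_vol -aggregate aggr1 -size 500g -encrypt true"

# Confirm the new volume came online
Get-NcVol -Name secure_vol
```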

On another note, NetApp did launch another new box just a couple of weeks ago: the new E2800, sporting SANtricity OS 8.30, also available in AFA variants and delivering over 300,000 IOPS in a box designed for small and mid-sized businesses. Like the SolidFire side of the portfolio, it should not be overlooked if it meets all of your desired requirements.

So come gather ’round people, writers and critics alike. Take a good look. I think we can safely say that NetApp is a-keeping itself in the game and delivering platforms that go beyond tomorrow’s requirements.

But the big question everyone wants to know is, “What does it look like?” For the answer to that, you should be at NetApp Insight!

ONTAP 9: A new flavour with plenty of features


Name change

NetApp recently announced the upcoming release of the flagship operating system for their FAS and AFF product lines. ONTAP 9, as you can glean from the name, is the ninth iteration of this OS, which, like a fine wine, keeps getting better with age. Some of you will also have noticed the simplification of the name: no more “Clustered” or “Data”, just simply ONTAP. The reality is that clustering is the standard way to deploy controllers which store data, so it’s not really necessary to repeat that in the name, a bit like Ikea telling you the things you can put inside a Kullen (or Hemnes or Trysil, which are all improvements over the Hurdal). But the most important thing about this change is the numeral at the end: 9. This is the next major release of the operating system, providing all the features that were available in 7-Mode but also so much more.

So now that we have got that out of the way, let’s see what else has changed…

New features

Let’s take a quick look at some of the new features, so grab a pen (or, for the millennials, your phone camera):

  • Firstly, I think I should mention that you can now get ONTAP in three different varieties dependent on your use case: the appliance-based version, ONTAP; the hyperscaler version, ONTAP Cloud; and the software-only version, ONTAP Select. This should allow for management of data wherever it exists.
  • SnapLock – Yes, the feature everybody wanted to know the whereabouts of when comparing cDOT with Data ONTAP 7-Mode (yet which, according to ASUP, less than 5% of systems worldwide actually used) is back: WORM functionality to meet retention and compliance requirements.
  • Compaction – A storage efficiency technology that, when combined with NetApp’s inline deduplication and compression, allows you to fit even more into each storage block. More on this technology in a later post.
  • MetroCluster – The ability to scale out to up to 8 nodes; we can now have 1, 2 or 4 nodes per site as supported configurations. NetApp have also added the ability to have non-mirrored aggregates on a MetroCluster.
  • Onboard Key Manager – Removes the need for an off-box key manager system when encrypting data.
  • Windows Workgroups – Another feature making a return is the ability to set up a CIFS/SMB workgroup, so we no longer need an Active Directory infrastructure to carry out simple file sharing.
  • RAID-TEC – Triple Erasure Coding, expanding on the protection provided by RAID-DP by adding triple-parity support to our RAID groups. This technology is going to be crucial as we expand to SATA drives in excess of 8TB and SSDs beyond 16TB (see the sketch after this list).
  • 15TB SSD support – Yes, you read that right: NetApp are one of, if not the, first major storage vendor to bring 15.3TB SSDs to market. We can utilise these with an AFF8080, giving you 1PB of guaranteed effective capacity in a 2U disk shelf (24 of these drives is roughly 367TB raw, so the petabyte figure assumes around a 3:1 efficiency ratio)!!! This will radically change the way people think about and design the datacentres of the future. By shrinking the required hardware footprint we in turn reduce the power and cooling requirements, lowering overall OPEX; this will lead to a hugely reduced timeframe for return on investment on this technology, which in turn will drive adoption.
  • AFF deployments – With ONTAP 9, NetApp are introducing the ability to rapidly provision storage for applications: one simple input screen gets an application deployed and using the storage within 10 minutes, with the wizard following all the best practices for the selected application.
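As promised above, here’s a minimal sketch of the RAID-TEC feature in action, again via the PowerShell Toolkit wrapping the ONTAP CLI; the node, aggregate name, and disk count are hypothetical:

```powershell
# Sketch: build a RAID-TEC (triple parity) aggregate for large-capacity drives
# (node, aggregate name and disk count are hypothetical)
Import-Module DataONTAP
Connect-NcController cluster1.example.com -Credential (Get-Credential)

# raid_tec adds a third parity drive per RAID group, aimed at 8TB+ SATA and 16TB SSDs
Invoke-NcSsh "storage aggregate create -aggregate aggr_sata_tec -node cluster1-01 -diskcount 20 -raidtype raid_tec"

# Check the resulting layout
Invoke-NcSsh "storage aggregate show -aggregate aggr_sata_tec -fields raidtype"
```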

Upgrade concerns

One of the worries people previously had with regard to NetApp FAS systems was how to upgrade the OS across your environment, especially if you had systems at both primary and DR sites.

Version-independent SnapMirror, which arrived with 8.3, is great if you have a complex system of bidirectional, water-falling relationships; planning an upgrade prior to this needed an A1-sized PERT chart. Now that NetApp allow for an automated rolling upgrade around a cluster, it’s the same steps whether you have 2 or 24 controllers, which should be welcome news for those customers who have gone for a scale-out approach to tackling their storage requirements (and I salute you on your choice). Today you can undertake a complete cluster upgrade with three commands, which is such a slick process; heck, you can even call the API from within a PowerShell script.
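For a flavour of how slick that is, the sketch below shows roughly what those three commands look like when driven from the PowerShell Toolkit, as mentioned; the image URL and version string are placeholders:

```powershell
# Sketch: the three-step automated non-disruptive upgrade, driven from PowerShell
# (image URL and target version are placeholders)
Import-Module DataONTAP
Connect-NcController cluster1.example.com -Credential (Get-Credential)

Invoke-NcSsh "cluster image package get -url http://webserver.example.com/ontap/91_image.tgz"
Invoke-NcSsh "cluster image validate -version 9.1"
Invoke-NcSsh "cluster image update -version 9.1"

# Watch the rolling upgrade make its way around the cluster
Invoke-NcSsh "cluster image show-update-progress"
```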

How does it look?

Below I have a few screenshots showing some of the new interface, including the new performance statistics that OnCommand System Manager can now display.

Notice the new menu along the top. This helps to make moving around a lot easier.

Here we can see some of the performance figures for a cluster. As this is a sim, I didn’t really drive much IO at it, but this will be very useful in production, giving you insight into how your cluster is performing at 15-second intervals.

Another nice feature of the latest release is the search ability, which I think will come into its own in larger multi-protocol installations of several PB, helping you home in on the resource you are after more quickly.

First impressions

For this article I am using a version in a lab environment, and from its slick new graphical interface (see above) to the huge leaps made under the covers, this OS keeps getting stronger. The GUI is fast to load even on a sim, the wizards are methodical, and the layout is intuitive; once you start using it and have to jump back onto an 8.x version, as I did, you will appreciate the subtle differences and refinements that have gone into ONTAP 9.

Overall takeaways

With the advent of ONTAP 9, NetApp have also announced a six-month cadence for future releases, making it easier to plan for upgrades and improvements, which is good news for those shops who like to stay at the forefront of technology. The features above and the advancements made under the covers should hopefully illustrate that NetApp is not a company that rests on its laurels but one that strives for innovation. The ability to keep adding more and more features while making the product simpler to manage, monitor and understand is a remarkable trait, and with this new major software release we get a great understanding of what the company hopes to achieve in the coming years.

This is also an exciting upgrade for the Data Fabric. As mentioned above, ONTAP 9 is now available in three separate variants: ONTAP, engineered for FAS and AFF; ONTAP Select for software-defined storage, currently running on top of vSphere or KVM; and ONTAP Cloud, running in AWS and soon Azure. Businesses can now take even greater control of their data as they move to a bimodal method of IT deployment. As more and more people move to a hybrid multi-cloud model, we will see them adopting these three options in varying amounts to provide the data management and functionality they require. As companies mix all three variations into their storage strategy, we get what I like to call the Neapolitan Effect (probably the best of all ice-cream flavours): the very best data storage and management delivered wherever it’s needed, thanks to the ability of ONTAP to run simply anywhere.

So go out and download a copy today!

Are you a Major Boothroyd?

As IT grows and changes over the years, it becomes apparent that we need a solution that is as adaptable as the game itself.

For years this game revolved around “FC SAN and nothing else will do” and “don’t even talk to me if you don’t support FC”. Then iSCSI planted the flag for Ethernet protocols, drive technologies advanced, and capacities got denser, so that today we find ourselves supporting a multitude of protocols in ever more complex environments. But is this enough of a playbook, and are these the plays you want to be running?

We are all aware of the rate of change described by Moore’s law, so how do we keep up and adapt? We are expected to stay ahead of the game, but sometimes just trying to keep up is a struggle in itself.

To borrow a quote from a colleague, Alex Nicholson, “You date your servers but marry your storage.” This may seem clichéd, but when you think about it, it does cover the basics: compute is transitory while data is persistent. It’s quick and easy to migrate an application between physical servers, so when the time comes you can take advantage of any new advancements in processing power and RAM speeds; yet storage migration takes time, and because of this there are policies in place so that more thought is given to the process before data is transferred.

Live Migration and Storage vMotion are two effective ways to move data when it’s a virtual workload, but what happens if it’s not? How do we move the information then? The answer usually involves many meetings and POCs before the job is undertaken, which adds weeks or months to the task, and by that time the goalposts may have moved. Surely there is an easier way to do this, one that doesn’t require dragging the data up to a server only to write it somewhere else.

Now let’s throw into the mix the ability to tier the data, i.e. boost or constrict performance. And just when you think you have all that covered, your CTO drafts a directive that you need to be “utilising the cloud more”, having read a recent Computerworld Forecast study showing that the highest-rated, single most important project IT departments are working on right now is cloud, with KPMG’s cloud survey report also showing that 49% are using cloud to transform their business and drive cost efficiencies; he doesn’t want your business left behind. What do you do?

There are a couple of methods you can follow to help resolve all of this. One way is to go to your current supplier, ask them to ship you several of their current boxes across their portfolio, and pray to the data gods that it sates the business’ appetite until the next megalomaniac of an application appears. But just as you wouldn’t let your insurance policy roll onto a new contract without checking the best available deal, you shouldn’t do it with IT. So another way is to research, evaluate, and (one I know I’m guilty of) procrastinate; but are you getting the job done? Or are you sitting on the fence awaiting a solution to magically present itself?

Today’s application builders and managers are seen by the business as saviours of the world with a licence to kill, and they are being funded by the business to do so. With a conservative 28% of businesses’ IT spend not being controlled by IT departments, these LOBs are being armed to take the fight to the “Ernst Stavro Blofeld” in their sector. So how can you help?

What if your IT investment made you more agile by incorporating measurable functionality like storage efficiencies, high availability and non-disruptive operations, all the while giving you the flexibility to craft, create and customise a hybrid cloud on your own terms, as you envision it, for now and for future demands on the infrastructure?

What if this solution had best-of-breed data protection fully integrated into a robust portfolio, not only reducing risk but cutting the associated costs and speeding up reaction times to meet the ever more demanding SLAs stipulated by businesses?

What if this solution could incorporate all-flash solutions and their benefits for key applications, yet avoid the SAN-island-in-the-data-lake scenario and actually be seamlessly amalgamated into the current infrastructure?

What if all of this had a measurable ROI that delivered in months not years?

This isn’t some DB10 prototype with some “aftermarket” upgrades; this is available today, and out of the box, from NetApp.

And that box can be a virtual one as well. And when I say virtual, I don’t just mean an on-prem hypervisor; you can also get this from your hyperscaler’s marketplace and deploy in minutes! When you consider that RightScale’s 2015 State of the Cloud report found 55% of enterprises pursuing a hybrid cloud strategy and a further 10% a public-cloud-only strategy, this has to be the tool of choice. Not to mention that with the next version of ONTAP Cloud providing encryption, you know your data is protected.

And now here’s the clever bit: we’re not talking about creating an archipelago within an uncontrollable sea of data with a plethora of management tools. We are talking about a SINGLE OS with SINGLE management, to the point that we can now drag and drop a relationship from flash to disk to cloud. We are talking about a seamless hybrid cloud architecture while remaining totally in control of your data. This is a differentiated approach to the hybrid cloud. This is the Data Fabric.
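Under that drag and drop sits a plain SnapMirror relationship, which is why the target can live anywhere ONTAP runs. Here’s a minimal sketch of the equivalent commands via the PowerShell Toolkit, with hypothetical cluster, SVM, and volume names:

```powershell
# Sketch: the SnapMirror relationship behind the drag and drop
# (cluster, SVM and volume names are hypothetical; the destination could be
#  an AFF, a FAS or an ONTAP Cloud instance - it's the same OS either way)
Import-Module DataONTAP
Connect-NcController dest-cluster.example.com -Credential (Get-Credential)

Invoke-NcSsh "snapmirror create -source-path svm_prod:app_vol -destination-path svm_dr:app_vol_dr -type XDP -policy MirrorAllSnapshots"
Invoke-NcSsh "snapmirror initialize -destination-path svm_dr:app_vol_dr"

# List relationships to confirm the new mirror
Get-NcSnapmirror
```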

Cloud may not be right for everybody, and to borrow a quote from Dave Hitz: when questioned “Is the cloud right for MY business?”, he will reply “I. Don’t. Know.” Because every business and its needs are different. But having that ability in the bag for when you may have a need or want for it is surely going to help you sleep a bit easier at night, knowing a deployment will take you minutes, not months, if someone comes knocking.

So now you have all the bases covered. Before you know it, line-of-business managers will be coming to you with complex data management problems, and before they know it, you’ll be providing them with whatever they want, from an exploding key-fob to wrist-mounted dart guns to a jetpack. Just make sure they “do bring it back in one piece!”