ONTAP 9: A new flavour with plenty of features


Name change

NetApp recently announced the upcoming release of their flagship operating system for their FAS and AFF product lines. ONTAP 9, as you can glean from the name, is the ninth iteration of this OS, which, like a fine wine, keeps getting better with age. Some of you will also have noticed the simplification of the name: no more “Clustered”, no more “Data”, just simply ONTAP. The reality is that clustering is the standard way to deploy controllers which store data, so it’s not really necessary to repeat that in the name, a bit like Ikea telling you the things you can put inside a Kullen (or Hemnes or Trysil, which are all improvements over the Hurdal). But the most important thing about this change is the numeral at the end: 9. This is the next major release of the operating system, providing all the features that were available in 7-mode and so much more.

So now that we have got that out of the way, let’s see what else has changed…

New features

Let’s take a quick look at some of the new features; so grab a pen (or, for the millennials, your phone camera):

  • Firstly, I should mention you can now get ONTAP in three different varieties dependent on your use case: the appliance-based version, ONTAP; the hyperscaler version, ONTAP Cloud; and the software-only version, ONTAP Select. This should allow for management of data wherever it exists.
  • SnapLock – Yes, the feature that everybody asked after when comparing cDOT with Data ONTAP 7-mode, yet which less than 5% of systems worldwide actually used (according to ASUP), is back. WORM functionality to meet retention and compliance requirements.
  • Compaction – A storage efficiency technology that, when combined with NetApp’s inline deduplication and compression, allows you to fit even more into each storage block. More on this technology in a later post.
  • MetroCluster – The ability to scale out to up to 8 nodes; 1, 2 or 4 nodes per site are now supported configurations. NetApp have also added the ability to have non-mirrored aggregates on a MetroCluster.
  • Onboard Key Manager – Removes the need for an off-box key management system when encrypting data.
  • Windows Workgroups – Another feature making a return is the ability to set up a CIFS/SMB workgroup, so we no longer need an Active Directory infrastructure to carry out simple file sharing.
  • RAID-TEC – Triple Erasure Coding, expanding on the protection provided by RAID-DP by adding triple parity support to our RAID groups. This technology is going to be crucial as we expand to SATA drives in excess of 8TB and SSDs beyond 16TB.
  • 15TB SSD support – Yes, you read that right: NetApp are one of the first, if not the first, major storage vendors to bring 15.3TB SSDs to market. We can utilise these with an AFF8080, giving you 1PB of guaranteed effective capacity in a 2U disk shelf! To continue that train of thought, we could scale out to 367TB of effective AFF capacity within a single cluster. This will radically change the way people think about and design the datacentres of the future. By shrinking the required hardware footprint we in turn reduce the power and cooling requirements, lowering overall OPEX; that leads to a hugely reduced timeframe for return on investment, which in turn will drive adoption.
  • AFF deployments – With ONTAP 9, NetApp are introducing the ability to rapidly provision storage for applications: within 10 minutes, from one simple input screen, via a wizard that follows all the best practices for the selected application.
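To put the RAID-TEC bullet above into numbers, here is a minimal sketch of how the parity count affects the raw capacity left for data in a RAID group. The group size and the 15.3TB drive size are purely illustrative examples, not sizing guidance (real usable capacity also depends on spares, right-sizing and efficiency features).

```python
# Illustrative only: data disks left per RAID group after parity is set aside.
# RAID 4 dedicates 1 parity disk, RAID-DP 2, RAID-TEC 3.
PARITY_DISKS = {"RAID4": 1, "RAID-DP": 2, "RAID-TEC": 3}

def usable_tb(raid_type: str, disks_in_group: int, disk_tb: float) -> float:
    """Raw capacity available for data once parity disks are subtracted."""
    data_disks = disks_in_group - PARITY_DISKS[raid_type]
    return data_disks * disk_tb

# Example: a 24-disk group of 15.3TB SSDs.
# RAID-DP  leaves 22 data disks; RAID-TEC leaves 21 data disks,
# i.e. one extra drive of parity buys survival of a third concurrent failure.
for raid in ("RAID-DP", "RAID-TEC"):
    print(raid, usable_tb(raid, 24, 15.3))
```

The trade-off is clear: at these drive sizes the extra parity disk costs very little relative capacity, while rebuild windows on 8TB+ drives make that third failure a real risk worth covering.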

Upgrade concerns

One of the worries people previously had with NetApp FAS systems was how to upgrade the OS across your environment, especially if you had systems at both primary and DR sites.

Version-independent SnapMirror, which arrived with 8.3, is great if you have a complex system of bidirectional, waterfalling relationships; prior to this, planning an upgrade needed an A1-sized PERT chart. Now that NetApp allow for an automated rolling upgrade around a cluster, for those customers out there who have gone for a scale-out approach to tackling their storage requirements (and I salute you on your choice) the steps are the same whether you have 2 or 24 controllers. Today you can undertake a complete cluster upgrade with three commands, which is such a slick process; heck, you can even call the API from within a PowerShell script.
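For the curious, the three commands in question are the automated nondisruptive upgrade (ANDU) steps in the cluster shell: stage the image, validate, then update. Below is a small illustrative Python wrapper (not NetApp’s toolkit or API, just a sketch) that builds those three commands and the `ssh` invocations to run them; the admin host and image URL are hypothetical placeholders.

```python
# Sketch only: the three ONTAP cluster-shell commands behind an automated
# nondisruptive (rolling) cluster upgrade, wrapped for execution over SSH.
# Hostname and package URL below are hypothetical placeholders.

def andu_commands(image_url: str, version: str) -> list:
    """Return the three CLI steps of a complete rolling cluster upgrade."""
    return [
        f"cluster image package get -url {image_url}",  # stage the new image on the cluster
        f"cluster image validate -version {version}",   # pre-flight checks on every node
        f"cluster image update -version {version}",     # rolling, nondisruptive update
    ]

def over_ssh(admin_host: str, commands: list) -> list:
    """Turn each CLI step into an argv list suitable for subprocess.run()."""
    return [["ssh", f"admin@{admin_host}", cmd] for cmd in commands]

# e.g. over_ssh("cluster1.example.com",
#               andu_commands("http://webserver/ontap9_image.tgz", "9.0"))
```

Whether the cluster has 2 nodes or 24, the operator-facing flow is these same three steps; the rolling, per-node orchestration happens inside `cluster image update`.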

How does it look?

Below are a few screenshots showing some of the new interface, including the new performance statistics that OnCommand System Manager can now display.

Notice the new menu along the top. This helps to make moving around a lot easier.

Here we can see some of the performance figures for a cluster. As this is a sim I didn’t drive much IO at it, but it will be very useful in production, giving you insight into how your cluster is performing at 15-second intervals.

Another nice feature of the latest release is the search ability, which I think will come into its own in larger multi-protocol installations of several PB, helping you home in on the resource you are after more quickly.

First impressions

For this article I am using a version in a lab environment, and from its slick new graphical interface (see above) to the huge leaps made under the covers, this OS keeps getting stronger. The GUI is fast to load even on a sim, the wizards are methodical and the layout intuitive; once you start using it and then have to jump back onto an 8.x version, as I did, you will appreciate the subtle differences and refinements that have gone into ONTAP 9.

Overall takeaways

With the advent of ONTAP 9, NetApp have also announced a six-month cadence for future releases, making it easier to plan for upgrades and improvements, which is good news for those shops who like to stay at the forefront of technology. The features above and the advancements made under the covers should illustrate that NetApp is not a company that rests on its laurels but one that strives for innovation. The ability to keep adding more and more features while making the product simpler to manage, monitor and understand is a remarkable trait; and with this new major software release we get a great sense of what the company hopes to achieve in the coming years.

This is also an exciting upgrade for the Data Fabric. As mentioned above, ONTAP 9 is now available in three separate variants: ONTAP, engineered for FAS and AFF; ONTAP Select for software-defined storage, currently running on top of vSphere or KVM; and ONTAP Cloud, running in AWS and soon Azure. Businesses can now take even greater control of their data as they move to a bimodal method of IT deployment. As more and more people move to a hybrid multi-cloud model, we will see these three options adopted in varying amounts to provide the data management and functionality required. As companies mix all three variations we get what I like to call the Neapolitan Effect (probably the best of all ice-cream flavours) in their storage strategy, delivering the very best data storage and management wherever it is needed, thanks to the ability of ONTAP to run simply anywhere.

So go out and download a copy today!

Are you a Major Boothroyd?

As IT grows and changes over the years it becomes apparent that we need a solution that is as adaptable as the game itself.

For years this game revolved around “FC SAN and nothing else will do” and “don’t even talk to me if you don’t support FC”. Then iSCSI planted the flag for Ethernet protocols, drive technologies advanced and capacities got denser, so that today we find ourselves supporting a multitude of protocols in ever more complex environments. But is this enough of a playbook, and are these the plays you want to be running?

We are all aware of Moore’s law rate of change, so how do we keep up and adapt? We are expected to stay ahead of the game, but sometimes just trying to keep up is a struggle in itself.

To borrow a quote from a colleague, Alex Nicholson: “You date your servers but marry your storage.” This may seem clichéd, but it covers the basics: compute is transitory while data is persistent. It’s quick and easy to migrate an application between physical servers, so when the time comes you can take advantage of any new advancements in processing power and RAM speeds; yet storage migration takes time, and because of this there are policies in place so that more thought is given to the process before data is transferred.

Live Migration and Storage vMotion are two effective ways to move data when it’s a virtual workload, but what happens if it’s not? How do we move the information? The answer usually involves many meetings and POCs before the job is undertaken, which adds weeks or months to the task, and by that time the goalposts may have moved. Surely there is an easier way to do this, and one that doesn’t require dragging the data up to a server only to write it somewhere else.

Now let’s throw into the mix the ability to tier the data, i.e. boost or constrict performance. And just when you think you have all that covered, your CTO drafts a directive that you need to be “utilising the cloud more”, having read a recent Computerworld Forecast study showing that the highest-rated, single most important project IT departments are working on right now is cloud, with a KPMG cloud survey report also showing that 49% are using cloud to transform their business and drive cost efficiencies, and he doesn’t want your business left behind. What do you do?

There are a couple of methods you can follow to resolve all of this. One is to go to your current supplier, ask them to ship you several of their current boxes across their portfolio, and pray to the data gods that this sates the business’ appetite until the next megalomaniac of an application appears. But just as you wouldn’t let your insurance policy roll onto a new contract without checking the best available deal, you shouldn’t do it with IT. So another way is to research, evaluate and (one I know I’m guilty of) procrastinate; but are you getting the job done? Or are you sitting on the fence awaiting a solution to magically present itself?

Today’s application builders and managers are seen by the business as saviours of the world with a licence to kill, and they are being funded by the business to do so. With a conservative figure of 28% of businesses’ IT spend not being controlled by IT departments, these LOBs are being armed to take the fight to the “Ernst Stavro Blofeld” in their sector. So how can you help?

What if your IT investment made you more agile, by incorporating measurable functionality like storage efficiencies, high availability and non-disruptive operations, all the while giving you the flexibility to craft, create and customise a hybrid cloud on your own terms, as you envision it, for current and future demands on the infrastructure?

What if this solution had best-of-breed data protection fully integrated into a robust portfolio that not only reduced risk but cut the associated costs and improved reaction times to meet the ever more demanding SLAs stipulated by businesses?

What if this solution could incorporate all-flash storage and its benefits for key applications, yet avoid the SAN-island-in-the-data-lake scenario and be seamlessly amalgamated into the current infrastructure?

What if all of this had a measurable ROI that delivered in months not years?

This isn’t some DB10 prototype, with some “aftermarket” upgrades, this is available today and out of the box from NetApp.

And that box can be a virtual one as well. And when I say virtual I don’t just mean an on-prem hypervisor: you can also get this from your hyperscaler’s marketplace and deploy it in minutes! When you consider that the RightScale 2015 State of the Cloud report found 55% of enterprises are pursuing a hybrid cloud strategy and a further 10% a public-cloud-only strategy, this has to be the tool of choice. Not to mention that with the next version of Cloud ONTAP providing encryption, you know your data is protected.

And now here’s the clever bit: we’re not talking about creating an archipelago within an uncontrollable sea of data with a plethora of management tools. We are talking about a SINGLE OS with SINGLE management, to the point that we can now drag and drop a relationship from flash to disk to cloud. We are talking about a seamless hybrid cloud architecture while remaining totally in control of your data. This is a differentiated approach to the hybrid cloud. This is the data fabric.

Cloud may not be right for everybody. To borrow a quote from Dave Hitz: when asked “Is the cloud right for MY business?” he will reply “I. Don’t. Know.”, because every business and its needs are different. But having that ability in the bag for when you may have a need or want for it is surely going to help you sleep a bit easier at night, knowing a deployment will take you minutes, not months, if someone comes knocking.

So now you have all the bases covered. Before you know it, line-of-business managers will be coming to you with complex data management problems, and before they know it you are providing them with whatever they need, from an exploding key-fob to wrist-mounted dart guns to a jetpack. Just make sure they “do bring it back in one piece!”