Certification – More than just a tick in the box

Roughly six weeks ago I received an invitation to participate in an item development workshop to update the NetApp Certified Implementation Engineer – Data Protection Specialist exam. The premise was to take the exam from its current state and bring it up to date, as a lot has changed within NetApp ONTAP and its associated data protection methodologies over the last two years. To get a good idea of how much has changed, simply look at the ONTAP 9.1 release notes: even the data protection section talks about NVE, RAID-TEC and SnapLock, to mention just a few. So it was an honour to be invited to help update the exam and something I was very much looking forward to.

Sign greeting my arrival at RTP building one

Just over a week ago it was time to undertake this workshop, and on a lovely sunny Sunday morning I boarded a plane to NetApp’s Research Triangle Park (RTP) campus in North Carolina, USA, where the workshop was to be held. The following morning at 9am sharp we started the week-long activity of modernising the exam to the most recent GA release of ONTAP. The workshop was run by an independent company who specialise in writing certification exams. Their job was to lead the workshop, making sure we kept to the exam blueprint, pitched it at the right level of difficulty and phrased each question as directly as possible. One of the first things we covered was the difference between assessment and certification, and for those of you who may be unaware, the difference is probably twofold. For an assessment, all the information required to pass is contained within structured course material (e.g. PowerPoint slides or PDF course notes), whilst a certification draws on many different sources of information, ranging from course notes to technical reports to product documentation and even industry knowledge. The other main difference is that a certification needs to be able to stand up to any legal challenge to its content. So with that we got down to work.

With all the changes going on within the portfolio, and even within ONTAP itself, it was great to get together with nine other individuals who shared a desire to update this exam, and to see not only how they viewed the changes in ONTAP over the last two years but also the use cases and deployment models being adopted for the technology. Over the next few days we reviewed the current question pool and then set to work writing new questions. These were then assessed as a group to see if they were relevant and of the right difficulty, to name just two of the measurements we judged each question on. It was also good to see that the questions proposed for the exam were honest and fair, with no intent of trying to trick candidates.


It was both a long and rewarding week where I’m sure all in attendance learned something new. It also shone a light on the amount of work and effort NetApp puts into constructing a certification exam, because they understand the value to the candidate of the hard work of preparing for and taking the exam, and of the badge of honour that the certification represents. I have often felt that obtaining a certification shows you have a desire to know and understand the technology in question, and that you have taken the time to learn its best practices. Certifications can help differentiate you within your company or when you apply for a new role. Just to make sure I’m up to date with what is going on, I usually take an exam during NetApp Insight, mainly for my own personal benefit, but it also helps reinforce the value I have to offer the team when I am back at my day job.

Before we knew it Friday afternoon had rolled around and we had completed all the tasks required of the workshop, which means that a few weeks from now the updated exam, NS0-512, will go live. A big thank you goes out to the NetApp Certification Program for inviting me to the workshop, and also to the nine other individuals and the independent test company I had the pleasure of working with for the week. I left ready to talk SnapMirror and MetroCluster with anyone who wanted to listen. So if you get the opportunity to help write or update an exam, I would highly recommend it; and before you start to contact me for help, the answer is “yes, it’s on the exam.”

NOTE: For more information, listen to the upcoming Tech ONTAP Podcast Episode 78, NetApp Certifications NCIE, featuring the NetApp A-Team.

25 hours at CLEUR

Last week I was lucky enough to get the chance to attend Cisco Live in Berlin. I have been to this venue before, but this was my first Cisco event and I have to say I was impressed. Hosted at the Berlin Messe, it didn’t feel overly crowded, yet with over 12,000 people involved the conference barely used up a third of the 26 halls available there. My reason for attending was a FlexPod round-table hosted jointly by people from NetApp and Cisco. I was in attendance as the voice of Arrow ECS Europe, and as the UK distributor involved in the most FlexPods, I thought it was important not only to give my feedback at this event but also to hear the messaging coming directly from the vendors and pass it back to our reseller partners in the UK and to Arrow ECS.


Sadly no AAA

I attended the conference in both a virtual and a physical capacity: virtually, by reviewing the content available from the keynotes, and physically on an Explorer pass. This basically got me into everything except the breakout sessions. Being immersed in the Cisco community was a refreshing experience and one I would recommend. Even without attending sessions there is a huge amount of information to gather, not just from Cisco but also from some of their strategic partners, including, but not limited to, Veeam, F5 and Citrix.

At the round-table it was great to hear about the rate of growth from a FlexPod perspective: a partnership just over five years old, it is now an over $7 billion business and the number one integrated infrastructure. It was also great to see that they are not resting on their laurels, with a new CVD released that week covering how to deploy FlexPod Datacenter with Docker Datacenter for container management. With more in the pipeline narrowing the gap between private and hybrid clouds, I would have to say that this is a partnership with plenty left in the tank.


Ready for the DevOps community

I swung by the NetApp stand afterwards and heard about another exciting FlexPod project, the All Flash 3D FlexPod; anyone who attended the UK partner academy last June might recall a presentation on an earlier version of this project. We often talk about FlexPod being more than the sum of its constituent parts, and this is one case where that statement truly shines. Used for everything from the medical profession to 4K content creation to geological applications, this is a true monster, and I doubt we have really scratched the surface of the areas where this solution could be applicable. I would suggest checking out here for more information.

It was also great to see that a MetroCluster FlexPod was running the event. By swinging by the NOC you could see the statistics in real time, like the 20GB (yes, Gigabytes) of internet traffic flowing around the campus supported by 968 access points (they added 75 on Tuesday night to improve the experience), yet with everything going on the AFF8060 never really was taxed, as could be seen on the Grafana dashboard.


Monitoring one half of the NOC AFF FlexPod

What did hit me whilst wandering around the many halls was the vast plethora of Cisco products and how this company has evolved. I knew some of the different areas they operate in, but there is so much more than the routing and switching the business was born out of. I talked to many interesting people covering varying business units, from IoT to Digital Finance Services to business transformation. If I had had more time there is so much more I would have liked to have done: sat down for a few hours and run through a self-paced lab, attended one of the many sessions in the DevOps area, or even given 20 minutes of my time to their charitable cause, Rise Against Hunger. One thing that struck me was that this is a company whose employees understand the company’s vision; it’s like an eight-person coxed crew perfectly in time with one another, lifting their stroke rate above 32, still creating perfect puddles and yet not breaking a sweat. The slogan for the event was Your Time Is Now, and I have to say that we are definitely in a Cisco era.

NetApp plus Veeam

NetApp and Veeam have just announced a joint business special offer for EMEA leveraging key aspects of each other’s portfolios. I’ve been a keen supporter of Veeam technology ever since they made people sit up and take note by winning Best of Show at VMworld 2010, and I’m pleased to see it’s still turning heads today.

Determined not just to follow the masses and their backup applications, Veeam approach the important matter of data protection with a refreshing outlook, delivering the aptly named Veeam Availability Suite. It provides a recovery time and point objective (RTPO) of less than 15 minutes for all applications and data, which should satisfy any business that requires 24x7 operations.

5 key Capabilities of the Veeam Availability Suite

And with the inclusion in their 9.5 release of agents for Windows and Linux, an Availability Console and an Availability Orchestrator, as well as the vast number of enhancements made to the Explorers, this is a product that just keeps getting better.

The E2812 from NetApp is a SAN controller with a long heritage. Acquired from LSI in March 2011, the Engenio product line running the SANtricity operating system has sold over 1 million units, and this latest release in the family was announced by NetApp in September 2016 and started shipping shortly after. Running a new software version, SANtricity 11.30, this 2U array provides connectivity to LUNs hosted on 12 internal NL-SAS drives via FC, iSCSI or SAS, and can grow to 180 drives or 1800TB. With more than six nines of availability and Data Assurance support (T10 PI), the E2800 series is designed for small to medium-sized businesses seeking new ways to manage data growth across a range of mixed traditional workloads and third-platform applications.

NetApp E2800

The combination of Veeam and the NetApp E-Series gives you an easy-to-use availability solution with the perfect staging area for backups. With a simple, fast and scalable storage architecture and a modern disaster recovery solution for your vSphere and Hyper-V environments, this partnership will help you confidently meet today’s always-on enterprise service level objectives.

With more information on the joint promotion of this already popular pairing of products due out over the coming days (including T&Cs), this is a special offer that is going to attract a lot of attention between now and the summer months. So please speak to your channel or account manager for more information on how to purchase Veeam Availability Suite or Veeam Backup & Replication Enterprise or Enterprise Plus editions together with NetApp E-Series, and get up to 10% off Veeam.

(For more information on the benefits of using Veeam and NetApp E-series please see: https://www.veeam.com/blog/tips-and-tricks-on-leveraging-netapp-e-series-arrays-in-veeam-availability-suite-designs.html )

The A700s – Killing it in the Storage Market

On the 31st of January NetApp released a new box, the A700s, and it’s a game changer in more ways than you might know. On the same day they also released its SPC-1 benchmark results to show just how pearly white its teeth are.

Hardware

Let’s start with the physical. This All Flash Array is a 4U chassis containing both controllers and storage. NetApp has been producing integrated disk-and-controller systems for many years now, but these have always been aimed at the SMB market. The A700s is the first designed with the enterprise market in mind, and boy, what a first. It houses 24 internal SSDs, is expandable up to 216 drives per HA pair, and can scale out to 12 nodes for SAN or 24 for NAS, giving you some serious expandability options.

Each controller has four on-board 40GbE ports plus four PCIe slots for a vast array of additional functionality, including more 40GbE, perhaps 32Gb FC, or even expansion into my personal favourite, a MetroCluster.

Operating System

As expected this system runs the world’s number one branded storage operating system, ONTAP, which provides a unified SAN and NAS architecture to meet the most demanding workloads, from consolidated virtualisation and enterprise applications to design and engineering workloads, all delivered from a truly scale-out architecture. ONTAP can manage up to 24 nodes as a single system with 13PB of all-flash storage.

SPC-1

The SPC-1 benchmark to quote their website – “consists of a single workload designed to demonstrate the performance of a storage subsystem while performing the typical functions of business critical applications. Those applications are characterized by predominately random I/O operations and require both queries as well as update operations. Examples of those types of applications include OLTP, database operations, and mail server implementations.”

This benchmark is a way to allow customers to compare various storage vendors. NetApp previously undertook this benchmark back in April 2015 with the FAS8080AE, and by way of comparison to show how things have changed in just under two years, I have put some of the more relevant results into a table. (For more detail see here and check out the executive summary.)


SPC-1 Highlights

As you can see, the FAS8080AE (this was before NetApp were required to adopt Gartner’s naming scheme for it to qualify as an All Flash Array) performed admirably, giving us just over 685,000 IOPS at 1.2 milliseconds. This 8-node cluster was placed fifth in the top ten table, and by the end of 2016 it had slowly been pushed out to 8th, which is still very impressive. The most recent results, published in January 2017, show that the A700s delivered 2.4 million IOPS at roughly 0.7ms in a 12-node cluster. This huge number not only shows us the improvements in hardware that have occurred over the last couple of years but also the advances NetApp has made with the ONTAP operating system. Even if you don’t need to scale out to 12 nodes, a single HA pair can deliver over 400,000 IOPS in under 0.3ms, and when you consider you can stack it with 15.3TB SSDs, that gives you an effective capacity of a petabyte in a 4U enclosure delivering 650,000 IOPS in under a millisecond!

So what do these numbers actually mean? NetApp have reduced the physical footprint from two full racks (84RU) to just over half a rack (26RU) whilst upping the node count by 50%, yet in doing so they have greatly increased the throughput. They’ve cut the rack space to less than a third, roughly halved the latency and more than tripled the IOPS, and this isn’t even the box pushed to its max.
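For anyone who wants to sanity-check those ratios, here is a quick back-of-the-envelope comparison using the figures quoted above (a rough sketch only, since the node counts and configurations differ between the two submissions):

```python
# Figures quoted above from the two SPC-1 submissions
fas8080 = {"iops": 685_000, "latency_ms": 1.2, "rack_units": 84}    # April 2015, 8-node cluster
a700s   = {"iops": 2_400_000, "latency_ms": 0.7, "rack_units": 26}  # Jan 2017, 12-node cluster

print(f"IOPS gain:        {a700s['iops'] / fas8080['iops']:.1f}x")             # ~3.5x
print(f"Latency ratio:    {a700s['latency_ms'] / fas8080['latency_ms']:.2f}")  # ~0.58, roughly halved
print(f"Rack space ratio: {a700s['rack_units'] / fas8080['rack_units']:.2f}")  # ~0.31, under a third
```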

What’s in a name?

The S in the name has been said to represent slim, though some have said it could well stand for sexy (and what a beast it is), or sport; but unlike comparisons made to hot hatchbacks, I would say this model is more the Aston Martin DB11 Intrepid Sport in Cinnabar Orange. This V12 monster is turning heads everywhere it goes; people are noticing her before they even get the chance to set eyes on her. Dropping into 3rd place in the SPC-1 is no mean feat, but to do it with kit that occupies roughly half a rack is phenomenal!

The A700s is not trying to sneak around the corner. No, this AFA has all the capabilities we’ve come to love from ONTAP, whether that be the data efficiencies with dedupe and compaction, SnapMirror, FlexClone, per-volume encryption, secure multi-tenancy, or the ability to form part of your Data Fabric solution; the list goes on. Remember, this is an OS that keeps going from strength to strength, as we can see from the addition of FlexGroups providing massively scalable next-generation data containers (for more information see @NFSDudeAbides’ post here), and this is hardware that marries those advances in technology beautifully.

Conclusion

All I can say is that if you are in the storage market this will have made you sit up and take note, and if you’re one of the many flash start-ups this has probably got you scared. No matter how you slice it this box delivers in all respects and is a deadly addition to any environment; just like Old MacHeath’s jack-knife.

Rise of the NGA

In a previous blog, I talked about predictable performance and how it can have a huge impact on your business. In this blog, I’ll go into detail on another aspect of predictability within the SolidFire array.

But before we start, I’d like to address how we refer to SolidFire as a product. I’m not happy using the term array. SolidFire is so much more than a storage array, in the same way that a Ferrari 250 GT California is so much more than a car. As it’s designed for the next generation data centre, I think we should be referring to it as a “next generation array”, or NGA.


So let’s start by taking a look at the predictability of this NGA in terms of how it deals with failures.

One of the many reasons SolidFire does so well in the service provider space is that it can deal with a wide range of possible failure scenarios. It’s so good at this, in fact, that we even refer to it as a “self-healing” system. This means SolidFire can cope with disk and node failures, hardware upgrades and replacements, and software upgrades, ALL without downtime. The loss of a disk, controller or shelf initiates a fully automatic self-healing process which, by the way, does not affect gQOS at all (for an explanation of gQOS, see my previous post).

For those of you who may be new to the technology, SolidFire, as a Tier 0 storage technology, does not use RAID (redundant array of independent disks) protection. Instead it uses something referred to as the SolidFire Helix.

Usually deployed in a “double Helix,” this core part of the operating system provides cluster-wide RAID-less data protection while avoiding single points of failure. If a failure does occur, it “self-heals” the whole cluster and restores redundancy. What does that translate to in terms of exposure? Try less than 10 minutes for a drive failure and less than 1 hour for a node failure! Now that’s next generation.

Another distinguishing feature of SolidFire is their proactive philosophy when it comes to support. Available to every customer, Active Support is divided into three key services:

•    24/7 support with immediate access to a level 3 support engineer

•    Secure Assist: remote assistance over a secure connection

•    Active IQ: real-time telemetric data and trending analysis SaaS


Active IQ is the main attraction in the Active Support toolset. It allows you to visualise problems and challenge areas at varying levels of granularity, giving you the ability to better anticipate outcomes and take proactive measures. Receiving telemetric data at 10-second intervals, Active IQ lets you model “what if…” scenarios, accurately envisage how to maximise your investment, and perform performance modelling and historic trending with ease. You can also enable real-time, customisable alerts for what YOU want to know about. Just think of the icons on the above graphic as blades on a Swiss army knife, and you get to personalise the multi-tool.

Not only can the NGA guard against data corruption in the case of hardware failures and protect during planned downtime and upgrades, it can also balance its workload around the cluster, help you plan for the future, reduce the risk of exposure during an outage, and automatically regain redundancy to provide data availability without impacting performance.

So when you look at it the SolidFire NGA is more predictable than Nostradamus watching a Roland Emmerich film whilst listening to a metronome waiting for the sun to rise. And it already knows the answer.

Getting to grips with SolidFire

We’ve had Nike MAGs, Pepsi Max and hoverboards; now we look to the data centre of the future.

I have been doing more and more with SolidFire over the last few months, and I’ve had somewhat of a revelation about it. Around this time last year, I thought there was too much overlap with the FAS wing of the portfolio for NetApp to be pursuing an acquisition. To the uninformed, this may look true on paper, but it is completely different in practice. The more I learn about SolidFire, the more I am impressed by the decisions NetApp has made and the direction they are heading.

Hopefully you are aware of all the great benefits of using a SolidFire cluster within your environment, but for those of you who aren’t, I’ll sum it up in one word—predictable. This predictability extends to all features of the architecture including capacity, performance, overall health and healing, and scalability.

An initial 4 node SolidFire deployment

Let’s have a look at performance first. Starting with four nodes, you have 200K IOPS available. By adding more nodes to the cluster, you can grow predictably at 50K IOPS per node*. And that’s not even the best part. The real showstopper is SolidFire’s ability to provide precisely the IOPS your workload requires by assigning a policy to each volume you create. If you undertake this task via the GUI, it’s a set of three boxes that sit in the bottom half of the creation wizard asking what your minimum, maximum and burst requirements for this volume are. These three little text boxes are unobtrusive and easy to overlook, but they have a huge impact on what happens within your environment. By setting the minimum field, you are effectively guaranteeing the quality of service that volume gets. Think about it: “guaranteed QOS” (gQOS, if you like). That little g added to an acronym we have used for years is a small appendage with massive importance.

Volume Creation wizard
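If you would rather script it than click through the wizard, the same three values can be passed in the qos object of a CreateVolume call against the Element API. The sketch below is illustrative only: the MVIP address, credentials, account ID and IOPS figures are placeholder values, and the parameter names are worth checking against the Element API guide for your Element OS version.

```python
import requests

# Placeholder values - substitute your own cluster MVIP, admin credentials and account ID
MVIP = "10.0.0.50"
AUTH = ("admin", "password")
URL = f"https://{MVIP}/json-rpc/9.0"        # Element API is JSON-RPC over HTTPS

payload = {
    "method": "CreateVolume",
    "params": {
        "name": "sql-data-01",
        "accountID": 1,
        "totalSize": 1 * 1024 ** 4,         # 1 TiB, expressed in bytes
        "enable512e": True,
        "qos": {                            # the three little boxes from the wizard
            "minIOPS": 5000,                # guaranteed floor - the 'g' in gQOS
            "maxIOPS": 15000,               # sustained ceiling
            "burstIOPS": 20000,             # short-term burst allowance
        },
    },
    "id": 1,
}

# verify=False only because lab clusters typically run self-signed certificates
response = requests.post(URL, json=payload, auth=AUTH, verify=False)
response.raise_for_status()
print("Created volume ID:", response.json()["result"]["volumeID"])
```

The same qos object can be applied to an existing volume later on, so the guarantee isn’t a one-shot decision made at creation time.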

Most other vendors in the IT industry will say that the use of QOS is merely a Band-Aid, a reactive measure, until you can fix the issue that has caused a workload to be starved or bullied. This requires you to carry out some manual intervention, not to mention dealing with the repercussions of letting things escalate to the point where they have already had a negative impact on the business.

We need to change from this reactive methodology. Let’s start by lifting the term “quality of service” out of its drab connotations and giving it a coiffured beard, skinny jeans, and a double macchiato. Let’s add a “g” to this aging acronym and turn that hipster loose on the world. gQOS is the millennial in the workplace, delivering a twenty-first-century impact on the tasks and procedures that have been stuck in a rut for years. When you hear someone use QOS, ask, “Don’t you mean gQOS?” Then walk away in disgust when they look at you blankly.

With SolidFire you are able to allocate performance independently of capacity, in real time, without impacting other workloads. What does this mean, you may ask? No more noisy neighbours influencing the rest of the system. gQOS addresses the issue of shared resources and allows you to provide fool-proof SLAs back to the business, something sought both by enterprise organisations looking to undergo transformational change and by service providers with hundreds of customers on a single shared platform.

gQOS in action

So let’s start positively promoting gQOS, because if it’s not guaranteed, can we really call it quality? If I were in the tagline-writing business, this area of the NetApp portfolio would read something like “SolidFire Predictability Guaranteed.”

*The SF19210 adds 100K per node.

Grays Sports Almanac image courtesy of Firebox.com

Painting a Vanilla Sky

Expanding the NetApp Hybrid Cloud

During the first general session at NetApp Insight 2016 in Las Vegas, George Kurian, CEO (and a fascinating person to listen to), stated that “NetApp are the fastest growing SAN vendor and are also the fastest growing all-flash array vendor.” This is superb news for any hardware company, but for NetApp, this isn’t enough. He is currently leading the company’s transformation into one that serves you, the customer, in this new era of IT while addressing how you want to buy and consume IT. NetApp are addressing this with the Data Fabric.

If you need a better understanding of the Data Fabric, I would strongly suggest you look at this great two-part post from @TechStringy (part 1 here and part 2 here).

Back in 2001, Cameron Crowe released a film starring Tom Cruise called “Vanilla Sky.” In it, the main protagonist suffers a series of unfortunate events and, rather than face up to them, decides to have himself put in stasis until those problems can be resolved. Well, if managing data within varying cloud scenarios was his problem, then the announcements made by NetApp earlier this week mean he could be brought back to stop avoiding the issues. So let’s take a look at some of what was announced:

NetApp Cloud Sync: This is a service offering that moves and continuously syncs data between on-prem and S3 cloud storage. For those of you who attended this year’s Insight in Las Vegas, this was the intriguing demo given by Joe CaraDonna illustrating how NASA is interacting with the Mars rover Curiosity. Joe showed how information flows back to Earth via “JPL … the hub of mankind’s only intergalactic network,” all in an automated, validated and predictably secure manner, and how they can realise great value from that data. Cloud Sync not only allows you to move huge amounts of data quickly into the cloud, but it also gives you the ability to utilise the elastic compute of AWS, which is great if you are looking to carry out some CPU-intensive workloads like MapReduce. If you are interested in what you have read or seen so far, head over here, where you can take advantage of the 30-day free trial.

Data Fabric Solution for Cloud Backup (ONTAP to AltaVault to Cloud): For those of you who saw the presentation at Insight 2015, this is the backing up of FAS via AltaVault using SnapCenter. This interaction of portfolio items gives us the ability to provide end-to-end backups of NAS data while enabling single-file restores via the snapshot catalogue function. The service has a tonne of built-in policies to choose from; simply drag and drop items to get it configured. AltaVault also now has the ability to help with seeding your backup via an AWS Snowball device (or up to ten daisy-chained together as a single seeding target), so it’s never been easier to get your data into, and manage it in, the cloud.

NetApp Cloud Control for Microsoft Office 365: This tool extends data protection, security, and compliance to your Office 365 environment to protect you from cyber-attacks and breaches in the cloud. It allows you to back up your Exchange, SharePoint and OneDrive for Business data and vault a copy to another location, which could be an on-prem, nearby, or cloud environment, depending on your disaster recovery and business continuity policies. This is a great extension of the Data Fabric message, as we can now utilise FAS, ONTAP Cloud, AltaVault and StorageGRID as backup targets for production environments running wherever you deem appropriate at that point in time.

NetApp Private Storage for Cloud: For customers who are after an OPEX model and see the previous NetApp Private Storage route as an inhibitor to this (because they need to source everything themselves), this is where NPS-as-a-Service comes into its own. It gives customers the ability to approach a single source and acquire what they need to provide an NPS resource back to their company. A solution offering for NPS for Cloud is currently offered by Arrow ECS in the U.S. and is coming to Europe soon. This offering helps you create a mesh between storage systems and various clouds, giving you the ability to control where your data resides while providing the level of performance you want to the cloud compute of your choice.

ONTAP Cloud for Microsoft Azure: This is the second software-only data management IaaS offering for hyperscalers to be added to the NetApp portfolio. ONTAP Cloud gives customers the ability to apply all that lovely data management functionality that has drawn people to NetApp FAS for years, layered on top of blob storage from your cloud provider. You get the great storage efficiencies and multi-protocol support with the ease of “drag and drop,” and you can manage replication to and from this software-defined storage appliance, with the ability to encrypt the data whilst it resides in the cloud. This service has a variety of use cases, from providing software development or production with storage controls to utilising it as a disaster recovery entity.

So if we look at an overview of the Data Fabric now, we can see the ability to move data around dependent on business requirements.

During his presentation at Insight 2016 George Kurian also said, “Every one of NetApp’s competitors is constructing the next data silo, or prison, from which data cannot escape.” Hopefully, by implementing the Data Fabric, NetApp customers can confidently build an IT business model which facilitates the flow of information within their organisation, so that they can grow and adapt to meet their ever-changing IT needs.

The Data Fabric is the data management architecture for the next era of IT, and NetApp intend to lead that era. With this recent enhancement of the Data Fabric and NetApp’s portfolio, there is no more need to be shouting “Tech Support!” Instead, we can all be Monet and paint a beautiful Vanilla Sky.

Hindsight from Insight

NetApp Insight Las Vegas 2016 Roundup

I was lucky enough to get to go to Las Vegas with the NetApp A-Team and attend the NetApp Insight Americas and APAC conference. I have attended Insight EMEA many times, but this was my first time attending it on US soil.

I would be remiss if I did not mention that both the Vegas and Berlin events have the same number of high-quality breakout sessions. As expected, the majority of the sessions offered in Vegas are re-offered in Berlin. The organisation of the conference is the same, with things like Insight Central consisting of NetApp partners and vendor showcases. From that standpoint, it felt like I could very well have been at the EMEA conference. There is also a high number of NetApp technical employees on hand to debate different deployment methodologies, which is a great reason in itself to attend.

However, Vegas did seem a lot more relaxed, and despite having over twice as many attendees it somehow felt quieter due to the size of the conference centre. There’s also a lot more going on in the evenings (even just within the Mandalay Bay Hotel, never mind the rest of Vegas), with lots of opportunities for delegates to mingle and converse.

At this year’s conference, NetApp announced 16 new products! This is a huge amount for any company, and I think it just goes to show how NetApp are trying to stay at the leading edge of the storage industry. There were disk shelves and controllers announced, and if you would like to know more about the new controllers, see my previous post here. There was also an update to ONTAP Select as well as the arrival of ONTAP Cloud for Azure, all made possible by the release of ONTAP 9.1. There was a lot of messaging in both the general sessions and in the breakouts geared towards DevOps and this new way of deploying applications either on premises or in the cloud.

This year we also had the joy of SolidFire joining in, and with a raft of sessions available, this technology did prove popular. The two-hour deep dive by Andy Roberts was the third-most attended session of the conference, and the SolidFire Hands-On Lab was the third-most requested. They also announced the integration of SolidFire into FlexPod, which my A-Team colleague Melissa Wright (@vmiss33) coined the “DevOps workhorse.” It is a perfect tag line, and one I am going to start to use.

NetApp Insight also gives you the opportunity to take NetApp certification exams, so I thought I should try some. I passed two exams whilst there: the updated FlexPod Design exam (NS0-170) and the new Hybrid Cloud exam (NS0-146), which gave me the NCSA accreditation. These came with some lovely luggage tags, courtesy of Liz Burns from NetApp University, to add to the certificates I already held. This is a great way to provide value back to your employer for attending, if you need a stronger reason to attend. It’s best to book your exam before you get there, as it can be very busy and you may have to wait around for a while for a walk-in appointment.

A nice colourful collection

If you are new to SolidFire and want to understand how it’s managed, the two-hour deep dive mentioned earlier is a great place to start. It’s a great mix of slideware and demonstration on how to configure various key features of the Element OS. I would also recommend Val Bercovici’s (@valb00) “Why DevOps will move to the ‘lean’ cloud” break out. This session will help you understand the shift in application development and what you can do to try and keep pace and remain relevant.

NetApp now seem to be pivoting towards messaging that helps the developer and the DevOps team, providing products and tools that will integrate into their style of working. Below is the link to the scenario covered in the general session on the third day. I think it provides good insight into how the pace of application development is changing, the tools that this new breed of developer is adopting and using, and how seriously NetApp is taking this methodology (as evidenced by the fact that they have a site with a host of tools and scripts aimed purely at DevOps). The link to the scenario acted out on stage during the general session is embedded in the picture below.

I would also recommend looking into the sessions on VMware’s vVols functionality. They’re a great primer on this area of VMware’s evolving portfolio, and they also show how NetApp can utilise this ever-improving technology. Andy Banta (@andybanta, who wrote an insightful blog on the topic and appeared on GreyBeards on Storage Ep. 36) and Josh (‘the intern’) Atwell (@Josh_Atwell) gave a joint session on how SolidFire differs from conventional storage arrays in its implementation and how best to utilise policy-based storage with SolidFire. Then there was Andreas Engel from NetApp and Pete Flecha (@vPedroArrow) from VMware, who provided a deploy, implement, and troubleshoot session which was almost as popular as Pete’s session at VMworld. It illustrated some handy tips, tricks, and gotchas that a lot of the audience then took with them as they headed to the Hands-On Labs to get up to speed with vVols. I would also keep an eye out for the Inform and Delight sessions, including a great one by Veeam on “Closing the Door on the Data Center Availability Gap.” And let’s not forget the “Dave and Dave show,” which is a must-see attraction.

Also attending NetApp Insight for the first time in Vegas this year was vBrownBag. Their online presence has been helping IT professionals become more proficient with virtualisation for the past six years and is a must port of call for anyone chasing a VCP or other certification, due to the wealth of knowledge on their site. They were there to expand their ever-increasing field of topics, and one of the presentations recorded featured Sam Moulton (@SamMoulton), Champion of the NetApp A-Team (@NetAppATeam), with A-Team member Trey Davis (@ntap_seal), Senior Consultant from iVision in Atlanta, providing some insight into the NetApp A-Team and what we do. This short discussion (embedded within the picture) will hopefully help people understand the team better and where we fit within the ecosystem.

For more information on the A-Team’s presence in Las Vegas this year, check out the session called “Birds of a Feather: Walk the line with the A-Team” which is hopefully on the site for review. There will be a strong presence in Berlin, so come up and talk to us or send us a tweet.

One of the highlights during the opening of the third general session was the reel put together from the Carpool Karaoke. I would urge you to have a look and a laugh.

This was a great conference with a phenomenal amount of superb content, too much to take on board in the four days, but I will enjoy reviewing it over the next few weeks. I am thankful to my employer for letting me attend, and I now feel invigorated and more confident to go out, have discussions, and point out why customers should be looking at NetApp for their cloud, hybrid, and on-premises storage needs. If you are heading to Berlin, then I will hopefully see you there.


The NetApp, They Are A-Changin’

 

A lot of people criticise NetApp for not moving with the times. Some of the newer start-ups like to claim that NetApp is a legacy company not in touch with today’s marketplace. Yet we all know the company has a rich and deep heritage spanning nearly a quarter of a century, with over 20 of those years spent on the NASDAQ, so they must be doing something right.

They also like to say NetApp are not in touch with today’s data centre requirements. I would question that. Today NetApp launches the start of a whole new line for the FAS and All Flash FAS side of the portfolio. They have announced three new FAS models: the FAS2600, the FAS8200, and the FAS9000, and on the all-flash side another two new models. These systems are designed with the data centre of the future in mind, and these enterprise products again deliver an industry first (NetApp were the first to support 15.3TB SSD drives), along with next-generation networking in the form of 40GbE and 32Gb FC.

The FAS9000 is the new flagship of the line and introduces a new modular design, similar to what we have seen Cisco adopt to great success in the UCS line. This system has 10 PCIe slots per controller which, combined with either of the next-gen networking options previously mentioned, gives HUGE amounts of bandwidth to either flash or NL-SAS drives. It also has a dedicated slot for an NVMe SSD to help with read caching (aka Flash Cache) for those workloads that benefit from a read boost, and the ability to swap out the NVRAM and controller modules separately, which allows for expansion upgrades in the years to come. Here are some of the numbers associated with the FAS9000: it can scale up to 14PB (petabytes) per high-availability (HA) pair, or up to 172PB for a 24-node (12 HA pair) cluster in a NAS environment. Yes, that’s up to 172PB of flash storage managed as a single entity!!

They also announced the arrival of the FAS8200, the new workhorse for enterprise workloads, delivering six nines or greater of availability. It carries 256GB of RAM, equivalent to today’s FAS8080 and four times what’s found in a FAS8040, with 1TB of NVMe M.2 Flash Cache as standard (which frees up a PCIe slot), and can scale to 48TB of flash per HA pair when combined with Flash Pool technology. The FAS8200 also has 4x UTA2 and 2x 10GBase-T ports on board. This system is ready to go, and if you need to add 40GbE or 32Gb FC, this chassis will support the addition of those via cards. This 3U chassis will support up to 4.8PB and can scale out to 57PB, meeting any multi-protocol or multi-application workload requirements.

Another new member of the FAS family is the FAS2600, which replaces the ever-popular FAS2500 series. For this market space, disk and controllers contained within the same chassis are prevalent, and the trend that started with the original FAS2000 (maybe even the good ole StoreVault) is still here today, with the FAS2600 offering similar options to the FAS2500 but now with SAS3 support. We have the FAS2620, which supports large form factor drives, whilst the FAS2650 supports the smaller variants. Something new to the FAS2000 series is the inclusion of Flash Cache, and the FAS2600 has received the gift of NVMe with 1TB standard per HA pair. Changes to the networking have also been made. No longer do we have dedicated GbE ports; instead they have been changed to 10GbE, which are used for cluster interconnects, scaling up to 8 nodes in this range, and all 4 UTA2 ports can now be used for data connectivity. And if you still require 1GbE, it can be achieved via SFPs for these UTA2 ports (X6567-R6 for optical and X6568-R6 for RJ45).

NetApp, a company that, for some, may not be known for its flash portfolio, yet has sold north of 575PB of the stuff, has also announced two new controllers for the All-Flash Array (AFA) space: the A300 and the A700. These systems are designed purely for flash media, and it shows, with the A300 supporting 256GB of RAM whilst the A700 runs with a terabyte of RAM (1024GB)! This huge jump will allow for a lot more processing from the 40Gb and 32Gb networks whilst still delivering sub-millisecond response times. For this ultra-low latency we are looking at products like the Brocade X6 director for FC or Cisco’s 3132Q-V for Ethernet to meet these ever-increasing demands.

These new systems will support the world’s number one storage OS, ONTAP, version 9.1 and beyond, with this new release also announced today. ONTAP 9.1 in itself has some improvements over previous versions. We have seen some major boosts to performance, especially in the SME space, with the FAS2600 gaining a 200% performance improvement over the previous generation, and the FAS8200 and FAS9000 coming in about 50% better than their predecessors. The new stellar performer in the AFA space is the A700. This new AFA has been reported to handle practically double the workload of an AFF8080 running an Oracle database, which is another huge leap in performance.

There are a couple of other nice new features in ONTAP 9.1 which I will mention here but won’t go into too much detail on. The first is FlexGroups: a single namespace spanning multiple controllers and scaling all the way to 20PB or 400 billion files (think Infinite Volumes but done a lot better). Then there’s cloud tiering: the ability of an AFA to utilise an S3 object store for its cold data; now that’s H. O. T. HOT! ONTAP 9.1 also brings us volume-level encryption, which will work with any type of drive and only encrypt the data that needs it. The Data Fabric also gets an upgrade with the inclusion of ONTAP Cloud for Azure, which has been a while behind the cloud version for AWS but is worth the wait. And finally, with the enterprise products running ONTAP 9.1, we also get the ability to scale to 12 nodes within a single SAN cluster (that’s the ability to add another 4 nodes).

On another note, NetApp launched another new box just a couple of weeks ago: the new E2800, sporting SANtricity OS 11.30, also available in AFA variants and delivering over 300,000 IOPS in a box designed for small and mid-sized businesses, which, like the SolidFire side of the portfolio, should not be overlooked if it meets all of your requirements.

So come gather round, people, writers and critics alike, and take a good look. I think we can safely say that NetApp is a-keeping itself in the game and delivering platforms that go beyond tomorrow’s requirements.

But the big question everyone wants to know is, “What does it look like?” For the answer to that, you should be at NetApp Insight!

ONTAP 9 – A new flavour with plenty of features

 

Name change

NetApp recently announced the upcoming release of the flagship operating system for their FAS and AFF product lines. ONTAP 9, as you can glean from the name, is the ninth iteration of this OS, which, like a fine wine, keeps getting better with age. Some of you will also have noticed the simplification of the name: no more “clustered”, no more “data”, just simply ONTAP. The reality is that clustering is the standard way to deploy controllers which store data, so it’s not really necessary to repeat that in the name, a bit like Ikea telling you the things you can put inside a Kullen (or a Hemnes or Trysil, which are all improvements over the Hurdal). But the most important thing about this change is the numeral at the end: 9. This is the next major release of the operating system, providing all the features that were available in 7-Mode and also so much more.

So now that we have got that out of the way, let’s see what else has changed….

New features

Let’s take a quick look at some of the new features; so grab a pen (or, for the millennials, your phone camera):

  • Firstly, I should mention that you can now get ONTAP in three different varieties depending on your use case: the appliance-based version, ONTAP; the hyperscaler version, ONTAP Cloud; and the software-only version, ONTAP Select. This should allow for management of data wherever it exists.
  • SnapLock – Yes, the feature that everybody wanted to know the whereabouts of when comparing cDOT with Data ONTAP 7-Mode, yet which less than 5% of systems worldwide actually used (according to ASUP), is back: WORM functionality to meet retention and compliance requirements.
  • Compaction – A storage efficiency technology that, when combined with NetApp’s inline deduplication and compression, allows you to fit even more into each storage block. More on this technology in a later post.
  • MetroCluster – The ability to scale out to up to 8 nodes. We can now have 1, 2 or 4 nodes per site as supported configurations. NetApp have also added the ability to have non-mirrored aggregates on a MetroCluster.
  • On-board key manager – Removes the need for an off-box key manager system when encrypting data.
  • Windows workgroups – Another feature making a return is the ability to set up a CIFS/SMB workgroup, so we no longer need an Active Directory infrastructure to carry out simple file sharing.
  • RAID-TEC – Triple erasure coding, expanding on the protection provided by RAID-DP. It allows us to add triple parity support to our RAID groups, and this technology is going to be crucial as we expand to SATA drives in excess of 8TB and SSDs beyond 16TB.
  • 15TB SSD support – Yes, you read that right: NetApp are one of, if not the, first major storage vendor to bring 15.3TB SSDs to market. We can utilise these with an AFF8080, giving you 1PB of guaranteed effective capacity in a 2U disk shelf (see the quick sums after this list)!!! To continue that train of thought, that is roughly 367TB of raw SSD in a single shelf. This will radically change the way people think about and design the datacentres of the future. By shrinking the required hardware footprint we in turn reduce the power and cooling requirements, lowering the overall OPEX for the datacentres of the future; this will lead to a hugely reduced timeframe for return on investment on this technology, which in turn will drive adoption.
  • AFF deployments – With ONTAP 9, NetApp are introducing the ability to rapidly deploy applications to use the storage within 10 minutes via one simple input screen, with the wizard following all the best practices for the selected application.
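As referenced in the 15TB SSD bullet above, here is the quick arithmetic behind the 1PB-in-2U claim, a rough sketch assuming a fully populated 24-drive 2U shelf:

```python
# Rough numbers behind the "1PB guaranteed effective capacity in a 2U shelf" claim above
drives_per_shelf = 24          # assumes a fully populated 2U shelf
drive_size_tb = 15.3

raw_tb = drives_per_shelf * drive_size_tb       # ~367 TB raw per shelf
effective_tb = 1000                             # the quoted guaranteed effective capacity

print(f"Raw capacity per shelf:   {raw_tb:.0f} TB")
print(f"Implied efficiency ratio: {effective_tb / raw_tb:.1f} : 1")  # ~2.7:1 from dedupe, compression and compaction
```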

Upgrade concerns

One of the worries people previously had with regard to NetApp FAS systems was how to upgrade the OS to a new version across your environment, especially if you had systems at both primary and DR sites.

Version-independent SnapMirror, which arrived with 8.3, is great if you have a complex system of bidirectional, water-falling relationships, as planning an upgrade prior to this needed an A1-sized PERT chart. Now that NetApp allow for an automated rolling upgrade around a cluster, for those customers who have gone for a scale-out approach to tackling their storage requirements (and I salute you on your choice) it’s the same steps whether you have 2 or 24 controllers. Today you can undertake a complete cluster upgrade with three commands, which is such a slick process; heck, you can even call the API from within a PowerShell script.
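To give a flavour of how slick that is, here is a rough sketch of driving those three commands over SSH from Python. The cluster address, credentials and image URL are placeholders, in practice the final update command prompts for confirmation, and the PowerShell Toolkit or the NetApp Manageability SDK are equally valid routes:

```python
import paramiko

# Placeholder values - substitute your cluster management LIF, credentials and web server
HOST, USER, PASSWORD = "cluster1.example.com", "admin", "password"
IMAGE_URL = "http://webserver.example.com/images/ontap_9.1_image.tgz"
TARGET_VERSION = "9.1"

# The three commands behind a complete automated non-disruptive cluster upgrade
commands = [
    f"cluster image package get -url {IMAGE_URL}",         # stage the software package
    f"cluster image validate -version {TARGET_VERSION}",   # run the pre-upgrade checks
    f"cluster image update -version {TARGET_VERSION}",     # rolling update around the cluster
]

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(HOST, username=USER, password=PASSWORD)

for cmd in commands:
    print(f"> {cmd}")
    stdin, stdout, stderr = ssh.exec_command(cmd)
    print(stdout.read().decode(), stderr.read().decode())

ssh.close()
```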

How does it look?

If you look below, I have a few screenshots showing some of the new interface, including the new performance statistics that OnCommand System Manager can now display.

Notice the new menu along the top. This helps to make moving around a lot easier.

Here we can see some of the performance figures for a cluster. As this is a sim I didn’t really drive much IO at it, but it will be very useful in production, giving you insight into how your cluster is performing at 15-second intervals.

Another nice feature of the latest release is the search ability, which I think will come into its own in larger multi-protocol installations of several PB, helping you home in on the resource you are after more quickly.

First impressions

For this article I am using a version in a lab environment, and from its slick new graphical interface (see above) to the huge leaps made under the covers, this OS keeps getting stronger. The GUI is fast to load even on a sim, the wizards are methodical, and the layout is intuitive; once you start using this and have to jump back onto an 8.x version, as I did, you will appreciate the subtle differences and refinements that have gone into ONTAP 9.

Overall takeaways

With the advent of ONTAP 9, NetApp have also announced a six-month cadence for future releases, making it easier to plan for upgrades and improvements, which is good news for those shops who like to stay at the forefront of technology. The inclusion of the features above and the advancements made under the covers should hopefully illustrate that NetApp is not a company that rests on its laurels but one that strives for innovation. The ability to keep adding more and more features yet make the product simpler to manage, monitor and understand is a remarkable trait, and with this new major software release we get a great understanding of what the company hopes to achieve in the coming years.

This is also an exciting upgrade for the Data Fabric. As mentioned above, ONTAP 9 is now available in three separate variants: ONTAP, engineered for FAS and AFF; ONTAP Select for software-defined storage, currently running on top of vSphere or KVM; and ONTAP Cloud, running in AWS and soon Azure. Businesses can now take even greater control of their data as they move to a bimodal method of IT deployment. As more and more people move to a hybrid, multi-cloud model, we will see them adopting these three options in varying amounts to provide the data management and functionality they require. As companies mix all three variations we get what I like to call the Neapolitan Effect (probably one of the best of all ice-cream flavours) in their storage strategy, delivering the very best data storage and management wherever needed, thanks to the ability of ONTAP to run simply anywhere.

So go out and download a copy today!