NetApp – Not Just Surviving but Thriving

When you’re a company that has been around for over 25 years, some people might look at you like a dinosaur, slowly plodding along as the end of the world as you know it approaches. A lot of press in recent years has painted NetApp in this light. Some have said that NetApp has just been plodding along, out of touch with the industry and its customers’ needs.

Yet in the last 6 months, this “dinosaur” has started to show its teeth. The stock price has gone from $37.43 in September to a high of $71.41, and with the announcements made yesterday, you can expect that to go higher.

With the newly announced AFF A800, NetApp is now able to provide sub 200 µs latency for workloads that have the most demanding data needs. That’s an order of magnitude better than previous generations!

Not only is the AFF A800 blazingly fast, it can handle huge amounts of traffic, with 25GB/s of throughput on an HA pair and the ability to run NVMe end to end from the server to the storage via NVMe over FC. If 32Gb or 16Gb FC isn’t a requirement, you can use Ethernet at 100GbE, another industry first from NetApp. With 12 pairs clustered together, you are talking 300GB/s of throughput in a single management domain. That should meet the most demanding environments.

With a current run rate of $2.0B for their all-flash business, having already shipped over 20PB of NVMe, and with 44% year-on-year growth in petabytes shipped, NetApp’s flash business is not only going to increase in size in the future; with numbers like this, it will survive any extinction event.

But the announcements made yesterday are not just about end-to-end NVMe-accelerated performance. There were also more advanced cloud integration messages.

NetApp’s cloud strategy is geared towards enabling customers to deliver business outcomes for all IT workloads in cloud, multi-cloud, and hybrid cloud environments. To do this, you must modernise your data management from the edge, to the core, and to the cloud.

FabricPool is just one of the features designed to help you do just that. FabricPool enables automatic tiering of cold data, which means you can purchase a smaller system or achieve an even higher level of consolidation on a single box. With the release of ONTAP 9.4, FabricPool has been improved to allow Azure as a capacity tier and ONTAP Select as a performance tier. It can now also tier from the active primary data set, which is something I am looking forward to testing soon.

So when you look at these and other announcements that NetApp made yesterday, if they are a “dinosaur,” I would put them in the meat-eating Velociraptor camp. And that’s one dinosaur you do not want to take your eye off.


Setting up FabricPool

Recently, I was lucky enough to get the chance to spend a bit of time configuring FabricPool on a NetApp AFF A300. FabricPool is a feature that was introduced with ONTAP 9.2 that gives you the ability to utilise an S3 bucket as an extension of an all-flash aggregate. It is categorised as a storage tier, but it also has some interesting features. You can add a storage bucket from either AWS’s S3 service or from NetApp’s StorageGRID Webscale (SGWS) content repository. An aggregate can only be connected to one bucket at a time, but one bucket can serve multiple aggregates. Just remember that once an aggregate is attached to an S3 bucket it cannot be detached.

This functionality doesn’t just apply across the whole of the aggregate; it is configured more granularly, drawing from the heritage of technologies like Flash Cache and Flash Pool. You assign a policy to each volume that controls how it utilises this new feature. A volume can have one of three policies: Snapshot-only, the default, allows cold data to be tiered off the performance tier (flash) to the capacity tier (S3); None, where no data is tiered; or Backup, which transfers all the user data within a data protection volume to the bucket. Cold data is user data in snapshot copies that has not existed in the active file system for more than 48 hours. A volume’s tiering policy can be changed at any time while it exists within a FabricPool aggregate, and you can assign a policy to a volume as it is being moved into a FabricPool aggregate (if you don’t want the default).

AFF systems come with a 10TB FabricPool license for using AWS S3. Additional capacity can be purchased as required and applied to all nodes within the cluster. If you want to use SGWS, no license is required. With this release, there are also some limitations as to which features and functionality you can use in conjunction with FabricPool. FlexArray, FlexGroup, MetroCluster, SnapLock, ONTAP Select, SyncMirror, SVM DR, Infinite Volumes, NDMP SMTape or dump backups, and the Auto Balance functionality are not supported.

FabricPool Setup

There is some pre-deployment work that needs to be done in AWS to enable FabricPool to tier to an AWS S3 bucket.

First, set up the S3 bucket.

Next, set up a user account that can connect to the bucket.

Make sure to save the credentials; otherwise you will need to generate a new access key, as the secret key cannot be retrieved again.
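If you prefer to script this pre-deployment work rather than click through the AWS console, here is a minimal sketch of one way to do it with boto3. It is only a rough outline under my own assumptions: the bucket name, user name and region are made-up placeholders, and the example policy is deliberately broad, so tighten it to suit your security requirements.

```python
import json
import boto3

REGION = "eu-west-1"        # assumption: pick the region closest to your cluster
BUCKET = "fabricpool-demo"  # hypothetical bucket name
USER = "fabricpool-svc"     # hypothetical IAM user for ONTAP to authenticate as

s3 = boto3.client("s3", region_name=REGION)
iam = boto3.client("iam")

# 1. Create the bucket (outside us-east-1 a LocationConstraint is required)
s3.create_bucket(
    Bucket=BUCKET,
    CreateBucketConfiguration={"LocationConstraint": REGION},
)

# 2. Create a dedicated user for FabricPool to connect with
iam.create_user(UserName=USER)

# 3. Grant the user access to this bucket only (broad example policy; scope it down for production)
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:*",
        "Resource": [
            f"arn:aws:s3:::{BUCKET}",
            f"arn:aws:s3:::{BUCKET}/*",
        ],
    }],
}
iam.put_user_policy(
    UserName=USER,
    PolicyName="fabricpool-bucket-access",
    PolicyDocument=json.dumps(policy),
)

# 4. Generate the access key pair; record the secret now, it cannot be retrieved again later
key = iam.create_access_key(UserName=USER)["AccessKey"]
print("Access key ID:", key["AccessKeyId"])
print("Secret key:   ", key["SecretAccessKey"])
```

The key ID and secret printed at the end are what you will later enter when attaching the capacity tier.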

Finally, make sure you have set up an intercluster LIF on a 10GbE port for the AFF to communicate to the cloud.

Now, it’s FabricPool time!

Install the NetApp License File (NLF) required to allow FabricPool to utilise AWS.

Now you’ll do the actual configuration of FabricPool. This is done on the aggregate via the Storage Tiers submenu item in the ONTAP 9.3 System Manager, as shown below. Click Add External Capacity Tier.

Next, you need to populate the fields relating to the S3 bucket with the access key ID, secret key, and bucket name from the setup above.

Set up the volumes if required. As you can see, the default of Snapshot-Only is active on the four volumes. You can, if you want, select an individual volume or a group of volumes and alter the policy in a single bulk operation via the dropdown button at the top of the volumes table.

Hit Save. If your routes to the outside world are configured correctly, then you are finished!

You will probably want to monitor the space savings and tiering, and you can see from this image that the external capacity tier is showing up under Add-on Features Enabled (as this is just after setup, the information is still populating).

There you have it! You have successfully added a capacity tier to an AFF system. If the aggregate is over 50% full (otherwise why would you want to tier it off?), then after 48 hours of no activity on snapshot data, it will start to tier out to the cloud. I have shown the steps here via the System Manager GUI, but it is also possible to complete this process via the CLI and probably even via API calls, though I have yet to look into this.
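For anyone who does want to try the CLI route, below is a minimal sketch of how I would expect it to look, driving the ONTAP command line over SSH with Python’s paramiko library. The cluster address, aggregate, SVM, volume and object-store names are all hypothetical, and the command syntax is based on my reading of the ONTAP 9.2/9.3 documentation, so verify it against the command reference for your release before running anything.

```python
import paramiko

# Hypothetical names; substitute your own cluster, object store, aggregate, SVM and volume
CLUSTER = "cluster1-mgmt.lab.local"
STORE = "aws_s3_store"
AGGR = "aggr1_node01"

# ONTAP CLI syntax as I understand it for 9.2/9.3; option names such as
# -secret-password may differ between releases, so check before running.
commands = [
    # Define the external capacity tier (bucket plus credentials)
    f"storage aggregate object-store config create -object-store-name {STORE} "
    "-provider-type AWS_S3 -server s3.amazonaws.com -container-name fabricpool-demo "
    "-access-key AKIAXXXXXXXXXXXXXXXX -secret-password <secret-key>",
    # Attach the object store to the all-flash aggregate (remember: this cannot be undone)
    f"storage aggregate object-store attach -aggregate {AGGR} -object-store-name {STORE}",
    # Set the tiering policy on a volume (snapshot-only is the default anyway)
    "volume modify -vserver svm1 -volume vol1 -tiering-policy snapshot-only",
    # Confirm the attachment
    "storage aggregate object-store show",
]

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(CLUSTER, username="admin", password="********")

for cmd in commands:
    _, stdout, stderr = ssh.exec_command(cmd)
    print(stdout.read().decode(), stderr.read().decode())

ssh.close()
```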

One thing to note is that whilst this is a great way to get more out of an AFF investment, this is a tiering process, and your data should still be backed up, as the metadata stays on the performance tier (remember the 3-2-1 rule). So, when you are next proposing an AFF or an all-flash aggregate on an ONTAP 9.2 or later cluster, consider using this pretty neat feature to get even more capacity out of your storage system, or what I now like to call your data fabric platform.

Casting Our Eye Over HCI

My previous blog post HCI – Hero From Day Zero discusses my initial findings and setup of NetApp’s Next Generation HCI solution. After reflecting on these for a while and chatting with my colleague @WelshMatador I have put together several videos around NetApp HCI where we take our conversation on air for your viewing pleasure.

In the first of our videos we tackle some of the HCI out of the box basics such as “What cabling does a NetApp HCI installation require?” or “How should I do this?”

In part two we look at the very slick NetApp Deployment Engine (NDE) and discuss initial setup. Part three looks at growing your environment and the process involved.

Over the next couple of weeks, we will add more videos covering different aspects of NetApp’s HCI platform so please check back soon as this page will be updated.


And if you haven’t got the bandwidth to stream the above, here’s a nice close-up of the system used for the videos.

HCI platform

HCI – Hero From Day Zero

After a great reception at NetApp Insight 2017 (it was so good that actual orders pushed back our demo system), and thanks to NetApp, I have finally got my hands on their exciting new portfolio product.

First Impressions

We received an 8-node setup, 4 storage and 4 compute, which turned up as a pallet of IT equipment. That was a little unexpected at first, but upon review, it does mean that the hardware is a lot more manageable to get from the box into the rack. It all comes nicely packaged in NetApp branded cartons, and the storage nodes have the disks individually packaged for adding into the chassis.


So, upon first inspection of the blades/nodes, I can see NetApp have partnered with a hardware vendor renowned for producing server hardware. They feel sturdy and are well crafted. Adding them into the system is a smooth process and doesn’t need any excessive force, something I have seen required with other blade systems in the past. Starting from the bottom and working up, we racked the two chassis to begin with. The important thing to note is that the 3 strips of protective clear plastic film along the top of each chassis MUST be removed before installation. Once racked, it was on to adding the additional nodes into the chassis. We opted for a two and two approach, with the two compute nodes in the top of the chassis and the two storage nodes below.


The reason for this was that there is extra airflow via the top of the chassis (hence removing the film), which will benefit the compute nodes. But this is only a recommendation; any type or size of node can occupy any of the available slots. If you add a storage node to the configuration, then you will also have to insert the accompanying drives. Again, make sure you add these into the corresponding bays in the front.


Getting Set Up

In preparation for deploying our HCI equipment, we have also deployed a management vSphere cluster (6.5), and in here, amongst other things, we have created our PDC and SDCs, each sharing responsibility for AD, NTP, DHCP (for both the mgmt. and iSCSI networks) and, most importantly, DNS. I can’t stress this enough when it comes to networking: 9 times out of 10, it’s a DNS issue. Make sure you get your forward and reverse lookup zones correct.
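Because a mistyped DNS record is such a common cause of a failed deployment, it is worth sanity-checking the forward and reverse lookups before you kick off the NDE. Here is a small sketch of the kind of check I mean; the host names and addresses are made-up placeholders for your own management network entries.

```python
import socket

# Hypothetical entries from the HCI management network; replace with your own
hosts = {
    "hci-vcenter.lab.local": "10.10.10.20",
    "hci-mnode.lab.local": "10.10.10.21",
    "hci-storage01.lab.local": "10.10.10.31",
}

for fqdn, expected_ip in hosts.items():
    try:
        forward = socket.gethostbyname(fqdn)            # A record lookup
        reverse = socket.gethostbyaddr(expected_ip)[0]  # PTR record lookup
    except (socket.gaierror, socket.herror) as err:
        print(f"{fqdn}: lookup failed ({err})")
        continue
    forward_ok = forward == expected_ip
    reverse_ok = reverse.rstrip(".").lower() == fqdn.lower()
    print(f"{fqdn}: forward {'OK' if forward_ok else 'mismatch: ' + forward}, "
          f"reverse {'OK' if reverse_ok else 'mismatch: ' + reverse}")
```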

What I have learned from the time I have spent with the NetApp HCI platform is that understanding what this system requires from a networking perspective, and setting that up correctly, is key to a successful deployment. The team and I had reviewed the documentation available on the NetApp support site (the prerequisites checklist and the installation workbook), yet our first attempt failed at 7%, which we traced to a VLAN configuration issue on the switches. After that, it was pretty much plain sailing.

As you can see from below, we left a monitor connected to one of the compute nodes, and you can see it deploying ESXi in standard fashion.


We have factory reset the HCI kit several times to get an understanding of the different options during the NDE process, and it’s fair to say they are pretty self-explanatory (each option has a blue “i” next to it which goes into detailed information about what you are configuring). One thing we did note is that using the basic networking wizard and then flipping over to the advanced one pre-populates pretty much all the fields, but gives you more control of what is assigned. We wanted to move the mNode from next to the VC to next to the MVIP for the SolidFire cluster, and simply changing the digits of the last octet turned the box red as unverified; to get the engine to re-check the IP against everything else on the page and confirm it’s not in use, you have to delete the decimal point associated with that octet. You also cannot separate the vMotion and management subnets without the use of a VLAN tag, so if you don’t add the tag before trying to separate them, the engine’s behaviour can be a bit unclear unless you understand how the physical network topology is designed to interact with the HCI platform. It’s good to see that you cannot proceed until everything is properly entered. Another handy feature is the ability to download a CSV copy of all the variables (passwords are redacted) just before you hit deploy.

By repeating the setup process, we got an idea of how long it takes: from the final review of the NDE inputs and clicking “Looks good, let’s go,” we were seeing 6.0u3a deploy in just over 35 minutes and 6.5u1 going to the 55-minute mark. When watching the progress bars, it’s clear to see that more time is spent deploying the VCSA with 6.5, which probably explains why it’s a lot easier to use and less buggy than its predecessor. I have been trying to move over to the appliance for a while now, and with the work I have been doing with this HCI platform and 6.5, I am now a convert.


Up and Running

Once the NDE is complete, you can click the blue button to launch the vSphere Client, which will connect to the FQDN entered during the NDE. Once logged in to the client, we can see from the home landing page that the SolidFire plugins have been added: NetApp SolidFire Configuration (for adding SF clusters, turning on vVols, user management, and joining up to the mNode) and NetApp SolidFire Management (for reporting, creating datastores and vVols, adding nodes and drives, etc.).


NDE will create a Datacenter and a containing Cluster with HA and DRS enabled, and add the hosts to it. It also creates two 1.95TB VMFS6 datastores on the SolidFire cluster with SIOC enabled. Sadly, the current management plugin will only create VMFS v5 for any datastores you wish to create after initial deployment, so if you need or want v6 then you are going to have to destroy the datastore and recreate the newer version onto the LUN; a minor issue, but it could become laborious if you have quite a few datastores. What is nice, though, is that you can configure the SolidFire cluster to provide vVols and datastores at the same time, and with it being a SolidFire back end, you get the guaranteed quality of service you expect from any storage provided by that platform.
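If you do end up with more than a handful of datastores to check or rebuild, a quick inventory report saves clicking through the client. Below is a small, read-only sketch using pyVmomi that lists each VMFS datastore and its filesystem version so you can see which ones are still v5; the vCenter address and credentials are placeholders.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical connection details for the vCenter deployed by the NDE
VCENTER = "hci-vcenter.lab.local"
USER = "administrator@vsphere.local"
PASSWORD = "********"

context = ssl._create_unverified_context()  # lab only; use proper certificates in production
si = SmartConnect(host=VCENTER, user=USER, pwd=PASSWORD, sslContext=context)
content = si.RetrieveContent()

# Walk every datastore in the inventory and report its VMFS version
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    if ds.summary.type == "VMFS":
        print(f"{ds.name}: VMFS {ds.info.vmfs.version}")

Disconnect(si)
```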

Take Away

I have to say that I have been impressed by the HCI platform. From getting it in the door to racking and stacking and then progressing through the NetApp Deployment Engine, it has been a smooth and risk-free process. The guard rails of the NDE allow for a robust vSphere deployment yet let you tweak parts to fit your environment (e.g. you don’t have to deploy a new vCenter, you can join an existing one). I also mentioned above that it has helped win me over to using the VC appliance, and there will be no going back now for me. Having spent time working on the kit, I can fully understand the decision NetApp have made in providing separate storage and compute nodes, and I am confident that customers will also see the benefit of truly flexible, independent scalability in an HCI deployment, not to mention the performance and fault tolerance of a shared-nothing architecture. I look forward to the first customer reviews, and from the number of quotes Arrow have been putting together recently on this product, it’s not going to be long before it is established as a leader in this market segment.


Next steps

So when it came time to test the HCI platform, I was chatting to a friend who informed me there was a VMware fling that could help. Now, I had heard about the wondrous nature of the flings whilst listening to @vPedroArrow and @lost_signal on the Virtually Speaking Podcast a while back, but in my current line of work I hadn’t had a need to use them until now. In my next post I will go into more detail on these and look at some of the results that I received.

 

Time to Stop with the Legacy Backup Solutions

A couple of months ago I attended the UK Veeam ON Forum in London, and once my brain had registered that I was not on my way to the Aviva Stadium with several hundred of my fellow countrymen, but was in fact at an IT event, things started to make more sense. This was to be a day packed full of information Veeam wanted to share with their partner community, and I am glad to say they didn’t hold back. My key takeaway from the day is the number of organisations that lack the ability to meet the recovery objectives defined within their business, and as organisations move to a hybrid cloud, that inability is only increasing. When surveyed by ESG, 4 out of 5 respondents disclosed that their current infrastructure cannot keep up with the IT needs of their company and that users suffer. Follow-up questions in this survey also highlighted that 3 out of 4 organisations were exceeding their allowance for data lost due to an outage. This is mind-blowingly huge!

These two areas are referred to as the Availability Gap and the Protection Gap, and it is clear to see that with the drive for businesses to do more with less and leverage the public cloud, data bears these burdens and is ultimately losing. I have long asked companies: “How long can you let your employees sit idle? How damaging to your brand would it be if you could not transact orders or your website was down? And how much money would your business lose in that period of time?” These are just some of the questions that must be asked and answered correctly throughout a piece of data’s life to properly assess and implement a strategy to defend it and deliver it back to the business with speed. There seems to be a drive for organisations to implement new technology and upgrade existing hardware, while protection and availability are a secondary or tertiary concern. Data is a company’s most valuable asset. Caesars Entertainment Corporation’s single most valuable item on their books isn’t the faux Roman architecture hotel in the middle of the Las Vegas strip; it’s the company’s big data loyalty program that has been running since 1998 and holds over 45 million member records, with a figure of $1B attached.

This brings together two interesting points: how many companies out there are not taking the time and care with data, which concerns me as a customer, and how poorly understood the value of the data a company holds about a person or organisation is. As a personal consumer, I would be slightly put out and might consider taking my custom elsewhere if, for instance, I contacted my butcher to ask for “the same cut I had last week”, yet he had no record of the transaction and my meal didn’t turn out as nice as I wanted. But in business, what if I were to contact a supplier and say, “you know that last batch of X was great, can I get another 1,000 or 10,000 ASAP and put it on my account”? Without the proper information at hand, the supplier can neither see what that order was in order to fulfil it again, nor check whether I have the credit to place such an order, nor know where to ship it once produced. Care and consideration must be taken with this data, and how we safeguard it should be as intrinsic as the application that creates and accesses the information.

So why Veeam, you may ask? Simply put, because they understand the 24/7 always-on business environment we now live in. This is a 21st-century company not constrained by 1980s backup architectural design, and a leader of innovation within their field. So let’s stop talking backup and look at what we really demand: Availability. Over the coming weeks I am going to be looking at Veeam and their ever-increasing portfolio of products: what it can deliver, and how it can quickly and effortlessly meet a company’s availability and protection needs, whether on site or in the cloud.

To read the full ESG commissioned report look here

Reflecting on VMworld EMEA 2017

Back in September I found myself on a Sunday morning flight from London to Barcelona to explore one of the largest technical conferences held this side of the pond, VMworld Europe. The conference kicked off on Monday with a partner day and a general session where CEO Pat Gelsinger (@PGelsinger) said two things that really stood out for me. Number one, he thanked the audience for their continued support of VMware products and requested that we “go all in” with the ever-increasing portfolio. Number two, he said, “Today is the slowest day of technical innovation of the rest of your life”, and boy is he not wrong.

Now, I have been working with VMware for well over a decade, playing with ESX and GSX, and I picked up my first VCP certification on version 2.5. But focusing on the storage industry and products in that space, my VMware focus had waned, and whilst I had heard mention of new products and features that VMware had developed, until I got to this conference I didn’t realise how vast and how varied this portfolio had grown. To hear Pat and co bandy around the slogan, if you will, of “Any application on any device on any cloud”, you get to see how much of a reality this is.

This message really drove home during day two’s keynote, when Purnima Padmanabhan (@PPadmanabhan), VP Product Management, Cloud Management Business Unit, and Chris Wolf (@cswolf), VP and CTO Global Field and Industry, used the case of a fictional pizza company, Elastic Sky Pizza, to show how a company that hadn’t adapted to the changes in the marketplace was now circling the drain. At the point we enter the story, a new CTO has literally just been appointed to the role, and it’s their priority to turn the failing new website, app and ordering system around and make sure that it is delivered on time and on budget.

This section of the keynote carries on for nearly an hour, and while it does feel a tad long in places, it is really interesting to see how the many different businesses within VMware had developed products that interact with each other to deliver a common goal.

AppDefense is one product that stood out. This is a serious piece of kit that has the intelligence to understand the intrinsic way applications and the flow of data should work, and reports if anything deviates outside its allowed parameters. This is a huge leap in proactive application security, allowing both the developers and security teams to work hand in hand to deploy robust applications. I feel that in time this will become part of a standard VMware environment, as I believe what AppDefense does in allowing you to understand what exactly is going on within your environment is the missing feedback loop you need to truly deliver an SDDC.

We also got a look at VMware Cloud on AWS using Elastic DRS, and also HCX. Now, these solutions look cool and I can’t wait to try them out when I have some free time. Another great highlight was Pivotal Container Service (PKS), which lets you run an enterprise-grade Kubernetes deployment on your site, allowing for controlled project deployment for your DevOps teams. Not only is it easy to deploy, it works hand in hand with NSX to build in security from day one.

With all of these technologies you get a sense of what VMware are trying to achieve with their portfolio, the ability to bridge the hybrid cloud, and you can see the direction the company is heading in over the next twelve months. I recommend that if you have a chance, you take a look at this recording on YouTube to truly appreciate the interaction and productivity you can achieve with the VMware ecosystem. But before you do, buckle your seat belt, as things move pretty fast.

If you would like to know a bit more about this and some more of what happened in Barcelona, then please have a listen to the Reflecting on VMworld 2017 episode of Arrow Bandwidth, featuring both myself and Vince Payne.

A Tale of Two Cities

This year, I was fortunate enough to attend both NetApp Insight events, thanks to @SamMoulton and the @NetAppATeam; and whilst there are normally differences between the conferences it’s fair to say that this year they were polar opposites. Whilst the events team try extremely hard to make sure there is minimal difference between the two, this year there were things outside their control. Yes, there were pretty much the same topics and breakout sessions with the same speakers, and there was an even representation of sponsors and technology partners at both events, yet things were different.

Without going into too much detail, on Sunday the 1st of October, the worst case of domestic terror within the US happened to coincide with the arrival date and location of NetApp Insight Las Vegas, and the hotel where the majority of attendees were staying. This changed the overall mood and perception of the event. Las Vegas turned into a more sombre affair with a more perfunctory feel, which was only right given the events that occurred. Yet NetApp dug deep, showing skill, resolve and sincerity, and they still delivered an excellent event to be proud of. I would also like to give a huge thank you to the LVPD, first responders, EMTs and anyone else associated with helping those who were caught up in this tragedy. During this tragic turn of events, true human nature and kindness shone through.


The NetApp Insight event that occurred two weeks ago in Berlin was a completely different beast. It was our fourth year at the Messe Berlin & City Cube, and like any recurring venue, it has started to feel more familiar, like an old friend that we hadn’t spent much time with recently. Some in the partner community objected to the event being held in Berlin for the fourth year in a row. From my perspective, the high quality of content delivered in breakout sessions during the conference is the main draw for delegates, and whilst it’s nice to visit a new city, you have to feel for those who work in the Americas (just a little bit), where pretty much every conference is held in Las Vegas. (Veeam ON, which was held in New Orleans this year and is in Chicago next year, being the major exception.) After four years, I feel I’m only scratching the surface of what Berlin has to offer. I’ll probably miss it next year, but we are following the crowds to Barcelona and will be there in December 2018.

Having attended many NetApp Insight events over the years, it’s fair to say that on Day 1 of Insight Berlin there was a different, more positive feel to the conference, one that has eluded it over the last five or so years. Those showing up were excited about what the four days would hold. The employees speaking or manning booths were eager to meet and discuss the advancements made in the past 12 months: no longer driving home the message about speeds and feeds, but talking about the services and solutions that NetApp products bring to the table, and with over 250 services and solutions, that’s a massive number of ways to make a data fabric that’s fit for you. It was great to see partners from across EMEA wanting to learn more about the Next Generation Data Centre (NGDC) portfolio and understand how best to adapt it to their individual customer requirements. I also sat in on a few sessions to make up for those missed due to being cancelled in Las Vegas, and it’s fair to say that everyone’s heads (mine included) were more in the game.


Berlin started with a drum roll, quite literally, with the curtain raising act from the amazing Blue Devils Drum Line, all the way from California (check them out on YouTube). From @Henri_P_Richard, we heard a strong confident message about how to change the world with data. We learned how digital transformation is gaining momentum and empowering businesses as they move from survivors to thrivers in our data-centric era. Henri quite nicely backed this up by pointing out that NetApp are the fastest-growing of the top 5 total enterprise storage system vendors. It is also the fastest growing All Flash Array vendor, the fastest growing SAN vendor, and the world’s number 1 branded storage OS. But what really underlined these points was the reported earnings that came out Wednesday evening, which moved NetApp’s share price from $45 to $56 (as of writing), on the back of net revenues increasing 6% YOY, and raising their outlook for the rest of the fiscal year.


There were several stand-out points made at Berlin. At the top of probably everyone’s list is the bold move that NetApp are making into the HCI space and the excellent, untraditional tack they have taken to position this technology. Listening to NetApp’s detractors, I feel that there are a lot of established first-generation HCI vendors that fear what NetApp will bring to the table. (You only have to look at the AFA marketplace and how NetApp have delivered 58% global YOY growth.) As a distributor, we applied to get a demo unit to help promote this product, but due to the huge demand from customers, that box has had to slip down the priority delivery chain, which goes to show that despite the FUD being bandied about, customers out there really value the message and benefits that NetApp HCI is bringing.


One of my highlights came during the Day Two general session when Octavian Tanase and Jeff Baxter outlined the direction NetApp are heading in over the upcoming months. One of the many interesting technologies they demonstrated during this section of the keynote was the Plexistor technology, which NetApp acquired for $32M in May 2017. With Plexistor, NetApp are able to not only increase throughput over AFF by ten times, from 300K IOPS to 3M IOPS, but they also demonstrated that they can reduce latency seventyfold, from 220µs to 3µs! Now, that’s a performance improvement of an order of magnitude or more in two separate dimensions.

This is, at the moment, a very niche technology which will only benefit a very small portion of the storage population to begin with, but it does illustrate that NetApp are not only delivering some of the most advanced endpoints on the globe today, but also pushing up against, and stepping well past, the boundaries to stay at the forefront of data management. Working with Storage Class Memory and NVMe to deliver what will be the norm for the next generation of storage appliances, NetApp are demonstrating that they have a clear understanding of what the technology is capable of whilst blazing a trail that others desire to follow.

For those of you who missed the event (shame on you), NetApp UK are holding a one-day event on the 12th of December at their main UK office (Rivermead) for partners; and for those with a NetApp SSO, you can now access all the great content recorded during both Insight events and download copies of the presentations to review at your leisure. When you have done all that, ask yourself, “How are you going to change the world with data?”

Time, it’s on your side

We all seem short on time these days. We have conference calls and video chats to save us travel time when we can. We use TLAs (three letter acronyms) whenever possible. We are forever on the hunt for the next “life hack” or “time saver”.

NetApp Insight is getting closer, and if you’re planning on attending, hopefully you’ve already started mapping out your schedule; if you haven’t, then fear not. As an IT professional, your time is extremely valuable. Time is precious to you and to your employer, and you both want to get the most out of the day. But with Insight 2017 stacked with so many great sessions this year, how can you choose?

Whilst everyone’s interests are different, I thought I’d give my pick for the sessions that I’m looking forward to at Insight Las Vegas. Whether you’re a first timer or an old guard Insight veteran, I hope this will help you be smart with your time or as the Stones put it, “time is on your side.”

13145-1 – Data Privacy: Addressing the New Challenges Facing Businesses in GDPR, Data Privacy and Sovereignty – Sheila FitzPatrick. GDPR is a critical challenge that affects companies all over the world, not just in Europe. I have heard Sheila FitzPatrick speak on this topic several times, and every time, I leave with some really useful info about how to help customers move towards legal compliance with the imminent deadline looming (May 25, 2018). This session will help you elevate the conversation around GDPR with details about how to help your business avoid those hefty fines.

16365-2 – First-Generation HCI versus NetApp HCI: Tradeoffs, Gaps and Pitfalls – Gabriel Chapman. HCI is definitely going to be the hot topic at this year’s Insight, with SeekingAlpha highlighting NetApp as one of the likely winners in this space. Here we have an opportunity to hear from Gabe, who has spoken at Tech Field Days in the past with great passion on the topic and has been working hard with the SolidFire team to craft this solution. This session will highlight the advantages of this solution over traditional HCI offerings and their limitations, as well as why it will appeal to those who see a benefit in next generation infrastructure.

16594-2 – Accelerate Unstructured Data with FlexGroups: The Next Evolution of Scale-Out NAS – Justin Parisi. For those of you who haven’t heard the Tech ONTAP podcast (what a shame!), this is a session presented by one of its hosts and will give you an idea of the great content it puts out. During the session, Justin Parisi looks at why FlexGroups are winning in the unstructured data space and how they improve upon the FlexVol. Just don’t ask him about SAN…

12708-2 – How NVMe and Storage-Class Memory Are Reshaping the Storage Industry – Jeff Baxter and Quinn Summers. These are two very knowledgeable presenters who deliver information-rich content, and I’m happy to see them giving a session together. This session looks at NVMe, where NetApp is currently leading the field in capacity delivered to its customers, and Storage-Class Memory, and how these technologies will affect data centre design and application deployments in the near future. For those wanting to keep at the forefront of technology advancements who were unable to get to the Flash Memory Summit, this is the session for you.

16700-2 – FabricPool in the Real World: Configurations and Best Practices – John Lantz. FabricPool was one of the key features of the 9.2 payload, and its announcement at last year’s Insight general session was a mic drop moment. Now that the required ONTAP version is available, this is an excellent way to hear how best to put it into practice, and who better than John to delve into the core of this technology and its design considerations, and walk you through how to deploy one of the more fascinating parts of the data fabric.

18342-1 – BOF: Ask the A-Team – Next Generation Data Centre – Mark Carlton. I would be remiss if I didn’t call this out as a session of note (and yes, centre IS spelt with an R-E). This is a “birds of a feather” session, which means it’s more of an open conversation or Q&A rather than a lecture. Hosted by Mark Carlton with several members of the A-Team on hand to provide honest opinions, feedback, and tales from the field with the Next Generation Data Centre, you should leave this session with a greater understanding of how to make the move to NGDC.

18442-2 – Simplify Sizing, Deployment and Management of End-User Computing with NetApp HCI – Chris Gebhardt. Another session covering this year’s H O T topic. In this breakout, Chris will go into what you need to know to have a successful deployment of NetApp’s first-generation Enterprise HCI offering. This is likely to be a popular session, so make sure you book early.

17349-2 – Converged Systems Advisor: Simplify Operations with Cloud-Based Lifecycle Management for FlexPod – Wyatt Bennett and Keith Barto. Emerging from a recent acquisition by NetApp is this superb piece of software that allows you to graphically explore the configuration of a FlexPod against a CVD and make sure that you are correctly configured. If you have anything to do with FlexPod, this is probably one of the more interesting developments in that area of the portfolio this year, and at this session you can hear from two of the people who have been building this product for several years and gain a better understanding of how it can benefit your deployments.

18509-2 – VMware Plugins Unify NetApp Plugins into a Single Appliance – Steven Cortez. With the recent update of the plugin for vSphere, here is your one stop for a good look at what has changed with Steven Cortez. Backup can seem like a beast of burden, but it needn’t be when you look at this offering and see what this new plugin can provide, whether that be over the old VSC dashboard, improvements to VASA integration and SRA functionality, or even VVol support. In this session, Steven will cover the more popular workflows within the unified plugin.

17930-3 – Virtual Volumes Deep Dive with NetApp SolidFire – Andy Banta. Andy will be telling you why you want to flip the switch and move from traditional datastores to VVols, and all the benefits and loveliness that come with implementing a next-generation VM deployment. Some conference attendees may feel they know ONTAP like the back of their hand, but maybe this is the year to give SolidFire some serious focus, and this is one session that will show you why.

26420-2 – Hybrid Cloud Case Studies – Scott Gelb. Come to this session to hear Scotty Gelb‘s top reasons for why you should embrace and implement a hybrid cloud strategy to the benefit of your company and customers. Based on customer experience, in this breakout, he will cover the considerations needed for a successful deployment and how to migrate your data to the cloud.

It’s also worth noting that whilst the sessions are the real meat on the bone for the conference (and you do get access to the content after the event), there’s lots more to do! The general sessions are always enlightening, and I look forward to what George Kurian will have to say. Then there’s the ability to give honest feedback directly to the PMs. Get your certs up to date (these have all been updated since Insight Berlin 2016) or spend some time in the hands-on labs. The Dev Ops café was also a hit last year. The list goes on and on.

The best advice I can give for attending is to do your homework and plan what you want to get out of the conference. Plan for lunch. Plan for some downtime during the day. Plan for a “working from home” day after the conference to get caught up, as you will no doubt be shattered. Maybe even plan to have a go at tumbling dice whilst in a casino. Plan for new friends and new faces, and most of all, plan to have a good time, because before you know it, you’ll be singing “It’s all over now.”


Setting Sail for Uncharted Waters

Today might be a big day in NetApp’s history. Not only is it celebrating the company’s 25th year, a third quarter in a row of revenue growth, 140% year-on-year growth in the All Flash Array (AFA) market segment, and the no. 2 position by revenue among AFA vendors (IDC). Nor is it just celebrating its SAN market share growing 3.6x faster than its nearest competitor, over 6.4PB of NVMe shipped, or its SIX IT Brand Pulse awards for its Scale-Out File Storage, FlexGroup. It’s starting the week with a product announcement.

The 6 IT Brand Pulse awards

And whilst there may be cake and balloons at the offices on East Java Drive and Kit Creek Road, the company will be focused on moving forward. Today the company takes a step outside the Storage and Data Management field that it has dominated for two and a half decades, and into an area of the IT industry that has generated a lot of interest over the last couple of years yet is still relatively new and unmapped: the Hyper Converged Infrastructure market.

Now, some of you may be saying two things: one, that NetApp are quite late to the HCI game; and two, what can they possibly bring? Remember that NetApp were late off the blocks with an All Flash Array, but look at the opening paragraph again to see just how well that’s now going; and for what they can bring to the game, please read on.

Some of you may remember the version of EVO:RAIL that NetApp brought out a couple of years ago and feel they should stick to doing storage products; but the difference between that and today’s launch is the fact that this time NetApp have solely led the development of the product, rather than having to follow a blueprint VMware put together for a wide and varied list of hardware vendors.

First-generation HCI solutions were designed with the simplicity of deploying virtualisation technologies in mind, yet with this approach and a race to market they created limitations on performance, flexibility and consolidation. Despite claiming they could remove application silos by mixing workloads, these limitations meant they ultimately failed at scale. These first-generation hardware offerings provided both compute and storage within the same chassis, which meant that resources were tied together and both had to be scaled in parallel when either became low or exhausted.

NetApp approach the HCI arena and the limitations of current offerings with the Next Generation Data Centre at the core. The 4 key aspects that make up this HCI solution are: Guaranteed Performance, Flexibility and Scale, Automated Infrastructure, and the NetApp Data Fabric. It provides secure, efficient, future-proof freedom of choice.

One of the things that people love about SolidFire is its ability to scale with ease, and growth is a key feature of this HCI infrastructure. With the ability to grow compute and storage independently, regardless of what your applications need, you can start small and scale online and on demand, with a varying number of configuration options to satisfy any enterprise environment. This in turn allows you to avoid the overprovisioning associated with scaling traditional first-generation HCI solutions, whether of compute (incurring unnecessary licensing costs) or of storage (having excessive amounts of flash media present).

Out of the box, this solution utilises the NetApp Deployment Engine (NDE) to eliminate the majority of manual steps needed to correctly commission the infrastructure, combined with an intuitive vCenter plugin and a fully programmable interface, complementing this scalable architecture to make it a truly software-defined HCI solution.

The all important front bezel

There will be a lot of interest in this enterprise-scale hyper converged infrastructure solution over the coming days and weeks. I applaud NetApp for making the move into uncharted territory, and I look forward to reading more about it ahead of its launch later in the year, as this solution combined with NetApp’s Data Fabric will honestly allow you to harness the power of the hybrid cloud.

Certification – More than just a tick in the box

Roughly six weeks ago I received an invitation to participate in the item development workshop to update the NetApp Certified Implementation Engineer Data Protection Specialist exam. The premise was to take the exam from its current state and bring it up to date, as a lot has changed within NetApp ONTAP and its associated protection methodologies in the last 2 years. To get a good idea of how much has changed, simply look at the ONTAP 9.1 release notes: even the data protection section talks about NVE, RAID-TEC and SnapLock, just to mention a few. So it was an honour to be invited to help update the exam and something I was looking forward to.

Sign greeting my arrival at RTP building one

Just over a week ago it was time to undertake this workshop, and on a lovely sunny Sunday morning I boarded a plane to head to NetApp’s Research Triangle Park (RTP) campus in North Carolina, USA, where the workshop was to be held. The following morning at 9am sharp we started the week-long activity of modernising the exam to the most recent GA release of ONTAP. The workshop was run by an independent company who specialise in writing certification exams. Their job was to lead the workshop, making sure we kept to the exam blueprint, kept it to the right level of difficulty, and asked each question as directly as possible. One of the first things we covered was the difference between assessment and certification, and for those of you who may be unaware, the difference is probably twofold. For an assessment, all the information required to pass is contained within structured course material (e.g. PowerPoint or PDF course notes), whilst a certification draws on many different sources of information, ranging from course notes to technical reports to documentation and even industry knowledge. The other main difference is that a certification needs to be able to stand up to any legal challenges to its content. So with that, we got down to work.

With all the changes going on with the portfolio and even within ONTAP, it was great to get together with 9 other individuals who also shared a desire to update this exam, and to see not only how they viewed ONTAP as having changed over the last two years, but also the use cases and deployment models being adopted for the technology. Over the next few days we reviewed the current question pool and then set to work writing new questions. These were then assessed as a group to see if they were relevant and of the right difficulty, to name just two of the measurements we judged each question on. It was also good to see that the questions proposed for the exam were honest and fair, with no intent of trying to trick candidates.


It was both a long and rewarding week where I’m sure everyone in attendance learned something new. It also shone a light on the amount of work and effort that NetApp put into constructing a certification exam, as they understand the benefit to the candidate of the hard work of preparing for and taking the exam, and the badge of honour received for the certification. I have often felt that obtaining a certification shows that you have a desire to know and understand the technology in question and that you have taken the time to learn its best practices. Certifications can help differentiate you within your company or when you apply for a new role. Just to make sure I’m up to date with what is going on, I usually take an exam during NetApp Insight, mainly for my own personal benefit, but it helps reinforce the value I have to offer to the team when I am back at my day job.

Before we knew it, Friday afternoon had rolled around and we had completed all the tasks required of the workshop, which means that a few weeks from now the updated exam, NS0-512, will go live. A big thank you goes out to the NetApp Certification Program for inviting me to the workshop, and also to the nine other individuals and the independent test company I had the pleasure of working with for the week. I left ready to talk SnapMirror and MetroCluster with anyone who wanted to listen. So if you get the opportunity to help write or update an exam, I would highly recommend it; and before you start to contact me for help, the answer is “yes, it’s on the exam.”

NOTE: For more information, listen to the upcoming Tech ONTAP Podcast Episode 78, NetApp Certifications – NCIE, featuring the NetApp A-Team.