Getting Started with DevOps

Part of my job allows me to travel and meet partners up and down the UK, helping to enable them to sell the NetApp portfolio properly. One thing I have noticed is that even the more proactive partners are still chasing the "modernise" aspect of the three IT imperatives as their route to market; some seem to be slowly adopting "build", and yet the majority are avoiding "inspire".


These three imperatives align with the three key parts of the Data Fabric, and each has a place within every organisation. Making sure your customers understand the Data Fabric story and how it relates to their business is something I task each partner with and, if need be, provide support for.


Yet for all my presentations and education, there still seems to be a chasm that our partners need to cross, and some of the feedback I have received has made it clear that there are quite considerable differences between selling new hardware and selling cloud products and services.

One of the pain points seems to be a lack of understanding and training around cloud environments; the fact that they all use a different nomenclature often leads people either to search out a Rosetta Stone or to give up.

First, I would suggest that anyone serious about getting to know what the DevOps community is all about should read "The Phoenix Project" (by Gene Kim, Kevin Behr and George Spafford) and, if you enjoy it, the accompanying material in "The DevOps Handbook" (Gene Kim, Jez Humble, Patrick Debois and John Willis). These two books provide great insight into what is happening within IT organisations across the globe today.


If you have read it and don't know where to go from there, or if a 250-page IT novel doesn't interest you, then Arrow can help.

Let me start by asking whether you know your Jenkins from your Jarvis, your Trident from your spear, your SCRUM from your ruck, your CI/CD from your AC/DC? Or how about your containers from your Tupperware, Mode 1 from Mode 2, GitHub from a wine bar, Kubernetes from K8s, Prometheus from Sulaco or your Hedvig from Hedwig?

Do you understand modern, scalable, dynamic application development and how such applications are deployed in today's hybrid cloud world using microservices, service meshes and declarative APIs?

If you have trouble identifying the terms above and feel they are more akin to Pokémon than to IT, fear not! Today we are launching the Arrow Build series.

The idea is for this to be a series of events to help you and your organisation get up to speed and gain the skill set to work with these innovative application developers and born-in-the-cloud businesses.

Launching with the first event in our London office, this half-day hands-on session will introduce some of the terms you are likely to hear and also provide a great look into the modern application development framework. If this is successful and there is demand for it up North, then we may repeat the event or host something similar in Harrogate, but I would strongly urge all partners to come and attend the first session. Not only will you gain some new skills (which, to be honest, you want so you can put them on your LinkedIn profile), but it will allow us to create and grow a UK community. With the British government (when not arguing about Brexit) striving to make us a world leader in AI (we can argue the AI v ML stance later), many of these skills are applicable, and if you prefer your GUI to your CLI then there are plenty of things we can do to help you understand the landscape.

With your wittiest T-shirt on, I look forward to seeing you in London on the afternoon of the 19th of September. Bring along your colleagues, stay and have a beer with us after, and until then:

while (alive)
{
   eat();
   sleep();
   code();
}

Racing Ahead Into FY20

This weekend sees the World Superbike Championship head to Imola, where hopefully Alvaro Bautista can continue his run of fine form and dominance as the Ducati team prepare for their 'home' round. Also preparing to lead the pack and leave the competition in the dust is NetApp, as they start their new financial year. To help reinforce the benefits that their technology can bring to customers looking to be more agile with their cloud strategy, NetApp have today come out with a raft of product announcements that should provide the right amount of acceleration to get the start they need and push them into pole position for this FY.

Firstly, they have announced the imminent arrival of the next iteration of the world's number one branded storage OS, ONTAP 9.6. The core tenet of this release is to make things easier: easy to use, easy to gain operational efficiency, easy to incorporate added security and data protection features, and easier to evolve into a hybrid cloud deployment. NetApp position themselves as the Data Authority for the hybrid cloud by helping traditional infrastructure buyers realise their business objectives with modern data centre and next-generation data centre capabilities, whilst also helping CIOs, cloud and enterprise architects, and DevOps teams realise the benefits of cloud; this release further strengthens that stance.

ONTAP 9.6 refines the features within the operating system to maximise uptime whilst protecting and securing data across the hybrid cloud. Simplifications such as standardising the API on REST allow easier integration with other developer tools, and with new Ansible modules also being released to take advantage of this change, the capabilities of ONTAP are opened up to a whole new audience.
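To give a flavour of what that REST standardisation means in practice, here is a minimal Python sketch that lists a cluster's volumes; the cluster address, credentials and queried fields are placeholder assumptions on my part, not anything NetApp prescribe:

# Minimal sketch: list volumes via the ONTAP 9.6 REST API.
import requests

CLUSTER = "https://cluster-mgmt.example.com"  # hypothetical management LIF
session = requests.Session()
session.auth = ("admin", "password")  # assumption: basic auth is enabled
session.verify = False                # lab only; use proper certificates in production

# GET /api/storage/volumes returns a collection of volume records
response = session.get(f"{CLUSTER}/api/storage/volumes",
                       params={"fields": "name,size,state"})
response.raise_for_status()
for volume in response.json()["records"]:
    print(volume["name"], volume.get("size"), volume.get("state"))

The same collection-and-records pattern applies across the API, which is exactly what makes it so friendly to tooling like Ansible.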

With security at the forefront of almost every IT decision, it's great to see some of the improvements that have been made there. From 9.6 onwards, all newly created peer relationships are encrypted, giving you encryption in transit. Also, with inline aggregate-level deduplication (which came in ONTAP 9.2) now becoming common practice, and NetApp Volume Encryption also becoming the norm, it was inevitable that these two features would drive the development of NetApp Aggregate Encryption (NAE). This allows even more space savings, as all volumes in the aggregate share the same encryption key. And if you don't want all the volumes to share the same key, then Multi-tenant Key Management is the new feature for you or your cloud provider.

Another key element of this release is that FabricPool now has more target options, adding Google Cloud (supporting Multi-Regional, Regional, Nearline and Coldline) and China's largest cloud provider, Alibaba (supporting Standard and Infrequent Access). Add these to AWS, Azure and SoftLayer for public cloud targets, and StorageGRID and Cleversafe for private cloud, throw in the volume tiering policies of None, Snapshot-only, Auto and All, and customers have a huge number of tiering options for an AFF or SSD aggregate.
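Those tiering policies are set per volume, and with the new REST API that becomes a one-line change. A hedged sketch (the volume name, cluster address and credentials are all invented for illustration):

# Switch a volume's FabricPool tiering policy via the ONTAP REST API.
import requests

CLUSTER = "https://cluster-mgmt.example.com"
auth = ("admin", "password")

# Look up the volume's UUID by name
records = requests.get(f"{CLUSTER}/api/storage/volumes",
                       params={"name": "project_vol"},
                       auth=auth, verify=False).json()["records"]
uuid = records[0]["uuid"]

# Valid policies include "none", "snapshot_only", "auto" and "all"
requests.patch(f"{CLUSTER}/api/storage/volumes/{uuid}",
               json={"tiering": {"policy": "auto"}},
               auth=auth, verify=False)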

A couple of software products are also being renamed, with the OnCommand part of the naming scheme starting to be removed. OnCommand Unified Manager (OCUM) has been rebranded Active IQ Unified Manager, and OnCommand System Manager is now ONTAP System Manager. Personally, I think this is a good thing, as the OnCommand piece didn't emphasise the product's functionality, something the new scheme does. The key feature that stands out to me in the newly laid-out GUI is that you can now get a year's worth of performance data off the system right there on the dashboard. The inclusion of new Health and Network Configuration views is also a marked improvement over the previous version.

With ONTAP 9.6, NetApp are moving from a mixed model of alternating long-term and short-term service releases to every version going forward being on the long-term service model. This makes moving to every release a reality for certain customers for whom the short-term service model was a deterrent. To get to ONTAP 9.6 you first need to be on 9.5, then jump to the latest; thankfully, ANDU (Automated Non-Disruptive Upgrade) can help with this procedure.

NetApp first introduced NVMe support in ONTAP 9.4, with the launch of the A800 for end-to-end NVMe connectivity and the ability to connect to an A300 or A700(s) via NVMe/FC. With ONTAP 9.5 they brought in Asymmetric Namespace Access (ANA, storage failover for NVMe) and MAX Data, and with 9.6 this is again a key area of focus.

The highlight of this release is the expansion of operating system support for NVMe. Whether that is more host operating systems supporting end-to-end NVMe connectivity with ANA, the ability to set a QoS minimum on those workloads, or the new NetApp Verified Architectures (NVAs) utilising NVMe, this emphasises the refinement and pedigree that the number one storage OS displays as it strives to stay ahead of the competition, delivering cool new features for customers to use via a simple upgrade.

But why, you might ask, are there all these improvements for NVMe? As you may have guessed, with this release NetApp are bringing out another end-to-end NVMe platform. The A320 is the new mid-range NVMe system, providing extremely low latency of around 100 microseconds, similar to that of the A800. With eight 100GbE ports and two expansion slots on board each controller, this is a system not only suited for use in AI environments but one that brings the forefront of storage development to any IT project and ties it together with the Data Fabric.

In conjunction with the A320 release, NetApp are adding NVMe expansion to the portfolio with the NS224, a network-attached shelf. This 2U, 24-drive shelf supports 1.9TB, 3.8TB and 7.6TB drives at launch, and it connects to the A320 using RoCEv2. With the ability to add two shelves at launch, there's plenty that this system can offer. It's like getting out on the open road on a Panigale V4 R.

One of the nicest features ONTAP 9.6 brings is the ability to create a MetroCluster with the entry-level engineered systems. Yes, that's right: the A220 and FAS2750 can now participate in a four-node IP MetroCluster. This will be great news in the UK, where we have quite a few companies that want all the benefits of a MetroCluster but found the starting point out of reach for their infrastructure. To further decrease the initial costs, these versions can also make use of shared inter-site switching; so, if you have spare bandwidth on your existing infrastructure, you can utilise it in your MetroCluster configuration. I am excited to see how this pans out and already have one customer who I know will want to revisit the MetroCluster conversation from tomorrow.

These are just a few of the features and announcements made today, with more around FlexCache, FlexGroup, ONTAP Select and NetApp Data Availability Services (NDAS), to name a few. This is an exciting time to be working with NetApp, as we see the company shift gears and really open the throttle on what their technology can achieve; so if you want to know more, I suggest you lean in and head on over to NetApp.com.

It takes a Village

The above is a favourite saying of a friend of mine, and I think it rings truer than we know. In today's modern society we work together and draw on others' help and knowledge more and more on a daily basis. I for one know that if I can't do something, I go looking for blog posts or even YouTube videos on the topic of choice for advice. Whether it's repairing washing machine handles or getting your soufflés to rise, someone out there has shared a prized piece of know-how to complete the task at hand.

In the IT community the idea of working together is still alive and well. Whether you are on the network team or the virtualisation team, or maybe you are a DevOps team following Jeff Bezos' two-pizza rule, the point of working together to better the environment has never been more true. When I first got started with NetApp I probably had more questions than answers, and thankfully NetApp has a website that helped: the NetApp Community Site.

The new landing page

One of the best things about this site (and others like it) is that it puts you in touch with literally thousands of users of varying skills and levels of knowledge: people scattered around the globe in different time zones, only too happy to help. One of the reasons I like working with NetApp so much is that I know if I post a problem or an issue on the community site, someone somewhere will help, regardless of whether they work for a customer, a partner or NetApp.

Getting to the topics of interest

The Community Site has recently gone through a facelift, and its new and improved user interface looks fresh and is straightforward to navigate. You can easily get into sections devoted to your favourite subject, be that Flash and NVMe, Python developer discussions, or topics on the newly updated NetApp University courses and exams; there is always someone to converse with. You can search for a topic or start a new discussion effortlessly from the home page, an effort to help those who need it as quickly as possible. It is also a nice place to access blog posts; whether on the official NetApp blog site or something created by the community, it's a great location to get a distilled look at the current topics of discussion. So I would urge you, if you haven't had a look for a while, to check out the new and improved, version 2.0 NetApp Community Site; and who knows, maybe you have the knowledge that could help out someone in need.

Oh, and on the soufflés: make sure you don't overfold your egg whites into the base.

Making time for Insight

With Insight US just a few weeks away and my session calendar for the event still to complete, I thought this would be a good time to highlight the sessions that have stood out for me, in the hope that it may help you make a decision. With 307 sessions, 238 speakers and 45 exhibitors, how do you distil all that down to something manageable and meaningful?

Now whilst I normally spend more time on a particular track, it is worth asking yourself, "What am I going to take home from this conference?" Are you there just to get as much information as possible, or to get skilled up on a particular topic, either for an upcoming project or to break into a new area of business? This is probably something you should decide before you start going hell for leather and filling your calendar with random topics like FlexGroup (is that even a thing?).

On my first pass over the catalogue I had 38 interests, which is way too many for even the most hardcore conference attendee, so some culling needed to be done. One thing that bothers me, and probably every conference attendee, are the time slots where you have ten interests happening at the same time while the two hours prior are a big blank hole. Thankfully, vendors have started to record sessions at conferences for that very reason, so for those you cannot make there is always the ability to review them at some other point; plus some sessions just hit you with way too much to take in, and you may need to hear them a second time.

Cloud Volumes is probably going to be the hot topic this year, and with 50 sessions to choose from there's plenty on offer. The first thing I would suggest is to verify whether it's a Cloud Volumes ONTAP (formerly known as ONTAP Cloud) or a Cloud Volumes Service (NFSaaS) session you have picked, so that you get the correct information. I'm sure a few people will get this wrong this year, and you don't want to be one of them.

1228-2 Designing and Deploying a Hybrid Cloud with NetApp and VMware Cloud on AWS, presented by Chris Gebhardt and Glenn Sizemore, is sure to be a popular session, and it will hopefully build on the session NetApp gave at Tech Field Day at VMworld US last month.

1261-2 NetApp Cloud Volumes Service Technical Deep Dive, presented by Will Stowe, is probably going to be one of those sessions people leave and tell others they need to see. With its huge potential, Cloud Volumes Service will become integral to many customers' data fabrics over the coming year, so I'd advise getting skilled up on it as soon as you can.

If you are new to all things cloud and wondering where might be a good place to start, then schedule the 4117-1 Cloud Volumes Service and 4118-1 Cloud Volumes ONTAP sessions in the Data Visionary theatre at Insight Central to get a good idea of these two technologies.

Another product name change: Cloud Control is out, rebranded as NetApp SaaS Backup; but this SaaS suite offers a lot more, so armed with that piece of knowledge, there is a session on one-stop backup for Salesforce (1121-2), and then you can head over to 1188-2, NetApp SaaS Backup for Office 365, to complete the picture.

With security being a major focus across the IT industry, there are several sessions of note on this subject. 1234-2 – Data Security at NetApp: An Overview of the NetApp Portfolio of Security Solutions by Juan Mojica would be an excellent place to start if you haven't thought about how to begin with such a huge undertaking.

You may want to follow that up with 1103-2 – Securing and Hardening NetApp ONTAP 9 with Andrae Middleton. Remember: security teams need to get policies and procedures right 100% of the time; hackers only need to get it right once.

1214-2 What’s On Tap in the Next Major Release of NetApp ONTAP by surviving podcast host Justin Parisi (It has been a bit Hunger Games/Highlander on the tech ONTAP podcast recently) will no doubt fill up fast as any new OS payload draws in the crowd and the Q&A after that session may spill out into the halls.

1136-3 will also be popular, as it covers the advancements made with SnapMirror, with best practices for both Flash and Cloud worlds.

It also looks like some of the sponsors have upped their game, with some excellent sessions. Veeam, for instance, have six sessions to choose from, which is great as they are now on the NetApp price book. 9107-2 – Veeam: Veeam Data Availability Deep Dive—Exploring Data Fabric Integrations, presented by Michael Cade and Adam Bergh, will highlight just some of the great reasons why Veeam have been added to the price book; afterwards, head over to the hands-on labs, as Veeam has made it into the Lab on Demand catalogue. Veeam also have a data exchange whiteboard session, 8102-1 – Veeam: Availability Outside the Datacenter: Public Cloud & Veeam Availability Suite 9.5 Update 4, which, as some of you keen-eyed people may have noticed, will include some information about the much-anticipated Update 4.

For those of you who like your speed turned up to eleven, you may want to attend 9126-1 – Intel® Optane™ Memory Solutions. And I would be remiss if I didn't mention my colleagues at Arrow, whose 9112-2 – Arrow: From IoT to IT, Arrow Electronics Is Accelerating Your Digital Transformation looks at how to deliver and scale an IT infrastructure to meet the challenges of deploying IoT solutions.

There are also the certification prep sessions, and with NetApp U's recent release of two new hybrid cloud certifications, sessions 1279-1 and 1280-1 will no doubt draw a crowd; so if you are planning on having a go, make sure to get these in your diary, and I may bump into you there, as the hybrid cloud certification I achieved at Insight two years ago is up for renewal.

Now whilst this list is my picks, I would suggest you spend a bit of time ahead of going to populate your calendar with the topics you want to hear, and do it sooner rather than later so you can get on the list before a session fills up. Just remember that pretty much all sessions are repeated during the conference, and spend some time at Insight Central, as an hour there can be just as beneficial as a session; but most of all, enjoy yourself. I would strongly suggest you follow the A-Team members on Twitter for up-to-the-moment reviews of sessions and whether catching the second running is worth amending your calendar. And before you start filling the comments section with "Duh – FlexGroup is a hugely scalable container of NAS storage that can grow to trillions of files and yottabytes of storage", there is a session on the topic: 1255-2 FlexGroup: The Foundation of the Next Generation NetApp Scale-Out NAS.

Expanding NetApp HCI

NetApp recently updated their HCI deployment software to v1.31. This version contains several new features to help in deploying a NetApp HCI environment. It's been several months since I initially deployed our demo kit, and I felt it was time to revisit the process and see what has changed.

One welcome new feature is the removal of the reliance on having a DHCP server covering both your 1GbE management and 10/25GbE data networks. Whilst DHCP is a nice idea to help you get up and running, and is easy to configure in the lab, having it running within a production SAN is not exactly common practice; you would either have to set one up or spend time configuring static addresses, which could be time-consuming, especially with half a dozen or so blades.

The other new feature that caught my eye was the ability to use the NetApp Deployment Engine (NDE) to expand a NetApp HCI environment. As mentioned in an earlier post and video (here), adding a SolidFire storage node to an existing cluster is quite easy (in fact, it was a design methodology when they created Element OS), but adding an ESXi node is quite a labour-intensive task. It is great to see that you can now add these quickly through a wizard.

To start the expand process, simply point your browser to the following:

https://storage_node_management_ip:442/scale/welcome
where you are greeted by the following landing page:

As you can see, it wants you to log into your environment. You may notice NetApp have updated the text box so the password can be shown once typed, as the eye icon at the end of the line indicates.

To test this new methodology, instead of buying more nodes (which would have been nice), I removed a storage node and a compute node from their respective clusters and factory reset them. This allowed me to test not only the addition of new nodes into existing clusters but also the removal of the DHCP or static IP addressing requirement before deployment.

Once you are logged in, the NDE scale process discovers any and all available nodes, and this is where you select which of them you would like to add to your environment.

After agreeing to the VMware EULA, you are asked to provide your vCenter's details and then to select the datacentre and cluster you wish to add the node to. These steps are only present if you are adding compute nodes.

After giving the compute node a root password, you are taken to the “Enter the IP and naming details” page.

Finally, NDE scale takes you on to a review screen as these three screenshots (headings fully expanded for visibility) show.

Once reviewed, click the blue "Add Nodes" button. This initialises the now-familiar NDE process of setting up NetApp HCI, which can be tracked via a progress screen.

The scaling process for the addition of one compute node and one storage node took just under half an hour to complete. But the real benefit is that this scaling wizard sets up the ESXi host, plus networking and vSwitches, as per NetApp HCI's best practices, whilst at the same time adding a storage node into the cluster. That isn't the quickest thing to do manually, so having a process that does it for you speedily is a huge plus in NetApp's favour, especially if you have multiple hosts. It's clear to see the influence the SolidFire team had on this update in the ease and speed with which customers can expand their NetApp HCI environments using NDE scale. I look forward to the features in upcoming releases of NetApp HCI; if hyperconverged infrastructure is all about speed and scale, then this update gives me both in spades.

Setting up FabricPool

Recently, I was lucky enough to get the chance to spend a bit of time configuring FabricPool on a NetApp AFF A300. FabricPool, introduced with ONTAP 9.2, gives you the ability to utilise an S3 bucket as an extension of an all-flash aggregate. It is categorised as a storage tier, but it has some interesting characteristics. You can add a storage bucket from either AWS's S3 service or from NetApp's StorageGRID Webscale (SGWS) content repository. An aggregate can only be connected to one bucket at a time, but one bucket can serve multiple aggregates. Just remember that once an aggregate is attached to an S3 bucket, it cannot be detached.

This functionality doesn’t just work across the whole of the aggregate—it is more granularly configured, drawing from the heritage of technologies like Flash Cache and Flash Pool. You assign a policy to each volume on how it utilises this new feature. A volume can have one of three policies: Snapshot-only, which is the default, allows cold data to be tiered off of the performance tier (flash) to the capacity tier (S3); None, where no data is tiered; or Backup, which transfers all the user data within a data protection volume to the bucket. Cold data is user data within the snapshot copy that hasn’t existed within the active file system for more than 48 hours. A volume can have its storage tier policy changed at any time when it exists within a FabricPool aggregate, and you can assign a policy to a volume that is being moved into a FabricPool aggregate (if you don’t want the default).

AFF systems come with a 10TB FabricPool license for using AWS S3. Additional capacity can be purchased as required and is applied to all nodes within the cluster. If you want to use SGWS, no license is required. With this release there are also some limitations on what features and functionality you can use in conjunction with FabricPool: FlexArray, FlexGroup, MetroCluster, SnapLock, ONTAP Select, SyncMirror, SVM DR, Infinite Volumes, NDMP SMTape or dump backups, and the Auto Balance functionality are not supported.

FabricPool Setup

There is some pre-deployment work that needs to be done in AWS to enable FabricPool to tier to an AWS S3 bucket.

First, set up the S3 bucket.

Next, set up a user account that can connect to the bucket.

Make sure to save the credentials; otherwise you will need to create another account, as the secret key cannot be obtained again.

Finally, make sure you have set up an intercluster LIF on a 10GbE port for the AFF to communicate to the cloud.

Now, it’s FabricPool time!

Install the NetApp License File (NLF) required to allow FabricPool to utilise AWS.

Now you’ll do the actual configuration of FabricPool. This is done on the aggregate via the Storage Tiers sub menu item from the ONTAP 9.3 System Manager as shown below. Click Add External Capacity Tier.

Next, you need to populate the fields relating to the S3 bucket with the ID key and bucket name as per the setup above.

Set up the volumes if required. As you can see, the default of Snapshot-only is active on the four volumes. You could (if you wanted) select an individual volume, or a group of volumes, and alter the policy in a single bulk operation via the dropdown button on top of the volumes table.

Hit Save. If your routes to the outside world are configured correctly, then you are finished!

You will probably want to monitor the space savings and tiering, and you can see from this image that the external capacity tier is showing up under Add-on Features Enabled (as this is just after setup, the information is still populating).

There you have it! You have successfully added a capacity tier to an AFF system. If the aggregate is over 50% full (otherwise, why would you want to tier it off?), then after 48 hours of no activity on snapshot data, it will start to filter out to the cloud. I have shown the steps here via the System Manager GUI, but it is also possible to complete this process via the CLI, and probably even via API calls, though I have yet to look into the latter.
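For those who prefer the CLI route, the equivalent steps look roughly like the sketch below, driven over SSH with Python's paramiko. I should stress this is a hedged illustration: the store, bucket, key and volume names are all made up, and the exact command syntax and prompts can vary between ONTAP versions.

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("cluster-mgmt.example.com", username="admin", password="password")

# Define the external capacity tier; ONTAP prompts for the secret access key,
# so we feed it on stdin (illustrative placeholder values throughout)
stdin, stdout, _ = client.exec_command(
    "storage aggregate object-store config create -object-store-name my_s3_store "
    "-provider-type AWS_S3 -server s3.amazonaws.com "
    "-container-name my-fabricpool-bucket -access-key MYACCESSKEY")
stdin.write("MYSECRETKEY\n")
stdin.flush()
print(stdout.read().decode())

# Attach the object store to the all-flash aggregate (remember: this cannot be undone)
_, stdout, _ = client.exec_command(
    "storage aggregate object-store attach -aggregate aff_aggr1 "
    "-object-store-name my_s3_store")
print(stdout.read().decode())

# Optionally move a volume off the Snapshot-only default policy
_, stdout, _ = client.exec_command(
    "volume modify -vserver svm1 -volume vol1 -tiering-policy backup")
print(stdout.read().decode())

client.close()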

One thing to note is that whilst this is a great way to get more out of an AFF investment, it is a tiering process, and your data should still be backed up, as the metadata stays on the performance tier (remember the 3-2-1 rule). So, when you are next proposing an AFF or an all-flash aggregate on an ONTAP 9.2 or later cluster, consider using this pretty neat feature to get even more capacity out of your storage system, or what I now like to call your data fabric platform.

Time it’s on your side

We all seem short on time these days. We have conference calls and video chats to save us travel time when we can. We use TLAs (three letter acronyms) whenever possible. We are forever on the hunt for the next “life hack” or “time saver”.

NetApp Insight is getting closer, and if you're planning on attending, hopefully you've already started mapping out your schedule; but if you haven't, fear not. As an IT professional, your time is extremely valuable. Time is precious to both you and your employer, and you want to get the most out of each day; but with Insight 2017 stacked with so many great sessions this year, how can you choose?

Whilst everyone’s interests are different, I thought I’d give my pick for the sessions that I’m looking forward to at Insight Las Vegas. Whether you’re a first timer or an old guard Insight veteran, I hope this will help you be smart with your time or as the Stones put it, “time is on your side.”

13145-1 – Data Privacy: Addressing the New Challenges Facing Businesses in GDPR, Data Privacy and Sovereignty – Sheila FitzPatrick. GDPR is a critical challenge that affects companies all over the world, not just in Europe. I have heard Sheila FitzPatrick speak on this topic several times, and every time, I leave with some really useful info about how to help customers move towards legal compliance with the imminent deadline looming (May 25, 2018). This session will help you elevate the conversation around GDPR with details about how to help your business avoid those hefty fines.

16365-2 – First-Generation HCI versus NetApp HCI: Tradeoffs, Gaps and Pitfalls – Gabriel Chapman. HCI is definitely going to be the hot topic at this year's Insight, with SeekingAlpha highlighting NetApp as one of the likely winners in this space. Here we have an opportunity to hear from Gabe, who has spoken with great passion at Tech Field Days in the past and has been working hard with the SolidFire team to craft this solution. This session will highlight the advantages of this solution over traditional HCI offerings and their limitations, as well as why it will appeal to those who see a benefit in next-generation infrastructure.

16594-2 – Accelerate Unstructured Data with FlexGroups: The Next Evolution of Scale-Out NAS – Justin Parisi. For those of you who haven't heard the Tech ONTAP podcast (what a shame!), this session is presented by one of its hosts and will give you an idea of the great content it puts out. During the session, Justin Parisi looks at why FlexGroups are winning in the unstructured data space and how they improve upon the FlexVol. Just don't ask him about SAN…

12708-2 – How NVMe and Storage-Class Memory Are Reshaping the Storage Industry – Jeff Baxter and Quinn Summers. These are two very knowledgeable presenters who deliver information-rich content, and I'm happy to see them giving a session together. This session looks at NVMe, where NetApp currently leads the field in capacity delivered to customers, and storage-class memory, and at how these technologies will affect data centre design and application deployments in the near future. For those wanting to keep at the forefront of technology advancements who were unable to get to the Flash Memory Summit, this is the session for you.

16700-2 – FabricPool in the Real World: Configurations and Best Practices – John Lantz. FabricPool was one of the key features of the 9.2 payload, and its announcement at last year's Insight general session was a mic-drop moment. Now, with the required ONTAP version available, this is an excellent way to hear how best to put it into practice, and who better than John to delve into the core of this technology, cover design considerations, and walk you through how to deploy one of the more fascinating parts of the data fabric.

18342-1 – BOF: Ask the A-Team – Next Generation Data Centre – Mark Carlton. I would be remiss if I didn't call this out as a session of note (and yes, centre IS spelt with an R-E). This is a "birds of a feather" session, which means it's more of an open conversation or Q&A than a lecture. Hosted by Mark Carlton, with several members of the A-Team on hand to provide honest opinions, feedback, and tales from the field with the Next Generation Data Centre, you should leave this session with a greater understanding of how to make the move to NGDC.

18442-2 – Simplify Sizing, Deployment and Management of End-User Computing with NetApp HCI – Chris Gebhardt. Another session covering this year's H O T topic. In this breakout, Chris will go into what you need to know to have a successful deployment of NetApp's first-generation enterprise HCI offering. This is likely to be a popular session, so make sure you book early.

17349-2 – Converged Systems Advisor: Simplify Operations with Cloud-Based Lifecycle Management for FlexPod – Wyatt Bennett and Keith Barto. Emerging from a recent NetApp acquisition is this superb piece of software, which allows you to graphically explore the configuration of a FlexPod against a CVD and make sure that you are correctly configured. If you have anything to do with FlexPod, this is probably one of the more interesting developments in that area of the portfolio this year, and at this session you can hear from two of the people who have been building the product for several years and gain a better understanding of how it can benefit your deployments.

18509-2 – VMware Plugins Unify NetApp Plugins into a Single Appliance – Steven Cortez. With the recent update of the plugin for vSphere, here is your one stop for a good look at what has changed. Backup can seem like a beast of burden, but it needn't be when you see what this new plugin can provide, whether that is improvements over the old VSC dashboard, better VASA integration and SRA functionality, or even VVol support. In this session, Steven will cover the more popular workflows within the unified plugin.

17930-3 – Virtual Volumes Deep Dive with NetApp SolidFire – Andy Banta. Andy will be telling you why you want to flip the switch and move from traditional datastores to VVols, and all the benefits and loveliness that come with implementing a next-generation VM deployment. Some conference attendees may feel they know ONTAP like the back of their hand, but maybe this is the year to give SolidFire some serious focus, and this is one session that will show you why.

26420-2 – Hybrid Cloud Case Studies – Scott Gelb. Come to this session to hear Scotty Gelb's top reasons why you should embrace and implement a hybrid cloud strategy to the benefit of your company and customers. Based on customer experience, in this breakout he will cover the considerations needed for a successful deployment and how to migrate your data to the cloud.

It’s also worth noting that whilst the sessions are the real meat on the bone for the conference (and you do get access to the content after the event), there’s lots more to do! The general sessions are always enlightening, and I look forward to what George Kurian will have to say. Then there’s the ability to give honest feedback directly to the PMs. Get your certs up to date (these have all been updated since Insight Berlin 2016) or spend some time in the hands-on labs. The Dev Ops café was also a hit last year. The list goes on and on.

The best advice I can give for attending is to do your homework and plan what you want to get out of the conference. Plan for lunch. Plan for some downtime during the day. Plan for a “working from home” day after the conference to get caught up, as you will no doubt be shattered. Maybe even plan to have a go at tumbling dice whilst in a casino. Plan for new friends and new faces, and most of all, plan to have a good time, because before you know it, you’ll be singing “It’s all over now.”


Getting to grips with SolidFire

We’ve had Nike MAGs, Pepsi Max and Hover boards now we look to the data centre of the future

I have been doing more and more with SolidFire over the last few months, and I’ve had somewhat of a revelation about it. Around this time last year, I thought there was too much overlap with the FAS wing of the portfolio for NetApp to be pursuing an acquisition. To the uninformed, this may look true on paper, but it is completely different in practice. The more I learn about SolidFire, the more I am impressed by the decisions NetApp has made and the direction they are heading.

Hopefully you are aware of all the great benefits of using a SolidFire cluster within your environment, but for those of you who aren’t, I’ll sum it up in one word—predictable. This predictability extends to all features of the architecture including capacity, performance, overall health and healing, and scalability.

An initial 4 node SolidFire deployment

Let’s have a look at performance first. Starting with four nodes, you have 200K IOPS available. By adding more nodes to this cluster, you can grow predictably at 50k per node*. And that’s not even the best part. The real showstopper is SolidFire’s ability to provide you with precisely the IOPS your workload requires by assigning a policy to each volume you create. If you undertake this task via the GUI, it’s a set of three boxes that sit in the bottom half of the creation wizard asking you what your minimum, maximum, and burst requirements for this volume are. These three little text boxes are unobtrusive and easy to overlook, but they have a huge impact on what happens within your environment. By setting the minimum field, you are effectively guaranteeing the quality of service that volume gets. Think about it, “guaranteed QOS,” (gQOS, if you like). That little g added to an acronym we have used for years is a small appendage with massive importance.

Volume Creation wizard
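For the API-minded, those three little boxes map onto fields of the Element API's CreateVolume call. Here's a hedged Python sketch; the MVIP address, credentials, account ID and IOPS figures are all invented for illustration:

# Create a SolidFire volume with guaranteed QoS via the Element JSON-RPC API.
import requests

MVIP = "https://192.168.0.100"  # hypothetical cluster management virtual IP

payload = {
    "method": "CreateVolume",
    "params": {
        "name": "sql-data01",
        "accountID": 1,            # tenant account that will own the volume
        "totalSize": 1 * 1024**4,  # 1 TiB, in bytes
        "enable512e": True,
        "qos": {                   # the three little boxes from the wizard
            "minIOPS": 1000,       # the "g" in gQOS: the guaranteed floor
            "maxIOPS": 5000,       # sustained ceiling
            "burstIOPS": 8000,     # short-term burst ceiling
        },
    },
    "id": 1,
}

response = requests.post(f"{MVIP}/json-rpc/8.0", json=payload,
                         auth=("admin", "password"), verify=False)
print(response.json())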

Most other vendors in the IT industry will say that the use of QOS is merely a Band-Aid, a reactive measure, until you can fix the issue that has caused a workload to be starved or bullied. This requires you to carry out some manual intervention, not to mention the repercussions of letting things escalate to the point where they have already had a negative impact on the business.

We need to change from this reactive methodology. Let’s start by lifting the term “quality of service” out of its drab connotations, give it a coiffured beard, skinny jeans, and a double macchiato. Let’s add a “g” to this aging acronym and turn that hipster loose on the world. gQOS is the millennial in the workplace, delivering a twenty-first-century impact on the tasks and procedures that have been stuck in a rut for years. When you hear someone use QOS ask, “Don’t you mean gQOS?” Then walk away in disgust when they look at you blankly.

With SolidFire you are able to allocate performance independently of capacity, in real time, without impacting other workloads. What does this mean, you may ask? No more noisy neighbours influencing the rest of the system. gQOS addresses the issue of shared resources and allows you to provide fool-proof SLAs back to the business, something sought both by enterprise organisations looking to undergo transformational change and by service providers with hundreds of customers on a single shared platform.

gQOS in action
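And because performance is allocated independently of capacity, re-tuning a guarantee is a single call rather than a data migration. A hedged sketch of that (the volume ID and IOPS numbers are again invented):

# Re-tune QoS on an existing volume via ModifyVolume, with no impact on capacity.
import requests

MVIP = "https://192.168.0.100"  # hypothetical cluster management virtual IP

payload = {
    "method": "ModifyVolume",
    "params": {
        "volumeID": 42,  # hypothetical volume
        "qos": {"minIOPS": 2000, "maxIOPS": 6000, "burstIOPS": 9000},
    },
    "id": 1,
}

requests.post(f"{MVIP}/json-rpc/8.0", json=payload,
              auth=("admin", "password"), verify=False)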

So let’s start positively promoting gQOS because if it’s not guaranteed can we really call it quality? If I was in the tagline-writing business, this area of the NetApp portfolio would read something like “SolidFire Predictability Guaranteed.”

*The SF19210 adds 100K per node.

Grays Sports Almanac image courtesy of Firebox.com

Painting a Vanilla Sky

Expanding the NetApp Hybrid Cloud

During the first general session at NetApp Insight 2016 in Las Vegas, George Kurian, CEO (and a fascinating person to listen to), stated that “NetApp are the fastest growing SAN vendor and are also the fastest growing all-flash array vendor.” This is superb news for any hardware company, but for NetApp, this isn’t enough. He is currently leading the company’s transformation into one that serves you, the customer, in this new era of IT while addressing how you want to buy and consume IT. NetApp are addressing this with the Data Fabric.

If you need a better understanding of the Data Fabric, I would strongly suggest you look at this great two-part post from @TechStringy (part 1 here and part 2 here).

Back in 2001, Cameron Crowe released a film starring Tom Cruise called "Vanilla Sky." In it, the main protagonist suffers a series of unfortunate events and, rather than face up to them, decides to have himself put in stasis until those problems can be resolved. Well, if managing data within varying cloud scenarios was his problem, then the announcements NetApp made earlier this week mean he could be brought back and stop avoiding the issues. So let's take a look at some of what was announced:

NetApp Cloud Sync: This is a service offering that moves and continuously syncs data between on-prem and S3 cloud storage. For those of you who attended this year's Insight in Las Vegas, this was the intriguing demo given by Joe CaraDonna illustrating how NASA is interacting with the Mars rover Curiosity. Joe showed how information flows back to Earth via "JPL … the hub of mankind's only intergalactic network," all in an automated, validated, and predictably secure manner, and how they can realise great value from that data. Cloud Sync not only allows you to move huge amounts of data quickly into the cloud, but it also gives you the ability to utilise the elastic compute of AWS, which is great if you are looking to carry out some CPU-intensive workloads like MapReduce. If you are interested in what you have read or seen so far, head over here, where you can take advantage of the 30-day free trial.

Data Fabric Solution for Cloud Backup (ONTAP to AltaVault to Cloud): For those of you who saw the presentation at Insight 2015, this is the backing up of FAS via AltaVault using SnapCenter. This interaction of portfolio items gives us the ability to provide end-to-end backups of NAS data while enabling single-file restores via the snapshot catalogue function. The service has a tonne of built-in policies to choose from; simply drag and drop items to get it configured. AltaVault also now has the ability to help with seeding your backup via an AWS Snowball device (or up to ten daisy-chained together as a single seeding target), so it has never been easier to get your data into, and manage it in, the cloud.

NetApp Cloud Control for Microsoft Office 365: This tool extends data protection, security, and compliance to your Office 365 environment to protect you from cyber-attacks and breaches in the cloud. It allows you to back up your Exchange, SharePoint, and OneDrive for Business data and vault a copy to another location, which could be an on-prem, nearby, or cloud environment, depending on your disaster recovery and business continuity policies. This is a great extension of the Data Fabric message, as we can now utilise FAS and/or ONTAP Cloud and/or AltaVault and StorageGRID as backup targets for production environments running wherever you deem appropriate at that point in time.

NetApp Private Storage for Cloud: For customers that are after an OPEX model and see the previous NetApp Private Storage route as an inhibitor to this (due to the fact that they need to source everything themselves), this is where NPS-as-a-Service comes into its own. It gives customers the ability to approach a single source and acquire what they need to provide an NPS resource back to their company. A solution offering for NPS for Cloud is currently offered by Arrow ECS in the U.S. and is coming to Europe soon. This offering helps you create a mesh between storage systems and various clouds, giving you the ability to control where your data resides while providing the level of performance you want to the cloud compute of your choice.

ONTAP Cloud for Microsoft Azure: This is the second software-only data management IaaS offering for hyper-scalers being added to the NetApp portfolio. ONTAP Cloud gives customers the ability to apply all that lovely data management functionality that has drawn people to NetApp FAS for years layered on top of blob storage from your cloud provider. You get the great storage efficiencies and multi-protocol support with the ease of “drag and drop,” and you can manage replication to and from this software-defined storage appliance with the ability to encrypt the data whilst it resides in the cloud. This service has a variety of use cases, from providing software development or production with storage controls to utilizing it as a disaster recovery entity.

So if we look at an overview of the Data Fabric now, we can see the ability to move data around dependent on business requirements.

During his presentation at Insight 2016, George Kurian also said, "Every one of NetApp's competitors is constructing the next data silo, or prison, from which data cannot escape." Hopefully, by implementing the Data Fabric, NetApp customers can confidently build an IT business model which facilitates the flow of information within their organisation, so they can grow and adapt to meet their ever-changing IT needs.

The Data Fabric is the data management architecture for the next era of IT, and NetApp intend to lead that era. With this recent enhancement of the Data Fabric and NetApp’s portfolio, there is no more need to be shouting “Tech Support!” Instead, we can all be Monet and paint a beautiful Vanilla Sky.

Hindsight from Insight

NetApp Insight Las Vegas 2016 Roundup

I was lucky enough to get to go to Las Vegas with the NetApp A-Team and attend the NetApp Insight Americas and APAC conference. I have attended Insight EMEA many times, but this was my first time attending on US soil.

I would be remiss if I did not mention that both the Vegas and Berlin events have the same number of high-quality breakout sessions. As expected, the majority of the sessions offered in Vegas are re-offered in Berlin. The organisation of the conference is the same, with things like Insight Central consisting of NetApp partners and vendor showcases. From that standpoint, it felt like I could very well have been at the EMEA conference. There is also a high number of NetApp technical employees on hand to debate different deployment methodologies, which is a great reason in itself to attend.

However, Vegas did seem a lot more relaxed, and despite having over twice as many attendees, it somehow felt quieter due to the size of the conference centre. There's also a lot more going on in the evenings (even just within the Mandalay Bay hotel, never mind the rest of Vegas), with lots of opportunities for delegates to mingle and converse.

At this year’s conference, NetApp announced 16 new products! This is a huge amount for any company, and I think it just goes to show how NetApp are trying to stay at the leading edge of the storage industry. There were disk shelves and controllers announced, and if you would like to know more about the new controllers, see my previous post here. There was also an update to ONTAP Select as well as the arrival of ONTAP Cloud for Azure, all made possible by the release of ONTAP 9.1. There was a lot of messaging in both the general sessions and in the breakouts geared towards DevOps and this new way of deploying applications either on premises or in the cloud.

This year we also had the joy of SolidFire joining in, and with a raft of sessions available, the technology proved popular. The two-hour deep dive by Andy Roberts was the third-most-attended session of the conference, and the SolidFire hands-on lab was the third-most requested. They also announced the integration of SolidFire into FlexPod, which my A-Team colleague Melissa Wright (@vmiss33) coined the "DevOps workhorse." It is a perfect tag line, and one I am going to start using.

NetApp Insight also gives you the opportunity to take NetApp certification exams, so I thought I should try some. I passed two exams whilst there: the updated FlexPod design exam (NS0-170) and the new hybrid cloud exam (NS0-146), which gave me the NCSA accreditation. These came with some lovely luggage tags, courtesy of Liz Burns from NetApp University, to add to the certificates I already held. This is a great way to provide value back to your employer for attending, if you need a stronger reason to go. It's best to book your exam before you get there, as it can be very busy, and you may have to wait a while for a walk-in appointment.

A nice colourful collection

If you are new to SolidFire and want to understand how it's managed, the two-hour deep dive mentioned earlier is a great place to start. It's a great mix of slideware and demonstration of how to configure various key features of the Element OS. I would also recommend Val Bercovici's (@valb00) "Why DevOps will move to the 'lean' cloud" breakout. This session will help you understand the shift in application development and what you can do to try to keep pace and remain relevant.

NetApp now seem to be pivoting towards messaging that helps the developer and the DevOps team, providing products and tools that integrate into their style of working. Embedded in the picture below is the link to the scenario acted out on stage during the general session on the third day. I think it provides good insight into how the pace of application development is changing and the tools this new breed of developer is adopting and using, and it shows that NetApp is taking this methodology seriously (as evidenced by the fact that they have a site with a host of tools and scripts aimed purely at DevOps).

I would also recommend looking into the sessions on VMware's vVols functionality. They're a great primer on this area of VMware's evolving portfolio, and they also show how NetApp can utilise this ever-improving technology. Andy Banta (@andybanta, who wrote an insightful blog on the topic and appeared on Greybeards on Storage Ep. 36) and Josh ('the intern') Atwell (@Josh_Atwell) gave a joint session on how SolidFire differs from conventional storage arrays in its implementation and how best to utilise policy-based storage with SolidFire. Then there were Andreas Engel from NetApp and Pete Flecha (@vPedroArrow) from VMware, who provided a deploy, implement, and troubleshoot session that was almost as popular as Pete's session at VMworld. It illustrated some handy tips, tricks, and gotchas that a lot of the audience took with them as they headed to the hands-on labs to get up to speed with vVols. I would also keep an eye out for the Inform and Delight sessions, including a great one by Veeam on "Closing the Door on the Data Center Availability Gap." And let's not forget the "Dave and Dave show," which is a must-see attraction.

Also in Vegas this year, attending NetApp Insight for the first time, was vBrownBag. Their online presence has been helping IT professionals become more proficient with virtualisation for the past six years and is an essential port of call for anyone chasing a VCP or other certification, due to the wealth of knowledge on their site. They were there to expand their ever-increasing field of topics, and one of the presentations recorded was Sam Moulton (@SamMoulton), Champion of the NetApp A-Team (@NetAppATeam), with A-Team member Trey Davis (@ntap_seal), senior consultant from iVision in Atlanta, providing some insight into the NetApp A-Team and what we do. This short discussion (embedded within the picture) will hopefully help people understand the team better and where we fit within the ecosystem.

For more information on the A-Team’s presence in Las Vegas this year, check out the session called “Birds of a Feather: Walk the line with the A-Team” which is hopefully on the site for review. There will be a strong presence in Berlin, so come up and talk to us or send us a tweet.

One of the highlights during the opening of the third general session was the reel put together from the carpool Karaoke. I would urge you to have a look and a laugh.

This was a great conference with a phenomenal amount of superb content, too much to take on board in the four days, but I will enjoy reviewing it over the next few weeks. I am thankful to my employer for letting me attend, and I now feel invigorated and more confident to go out, have discussions, and point out why customers should be looking at NetApp for their cloud, hybrid, and on-premises storage needs. If you are heading to Berlin, I will hopefully see you there.