Making time for Insight

With Insight US just a few weeks away and my session calendar for the event still incomplete, I thought this would be a good time to highlight the sessions that have stood out for me, in the hope that it may help you make your own decisions. With 307 sessions, 238 speakers, and 45 exhibitors, how do you distil all of that down into something manageable and meaningful?

Now whilst I normally spend more time on a particular track, it is worth asking yourself, “What am I going to take home from this conference?” Are you here just to absorb as much information as possible, or are you here to get skilled up on a particular topic, either for an upcoming project or to break into a new area of business? This is something you should decide before you go hell for leather and fill your calendar with random topics like FlexGroup (is that even a thing?).

On my first pass over the catalogue I had flagged 38 interests, which is way too many for even the most hardcore conference attendee, so some culling needed to be done. One thing that bothers me, and probably every conference attendee, is the time slot where you have 10 interests happening at the same time while the two hours prior are a big blank hole. Thankfully, vendors have started to record sessions at conferences for that very reason, so those you cannot make can always be reviewed at some other point; plus, some sessions just hit you with far too much to take in, and you may need to hear them a second time.

Cloud Volumes is probably going to be the hot topic this year, and with 50 sessions to choose from there’s plenty on offer. The first thing I would suggest is to verify whether it’s a Cloud Volumes ONTAP (formerly known as ONTAP Cloud) or a Cloud Volumes Service (NFSaaS) session you have picked, so you get the correct information. I’m sure a few people will get this wrong this year, and you don’t want to be one of them.

1228-2 Designing and Deploying a Hybrid Cloud with NetApp and VMware Cloud on AWS, presented by Chris Gebhardt and Glenn Sizemore, is sure to be a popular session, and it will hopefully build on the session NetApp gave at Tech Field Day at VMworld US last month.

1261-2 NetApp Cloud Volumes Service Technical Deep Dive, presented by Will Stowe, is probably going to be one of those sessions people leave and tell others they need to see. With its huge potential, Cloud Volumes Service will become integral to many customers’ data fabrics over the coming year, so I’d advise getting skilled up on it as soon as you can.

If you are new to all things cloud and wondering where might be a good place to start, then schedule the 4117-1 Cloud Volumes Service and 4118-1 Cloud Volumes ONTAP sessions in the Data Visionary theatre at Insight Central to get a good idea of these two technologies.

Another product name change: Cloud Control is out and has been rebranded NetApp SaaS Backup; but this SaaS suite offers a lot more, so armed with that knowledge, there is a session on one-stop backup for Salesforce (1121-2), and then you can head over to 1188-2, NetApp SaaS Backup for Office 365, to complete the picture.

With security being a major focus across the IT industry as a whole, there are several sessions of note on this subject. 1234-2 – Data Security at NetApp: An Overview of the NetApp Portfolio of Security Solutions by Juan Mojica would be an excellent place to start if you haven’t thought about how to begin such a huge undertaking.

You may want to follow that up with 1103-2 – Securing and Hardening NetApp ONTAP 9 with Andrae Middleton. Remember: security teams need to get policies and procedures right 100% of the time; hackers only need to get it right once.

1214-2 What’s On Tap in the Next Major Release of NetApp ONTAP by surviving podcast host Justin Parisi (it has been a bit Hunger Games/Highlander on the Tech ONTAP podcast recently) will no doubt fill up fast, as any new OS payload draws in the crowds, and the Q&A after that session may well spill out into the halls.

1136-3 will also be popular, as it covers the advancements made to SnapMirror, along with best practices for both the flash and cloud worlds.

It also looks like some of the sponsors have upped their game, with some excellent sessions. Veeam, for instance, have six sessions to choose from, which is great as they are now on the NetApp price book. 9107-2 – Veeam: Veeam Data Availability Deep Dive—Exploring Data Fabric Integrations, presented by Michael Cade and Adam Bergh, will highlight just some of the great reasons why Veeam have been added to the price book; afterwards, head over to the hands-on labs, as Veeam has made it into the Lab on Demand catalogue. Veeam also have a data exchange whiteboard session, 8102-1 – Veeam: Availability Outside the Datacenter: Public Cloud & Veeam Availability Suite 9.5 Update 4, which, as some of you keen-eyed people may have noticed, will include some information about the anticipated Update 4.

For those of you who like your speed turned up to eleven, you may want to attend 9126-1 – Intel® Optane™ Memory Solutions. I would be remiss if I didn’t mention my colleagues at Arrow, with their 9112-2 – Arrow: From IoT to IT, Arrow Electronics Is Accelerating Your Digital Transformation, looking at how to deliver and scale an IT infrastructure to meet the challenges of deploying IoT solutions.

There are also the certification prep sessions, and with NetApp U’s recent release of two new hybrid cloud certifications, sessions 1279-1 & 1280-1 will no doubt draw a crowd. If you are planning on having a go, make sure to get these in your diary, and I may bump into you there, as the hybrid cloud certification I achieved at Insight two years ago is up for renewal.

Now, whilst this list is my picks, I would suggest you spend a bit of time ahead of the event populating your calendar with the topics you want to hear, and do it sooner rather than later so you can get on the list before a session fills up. Remember, though, that pretty much all sessions are repeated during the conference, and do spend some time at Insight Central, as an hour there can be just as beneficial as a session; but most of all, enjoy yourself. I would strongly suggest you follow the A-Team members on Twitter for up-to-the-moment reviews of sessions and whether catching the second running is worth amending your calendar for. And before you start filling the comments section with “Duh – FlexGroup is a hugely scalable container of NAS storage that can grow to trillions of files and yottabytes of storage”, there is a session on the topic: 1255-2 FlexGroup: The Foundation of the Next Generation NetApp Scale-Out NAS.


VMC NetApp Storage

Last week at VMworld, NetApp announced a new partnership offering with VMware whereby VMware Cloud on AWS (VMC) will be able to utilise the NetApp Cloud Volumes Service. The offering is currently in tech preview, so let’s take a look at these two technologies and see how they can work together.

VMware Cloud on AWS

Firstly, let’s review the VMware cloud offering. The ability to run vSphere virtual machines on AWS hardware was announced at VMworld 2017 and was met with great approval. Having both your on-premises and public cloud environments with the same capabilities and the same look and feel was heralded as a lower entry point for those customers who were struggling to utilise the public cloud. The VMware Cloud Foundation suite (vSphere, vCenter, vSAN, and NSX) running on AWS EC2 infrastructure is now available, but it is sold, delivered, and supported by VMware.

There are several advantages with this:

  • Seamless portability of workloads from on-premises datacentres to the cloud
  • Operational consistency between on-premises and the cloud
  • The ability to access other native AWS services, not to mention the fact that AWS data centres are located around the globe
  • On-demand flexibility of being able to run in the cloud

With VMware running the suite themselves rather than informing customers how to deploy, set up, and run it, a customer could be ordering and utilising a new vSphere offering within an hour. With VMC, the customer has the choice of where to run their workload, with the flexibility to migrate it back and forth between their private data centre and AWS with ease.

Cloud Volumes Service

When NetApp moved into the cloud market several years ago, their first offering was the ability to run a fully functioning ONTAP virtual appliance on AWS (later available on Azure). This offering, originally called Cloud ONTAP, then ONTAP Cloud, and more recently renamed Cloud Volumes ONTAP (CVO), is a cloud instance you spin up, set up, and manage like a physical box, with all the features you have come to love on that physical box, whether that be storage efficiencies, FlexClone, SnapMirror, or multi-protocol access. It is all baked in for a customer to turn on and use.

More recently, NetApp has launched Cloud Volumes Service (CVS). This service is sold, operated, and supported by NetApp, providing on-demand capacity and flexible consumption, with a mount point and the ability to take snapshots. It is available for AWS, Azure, and the Google Cloud Platform. The idea behind Cloud Volumes Service is simple: you let NetApp manage the storage so you can concentrate on getting your product to market faster. Cloud Volumes Service gives you file-level access to the capacity you require at a given service level in seconds. It also comes with the ability to clone quickly and replicate cross-region if required, whilst providing always-on encryption at rest. That’s why over 300,000 people already use NetApp Cloud Volumes Service.
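To give a feel for how simple consumption is, here is a minimal sketch of mounting a CVS NFS export from a Linux client in the same VPC; the export IP and volume path are hypothetical placeholders for the values the Cloud Volumes UI gives you:

    # Mount a Cloud Volumes Service NFS export (IP and path are examples)
    sudo mkdir -p /mnt/cvsvol
    sudo mount -t nfs -o vers=3,rsize=65536,wsize=65536 172.16.0.4:/myvolume /mnt/cvsvol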

There are three available service levels: Standard, Premium, and Extreme, offering 16, 64, or 128 KB/s of throughput per allocated GB of quota respectively (these are target levels, not guarantees).

(Example pricing as of 10 July 18) https://docs.netapp.com/us-en/cloud_volumes/aws/reference_selecting_service_level_and_quota.html

With the three different performance levels at varying capacities, you can mix and match to meet your requirements. For example, let’s say your application requires 12 TB of capacity and 800 MB/s of peak bandwidth. Although the Extreme service level can meet the demands of the application at the 12 TB mark, it is more cost-effective to select 13 TB at the Premium service level.
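As a quick sanity check on that sizing (a sketch using the published per-GB throughput levels above, which are targets rather than guarantees): Premium delivers 64 KB/s per GB, which works out to 64 MB/s per TB, so 13 TB gives 13 × 64 = 832 MB/s, just clearing the 800 MB/s requirement. Extreme at 12 TB (128 MB/s per TB) gives 12 × 128 = 1,536 MB/s, far more than needed, at a higher per-GB price, which is why the larger Premium allocation wins on cost.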


Partnership

Let’s take a look at the options that we now have. We have NetApp Private Storage (NPS), where a customer owns, manages, and supports a FAS system in a datacentre connected to AWS via a dedicated Direct Connect. We have the ability to deploy an instance of Cloud Volumes ONTAP from the AWS Marketplace, which the customer manages and which connects to the infrastructure via an elastic network interface (ENI). Or we have the Cloud Volumes Service, provided and managed by NetApp and connected to AWS via a shared Direct Connect. All three of these can be utilised with VMC on AWS: the currently supported configurations have the guest connected using iSCSI, NFS, and/or SMB via Cloud Volumes Service, Cloud Volumes ONTAP, or NPS.

The use case currently available to all is one where the guest OS accesses storage via iSCSI, SMB, and/or NFS using CVO. With no ingress or egress charges within the same availability zone and the ability to use Cloud Volumes ONTAP’s data management capabilities, this is a very attractive offering for many customers. But what if you wanted to take that further than just the application layer? That is what was announced last week.

The announcement is a tech preview of datastore support via NFS with Cloud Volumes Service. This is a big move. Up to this point, datastores were provided via VMware’s own technology, vSAN. By using CVS with VMC, you gain the ability to manage both the compute and the storage as if they were on premises, even though they live in the cloud.

As you can see, Cloud Volumes Service is supplying an NFS v3 mount to the VMC environment.
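For illustration, on a self-managed vSphere host an NFS v3 datastore mount would look like the sketch below (host IP, export path, and datastore name are made up); note that host access in VMC is restricted, so there the attach is handled through the managed service and tech preview tooling rather than by you running esxcli:

    # Attach an NFS v3 export as a datastore on a standalone ESXi host (example values)
    esxcli storage nfs add --host 172.16.0.4 --share /vmc_datastore --volume-name CVS_Datastore
    esxcli storage nfs list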

As this is an NFS mount from an ONTAP environment, you gain access to the snapshot directory with no extra configuration.
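From any client with the export mounted, that looks something like the following (the mount point and snapshot names are illustrative):

    # List snapshots at the root of the volume and restore a single file from one
    ls /mnt/cvsvol/.snapshot
    cp /mnt/cvsvol/.snapshot/hourly.2018-09-04_1005/file.txt /mnt/cvsvol/file.txt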

Moving forward, VMC will be able to access NetApp Private Storage to provide NFS datastores, allowing customers to keep ownership of their data whilst also meeting any regulatory requirements. In the future, Cloud Volumes ONTAP will also be able to provide NFS datastores to a VMC environment. There are several major use cases for cloud in general, and VMC with Cloud Volumes adds functionality to all of them, whether that be disaster recovery, cloud bursting, or others. The ability to provide NFS and SMB access with independently scalable storage backed by ONTAP is a very strong message.

If you are considering VMC, this is a strong reason to look at Cloud Volumes to supply your datastores, letting you decouple your persistent storage requirements from your cloud compute consumption or exceed what vSAN can do.

Setting up FabricPool

Recently, I was lucky enough to get the chance to spend a bit of time configuring FabricPool on a NetApp AFF A300. FabricPool is a feature introduced with ONTAP 9.2 that gives you the ability to utilise an S3 bucket as an extension of an all-flash aggregate. It is categorised as a storage tier, but it also has some interesting features. You can add a storage bucket from either AWS’s S3 service or from NetApp’s StorageGRID Webscale (SGWS) content repository. An aggregate can only be connected to one bucket at a time, but one bucket can serve multiple aggregates. Just remember that once an aggregate is attached to an S3 bucket, it cannot be detached.

This functionality doesn’t just work across the whole of the aggregate; it is configured more granularly, drawing on the heritage of technologies like Flash Cache and Flash Pool. You assign each volume a policy governing how it utilises this new feature. A volume can have one of three policies: Snapshot-only (the default), which allows cold data to be tiered off the performance tier (flash) to the capacity tier (S3); None, where no data is tiered; or Backup, which transfers all the user data within a data protection volume to the bucket. Cold data is data in snapshot copies that has not existed in the active file system for more than 48 hours. A volume’s tiering policy can be changed at any time while it exists within a FabricPool aggregate, and you can assign a policy to a volume as it is being moved into a FabricPool aggregate (if you don’t want the default).

AFF systems come with a 10TB FabricPool license for using AWS S3. Additional capacity can be purchased as required and is applied across all nodes within the cluster. If you want to use SGWS, no license is required. With this release, there are also some limitations on which features and functionality you can use in conjunction with FabricPool: FlexArray, FlexGroup, MetroCluster, SnapLock, ONTAP Select, SyncMirror, SVM DR, Infinite Volumes, NDMP SMTape or dump backups, and the Auto Balance functionality are not supported.

FabricPool Setup

There is some pre-deployment work that needs to be done in AWS to enable FabricPool to tier to an AWS S3 bucket.

First, set up the S3 bucket.

Next, set up a user account that can connect to the bucket.

Make sure to save the credentials; otherwise, you will need to create another access key, as the secret key cannot be retrieved again.
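If you prefer the AWS CLI to the console, the equivalent steps are sketched below; the bucket and user names are made up, and in practice you may want a tighter IAM policy than the broad S3 one used here:

    # Create the bucket, an IAM user, and an access key for FabricPool (example names)
    aws s3 mb s3://fabricpool-demo --region us-east-1
    aws iam create-user --user-name fabricpool-user
    aws iam attach-user-policy --user-name fabricpool-user \
        --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
    aws iam create-access-key --user-name fabricpool-user   # note the secret key now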

Finally, make sure you have set up an intercluster LIF on a 10GbE port for the AFF to communicate with the cloud.
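If you would rather do this from the clustershell, an intercluster LIF can be created with something like the following (the node, port, and addresses are examples for your environment):

    # Create an intercluster LIF on a 10GbE port (example values)
    network interface create -vserver cluster1 -lif intercluster1 -role intercluster \
        -home-node cluster1-01 -home-port e0f -address 10.0.1.20 -netmask 255.255.255.0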

Now, it’s FabricPool time!

Install the NetApp License File (NLF) required to allow FabricPool to utilise AWS.
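From the CLI this is a one-liner (the license code here is obviously fictitious):

    # Install the FabricPool capacity license
    system license add -license-code AAAAAAAAAAAAAAAAAAAAAAAAAAAA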

Now you’ll do the actual configuration of FabricPool. This is done on the aggregate via the Storage Tiers submenu item in ONTAP 9.3 System Manager, as shown below. Click Add External Capacity Tier.

Next, you need to populate the fields relating to the S3 bucket with the access key ID, secret key, and bucket name from the setup above.
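For reference, the CLI equivalent of this wizard is roughly the following two commands, using the bucket and credentials created earlier (the object-store and aggregate names are examples):

    # Define the external capacity tier, then attach it to an aggregate
    storage aggregate object-store config create -object-store-name aws_store \
        -provider-type AWS_S3 -server s3.amazonaws.com -container-name fabricpool-demo \
        -access-key <access-key-id> -secret-password <secret-key>
    storage aggregate object-store attach -aggregate aggr1 -object-store-name aws_store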

Set up the volumes if required. As you can see, the default of Snapshot-only is active on the four volumes. You could (if you wanted) select an individual volume or a group of volumes and alter the policy in a single bulk operation via the dropdown button at the top of the volumes table.
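The same policy changes can be made from the CLI; a sketch, with the SVM and volume names invented:

    # Change the tiering policy on an existing volume in a FabricPool aggregate
    volume modify -vserver svm1 -volume vol1 -tiering-policy none
    # Or set a policy explicitly while moving a volume into a FabricPool aggregate
    volume move start -vserver svm1 -volume vol2 -destination-aggregate aggr1 \
        -tiering-policy snapshot-only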

Hit Save. If your routes to the outside world are configured correctly, then you are finished!

You will probably want to monitor the space savings and tiering, and you can see from this image that the external capacity tier is showing up under Add-on Features Enabled (as this is just after setup, the information is still populating).

There you have it! You have successfully added a capacity tier to an AFF system. If the aggregate is over 50% full (otherwise, why would you want to tier it off?), then after 48 hours of no activity on snapshot data, that data will start to filter out to the cloud. I have shown the steps here via the System Manager GUI, but it is also possible to complete this process via the CLI (as sketched above) and probably even via API calls, though I have yet to look into that.

One thing to note is that whilst this is a great way to get more out of an AFF investment, it is a tiering process, and your data should still be backed up, as the metadata stays on the performance tier (remember the 3-2-1 rule). So, when you are next proposing an AFF or an all-flash aggregate on an ONTAP 9.2 or later cluster, consider using this pretty neat feature to get even more capacity out of your storage system, or what I now like to call your data fabric platform.

25 hours at CLEUR

Last week I was lucky enough to get the chance to attend Cisco Live in Berlin. I have been to this venue before, but this was my first Cisco event, and I have to say I was impressed. Hosted at the Berlin Messe, it didn’t feel overly crowded; with over 12,000 people involved, the conference barely used up a third of the 26 halls available for events there. My reason for attending was a FlexPod round-table hosted jointly by people from NetApp and Cisco. I was in attendance as the voice of Arrow ECS Europe, and as the UK distributor involved in the most FlexPods, I thought it was important not only to give my feedback at this event but also to hear the messaging coming directly from the vendors and pass it back to our reseller partners in the UK and to Arrow ECS.


Sadly no AAA

I attended the conference in both a virtual and a physical capacity: virtually, by reviewing the content available from keynotes, and physically, on an Explorer pass. This basically got me into everything but the breakout sessions. Being immersed in the Cisco community was a refreshing experience and one I would recommend. Even without attending sessions, there is a huge amount of information to gather, not just from Cisco but also from some of their strategic partners, including but not limited to Veeam, F5, and Citrix.

At the round-table it was great to hear about the rate of growth from a FlexPod perspective. The partnership is just over five years old, and it’s already an over-$7 billion business and the number 1 integrated infrastructure. It was also great to see that they are not resting on their laurels, with a new CVD released that week covering how to deploy FlexPod Datacenter with Docker Datacenter for container management, and with more in the pipeline narrowing the gap between private and hybrid clouds. I would have to say that this is a partnership with plenty left in the tank.


Ready for the DevOps community

I swung by the NetApp stand afterwards and heard about another exciting FlexPod project, the All Flash 3D FlexPod; anyone who attended the UK partner academy last June might recall a presentation on an earlier version of this project. We often talk about FlexPod being more than the sum of its constituent parts, and this is one case where that statement truly shines. Used in anything from the medical profession to 4K content creation to geological applications, this is a true monster, and I doubt we have really scratched the surface of the areas where this solution could be applicable. I would suggest checking out here for more information.

It was also great to see that a MetroCluster FlexPod was running the event. By swinging by the NOC, you could see the statistics in real time, like the 20GB (yes, gigabytes) of internet traffic flowing around the campus, supported by 968 access points (they added 75 on the Tuesday night to improve the experience); yet with everything going on, the AFF8060 was never really taxed, as could be seen on the Grafana dashboard.


Monitoring one half of the NOC AFF FlexPod

What did hit me whilst wandering around the many halls was the vast plethora of Cisco products and how this company has evolved. I knew some of the different areas they cover, but there is so much more to them than the routing and switching the business was born out of. I talked to many interesting people covering varying business units, from IoT to Digital Finance Services to business transformation. If I had had more time, there is so much more I would have liked to have done: sat down for a few hours and run through a self-paced lab, attended one of the many promising-looking sessions in the DevOps area, or even given 20 minutes of my time to their charitable cause, Rise Against Hunger. One thing that struck me was that this is a company whose employees understand the company’s vision; it’s like an eight-person coxed crew perfectly in time with one another, lifting their stroke rate above 32, still creating perfect puddles and yet not breaking a sweat. The slogan for the event was Your Time Is Now, and I have to say that we are definitely in a Cisco era.

Painting a Vanilla Sky

Expanding the NetApp Hybrid Cloud

During the first general session at NetApp Insight 2016 in Las Vegas, George Kurian, CEO (and a fascinating person to listen to), stated that “NetApp are the fastest growing SAN vendor and are also the fastest growing all-flash array vendor.” This is superb news for any hardware company, but for NetApp it isn’t enough. He is currently leading the company’s transformation into one that serves you, the customer, in this new era of IT while addressing how you want to buy and consume it. NetApp are addressing this with the Data Fabric.

If you need a better understanding of the Data Fabric, I would strongly suggest you look at this great two-part post from @TechStringy (part 1 here and part 2 here).

Back in 2001, Cameron Crowe released a film starring Tom Cruise called “Vanilla Sky.” In it, the main protagonist suffers a series of unfortunate events and, rather than face up to them, decides to have himself put in stasis until those problems can be resolved. Well, if managing data within varying cloud scenarios was his problem, then the announcements made by NetApp earlier this week would mean he could be brought back and stop avoiding the issues. So let’s take a look at some of what was announced:

NetApp Cloud Sync: This is a service offering that moves and continuously syncs data between on-prem and S3 cloud storage. For those of you who attended this year’s Insight in Las Vegas, this was the intriguing demo given by Joe CaraDonna illustrating how NASA is interacting with the Mars rover Curiosity. Joe showed how information flows back to Earth via “JPL … the hub of mankind’s only intergalactic network,” all in an automated, validated, and predictably-secure manner, and how they can realise great value from that data. Cloud Sync not only allows you to move huge amounts of data quickly into the cloud, but it also gives you the ability to utilise the elastic compute of AWS, which is great if you are looking to carry out some CPU-intensive workloads like MapReduce. If you are interested in what you have read or seen so far, head over here, where you can take advantage of the 30-day free trial.

Data Fabric Solution for Cloud Backup (ONTAP to AltaVault to Cloud): For those of you who saw the presentation at Insight 2015, this is the backing up of FAS via AltaVault using SnapCenter. This interaction of portfolio items gives us the ability to provide end-to-end backups of NAS data while enabling single-file restores via the snapshot catalogue function. The service has a tonne of built-in policies to choose from; simply drag and drop items to get it configured. AltaVault also now has the ability to help with seeding your backup via an AWS Snowball device (or up to ten daisy-chained together as a single seeding target), so it has never been easier to get your data into, and manage it in, the cloud.

NetApp Cloud Control for Microsoft Office 365: This tool extends data protection, security, and compliance to your Office 365 environment to protect you from cyber-attacks and breaches in the cloud. It allows you to back up your Exchange, SharePoint, and OneDrive for Business data and vault a copy to another location, which could be an on-prem, nearby, or cloud environment, depending on your disaster recovery and business continuity policies. This is a great extension of the Data Fabric message, as we can now utilise FAS, ONTAP Cloud, AltaVault, and/or StorageGRID as backup targets for production environments running wherever you deem appropriate at that point in time.

NetApp Private Storage for Cloud: For customers that are after an OPEX model and see the previous NetApp Private Storage route as an inhibitor to this (due to the fact that they need to source everything themselves), this is where NPS-as-a-Service comes into its own. It gives customers the ability to approach a single source and acquire what they need to provide an NPS resource back to their company. A solution offering for NPS for Cloud is currently offered by Arrow ECS in the U.S. and is coming to Europe soon. This offering helps you create a mesh between storage systems and various clouds, giving you the ability to control where your data resides while providing the level of performance you want to the cloud compute of your choice.

ONTAP Cloud for Microsoft Azure: This is the second software-only data management IaaS offering for hyper-scalers being added to the NetApp portfolio. ONTAP Cloud gives customers the ability to apply all that lovely data management functionality that has drawn people to NetApp FAS for years layered on top of blob storage from your cloud provider. You get the great storage efficiencies and multi-protocol support with the ease of “drag and drop,” and you can manage replication to and from this software-defined storage appliance with the ability to encrypt the data whilst it resides in the cloud. This service has a variety of use cases, from providing software development or production with storage controls to utilizing it as a disaster recovery entity.

So if we look at an overview of the Data Fabric now, we can see the ability to move data around dependent on business requirements.

During his presentation at Insight 2016, George Kurian also said, “Every one of NetApp’s competitors is constructing the next data silo, or prison, from which data cannot escape.” Hopefully, by implementing the Data Fabric, NetApp customers can confidently build an IT business model which facilitates the flow of information within their organisation so that they can grow and adapt to meet their ever-changing IT needs.

The Data Fabric is the data management architecture for the next era of IT, and NetApp intend to lead that era. With this recent enhancement of the Data Fabric and NetApp’s portfolio, there is no more need to be shouting “Tech Support!” Instead, we can all be Monet and paint a beautiful Vanilla Sky.

Hindsight from Insight

NetApp Insight Las Vegas 2016 Roundup

I was lucky enough to get to go to Las Vegas with the NetApp A-Team and attend the NetApp Insight Americas and APAC conference. I have attended Insight EMEA many times, but this was my first time attending it on US soil.

I would be remiss if I did not mention that both the Vegas and Berlin events have the same number of high-quality breakout sessions. As expected, the majority of the sessions offered in Vegas are re-offered in Berlin. The organisation of the conference is the same, with things like Insight Central consisting of NetApp partners and vendor showcases. From that standpoint, it felt like I could very well have been at the EMEA conference. There is also a high number of NetApp technical employees on hand to debate different deployment methodologies, which is a great reason in itself to attend.

However, Vegas did seem a lot more relaxed, and although there were over twice as many attendees, it somehow felt quieter due to the size of the conference centre. There is also a lot more going on in the evenings (even just within the Mandalay Bay hotel, never mind the rest of Vegas), with lots of opportunities for delegates to mingle and converse.

At this year’s conference, NetApp announced 16 new products! This is a huge amount for any company, and I think it just goes to show how NetApp are trying to stay at the leading edge of the storage industry. There were disk shelves and controllers announced, and if you would like to know more about the new controllers, see my previous post here. There was also an update to ONTAP Select, as well as the arrival of ONTAP Cloud for Azure, all made possible by the release of ONTAP 9.1. There was a lot of messaging in both the general sessions and the breakouts geared towards DevOps and this new way of deploying applications, either on premises or in the cloud.

This year we also had the joy of SolidFire joining in, and with a raft of sessions available, the technology proved popular. The two-hour deep dive by Andy Roberts was the third-most-attended session of the conference, and the SolidFire hands-on lab was the third-most requested. They also announced the integration of SolidFire into FlexPod, which my A-Team colleague Melissa Wright (@vmiss33) coined the “DevOps workhorse.” It is a perfect tag line, and one I am going to start using.

NetApp Insight also gives you the opportunity to take NetApp certification exams, so I thought I should try some. I passed two whilst there: the updated FlexPod design exam (NS0-170) and the new hybrid cloud exam (NS0-146), which gives me the NCSA accreditation. These came with some lovely luggage tags, courtesy of Liz Burns from NetApp University, to add to the certificates I already held. This is a great way to provide value back to your employer for attending, if you need a stronger reason to go. It’s best to book your exam before you get there, as it can be very busy and you may have to wait around for a while for a walk-in appointment.

A nice colourful collection

If you are new to SolidFire and want to understand how it’s managed, the two-hour deep dive mentioned earlier is a great place to start. It’s a great mix of slideware and demonstrations of how to configure various key features of the Element OS. I would also recommend Val Bercovici’s (@valb00) “Why DevOps will move to the ‘lean’ cloud” breakout. This session will help you understand the shift in application development and what you can do to keep pace and remain relevant.

NetApp now seem to be pivoting towards messaging that helps the developer and the DevOps team, providing products and tools that integrate into their style of working. Embedded in the picture below is the link to the scenario acted out on stage during the general session on the third day. I think it provides good insight into how the pace of application development is changing, the tools this new breed of developer is adopting, and how seriously NetApp is taking this methodology (as evidenced by the fact that they have a site with a host of tools and scripts aimed purely at DevOps).

I would also recommend looking into the sessions on VMware’s vVols functionality. They’re a great primer on this area of VMware’s evolving portfolio, and they also show how NetApp can utilise this ever-improving technology. Andy Banta (@andybanta, who wrote an insightful blog on the topic and appeared on Greybeards on Storage Ep. 36) and Josh (‘the intern’) Atwell (@Josh_Atwell) gave a joint session on how SolidFire differs from conventional storage arrays in its implementation and how best to utilise policy-based storage with SolidFire. Then there were Andreas Engel from NetApp and Pete Flecha (@vPedroArrow) from VMware, who provided a deploy, implement, and troubleshoot session that was almost as popular as Pete’s session at VMworld. It illustrated some handy tips, tricks, and gotchas that much of the audience took with them as they headed to the Hands-On Labs to get up to speed with vVols. I would also keep an eye out for the Inform and Delight sessions, including a great one by Veeam on “Closing the Door on the Data Center Availability Gap.” And let’s not forget the “Dave and Dave show,” which is a must-see attraction.

Also in Vegas this year, attending NetApp Insight for the first time, was vBrownBag. Their online presence has been helping IT professionals become more proficient with virtualisation for the past six years, and the wealth of knowledge on their site makes them a must-visit port of call for anyone chasing a VCP or other certification. They were there to expand their ever-increasing field of topics, and one of the presentations recorded featured Sam Moulton (@SamMoulton), Champion of the NetApp A-Team (@NetAppATeam), with A-Team member Trey Davis (@ntap_seal), Senior Consultant from iVision in Atlanta, providing some insight into the NetApp A-Team and what we do. This short discussion (embedded within the picture) will hopefully help people understand the team better and where we fit within the ecosystem.

For more information on the A-Team’s presence in Las Vegas this year, check out the session called “Birds of a Feather: Walk the Line with the A-Team,” which is hopefully on the site for review. There will be a strong presence in Berlin too, so come up and talk to us or send us a tweet.

One of the highlights of the opening of the third general session was the reel put together from the carpool karaoke. I would urge you to have a look and a laugh.

This was a great conference with a phenomenal amount of superb content, too much to take on board in the four days, but I will enjoy reviewing it over the next few weeks. I am thankful to my employer for letting me attend, and I now feel invigorated and more confident to go out, have discussions, and point out why customers should be looking at NetApp for their cloud, hybrid, and on-premises storage needs. If you are heading to Berlin, then I will hopefully see you there.


ONTAP 9: A new flavour with plenty of features

 

Name change

NetApp recently announced the upcoming release of the latest version of their flagship operating system for the FAS and AFF product lines. ONTAP 9, as you can glean from the name, is the ninth iteration of this OS, which, like a fine wine, keeps getting better with age. Some of you will also have noticed the simplification of the name: no more “clustered” or “data,” just simply ONTAP. The reality is that clustering is now the standard way to deploy controllers which store data, so it’s not really necessary to repeat it in the name, a bit like Ikea telling you the things you can put inside a Kullen (or a Hemnes or Trysil, which are all improvements over the Hurdal). But the most important thing about this change is the numeral at the end: 9. This is the next major release of the operating system, providing all the features that were available in 7-Mode but also so much more.

So now that we have got that out of the way, let’s see what else has changed…

New features

Let’s take a quick look at some of the new features, so grab a pen (or, for the millennials, your phone camera):

  • Firstly, I should mention that you can now get ONTAP in three different varieties depending on your use case: the appliance-based version, ONTAP; the hyperscaler version, ONTAP Cloud; and the software-only version, ONTAP Select. This should allow for management of data wherever it exists.
  • SnapLock – Yes, the feature everybody wanted to know the whereabouts of when comparing cDOT with Data ONTAP 7-Mode, yet fewer than 5% of systems worldwide used (according to ASUP), is back: WORM functionality to meet retention and compliance requirements.
  • Compaction – A storage efficiency technology that, when combined with NetApp’s inline deduplication and compression, allows you to fit even more into each storage block. More on this technology in a later post.
  • MetroCluster – The ability to scale out to up to 8 nodes, with 1, 2, or 4 nodes per site as supported configurations. NetApp have also added the ability to have non-mirrored aggregates on a MetroCluster.
  • Onboard Key Manager – Removes the need for an off-box key manager system when encrypting data.
  • Windows Workgroups – Another feature making a return is the ability to set up a CIFS/SMB workgroup, so we no longer need an Active Directory infrastructure to carry out simple file sharing.
  • RAID-TEC – Triple Erasure Coding, expanding on the protection provided by RAID-DP by adding triple parity support to our RAID groups (see the example command after this list). This technology is going to be crucial as we expand to SATA drives in excess of 8TB and SSDs beyond 16TB.
  • 15TB SSD support – Yes, you read that right: NetApp are one of, if not the, first major storage vendor to bring 15.3TB SSDs to market. We can utilise these with an AFF8080, giving you 1PB of guaranteed effective capacity in a 2U disk shelf! To continue that train of thought, we could scale out to 367TB of effective AFF capacity within a single cluster. This will radically change the way people think about and design the datacentres of the future. By shrinking the required hardware footprint, we in turn reduce the power and cooling requirements, lowering the overall OPEX for the datacentres of the future; this will lead to a hugely reduced timeframe for return on investment, which in turn will drive adoption.
  • AFF deployments – With ONTAP 9, NetApp are introducing the ability to rapidly deploy applications onto the storage within 10 minutes via one simple input screen, with the wizard following all the best practices for the selected application.
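As a taster of how simple RAID-TEC is to consume, creating a new aggregate with triple parity is a single command; a sketch, with the aggregate name and disk count invented for illustration:

    # Create an aggregate using RAID-TEC (triple parity)
    storage aggregate create -aggregate aggr_sata1 -diskcount 20 -raidtype raid_tec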

Upgrade concerns

One of the worries people previously had with regards to NetApp FAS systems was how to upgrade to a new version of the OS for your environment, especially if you had systems at both primary and DR.

Version-independent SnapMirror, which arrived with 8.3, is great if you have a complex system of bidirectional, water-falling relationships, as prior to this an upgrade needed an A1-sized PERT chart to plan the event. Now that NetApp allow for an automated rolling upgrade around a cluster, it’s the same set of steps whether you have 2 or 24 controllers, which should be welcome news for those customers who have gone for a scale-out approach to tackling their storage requirements (and I salute you on your choice). Today you can undertake a complete cluster upgrade with three commands, which is such a slick process; heck, you can even call the API from within a PowerShell script.
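For reference, the three-command flow looks like this from the clustershell (the web server URL and target version are examples):

    # Stage the image, run the pre-checks, then perform the rolling update
    cluster image package get -url http://webserver/ONTAP_9.0_image.tgz
    cluster image validate -version 9.0
    cluster image update -version 9.0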

How does it look?

If you look below, I have a few screenshots showing some of the new interface, including the new performance statistics that OnCommand System Manager can now display.

Notice the new menu along the top. This helps to make moving around a lot easier.

Here we can see some of the performance figures for a cluster. As this is a sim, I didn’t really drive much IO at it, but this will be very useful in production, giving you insight into how your cluster is performing at 15-second intervals.

Another nice feature of the latest release is the search ability, which I think will come into its own in larger multi-protocol installations of several PB, helping you home in on the resource you are after more quickly.

First impressions

For this article I am using a version in a lab environment, and from its slick new graphical interface (see above) to the huge leaps made under the covers, this OS keeps getting stronger. The GUI is fast to load even on a sim, the wizards are methodical, and the layout is intuitive; once you start using it and have to jump back onto an 8.x version, as I did, you will appreciate the subtle differences and refinements that have gone into ONTAP 9.

Overall takeaways

With the advent of ONTAP 9, NetApp have also announced a six-month cadence for future releases, making it easier to plan for upgrades and improvements, which is good news for those shops who like to stay at the forefront of technology. The inclusion of the features above and the advancements made under the covers should illustrate that NetApp is not a company that rests on its laurels, but one that strives for innovation. The ability to keep adding more and more features whilst making the product simpler to manage, monitor, and understand is a remarkable trait, and with this new major software release we get a great understanding of what the company hopes to achieve in the coming years.

This is also an exciting upgrade for the Data Fabric. As mentioned above, ONTAP 9 is now available in three separate variants: ONTAP, engineered for FAS and AFF; ONTAP Select, for software-defined storage currently running on top of vSphere or KVM; and ONTAP Cloud, running in AWS and soon Azure. Businesses can now take even greater control of their data as they move to a bimodal method of IT deployment. As more and more people move to a hybrid multi-cloud model, we will see them adopting these three options in varying amounts to provide the data management and functionality they require. As companies mix all three variations, their storage strategy gets what I like to call the Neapolitan Effect (probably the best of all ice-cream flavours), delivering the very best data storage and management wherever needed, thanks to the ability of ONTAP to run simply anywhere.

So go out and download a copy today!