Expanding NetApp HCI

NetApp recently updated their HCI deployment software to v1.31. This version contains several new features to help with deploying a NetApp HCI environment. It’s been several months since I initially deployed our demo kit, and I felt it was time to revisit the process and see what has changed.

One welcome new feature is the removal of the reliance on a DHCP server covering both your 1GbE management and 10/25GbE data networks. Whilst DHCP is a nice idea to help you get up and running, and is easy to configure in the lab, running it within a production SAN is not exactly common practice. Previously, you had to either set one up or spend time configuring static addresses, which could be time-consuming, especially if you had half a dozen or so blades.

The other new feature that caught my eye was the ability to use the NetApp Deployment Engine (NDE) to expand a NetApp HCI environment. As mentioned in an earlier post and video (here), adding a SolidFire storage node to an existing cluster is quite easy (in fact, it was a design principle when they created Element OS), but adding an ESXi node is quite a labour-intensive task. It is great to see that you can now add these quickly through a wizard.

To start the expand process, simply point your browser to the following:

https://storage_node_management_ip:442/scale/welcome
where you are greeted by the following landing page:

As you can see, it wants you to log into your environment. You may notice NetApp have updated the password box so that the password can be revealed once typed, as indicated by the eye icon at the end of the field.

To test this new methodology, instead of buying more nodes (which would have been nice), I removed a single storage node and a single compute node from their respective clusters and factory reset them. This allowed me to test not only the addition of new nodes into existing clusters but also the removal of the DHCP or static IP addressing requirements before deployment.

Once logged in, the NDE scale process discovers all available nodes, and this is where you can select which of them you would like to add to your environment.

After agreeing to the VMware EULA, you are asked to provide your vCenter Server’s details and then to select the datacentre and cluster you wish to add the node to. These steps are only present if you are adding compute nodes.

After giving the compute node a root password, you are taken to the “Enter the IP and naming details” page.

Finally, NDE scale takes you on to a review screen, as these three screenshots (headings fully expanded for visibility) show.

Once reviewed, click the blue “Add Nodes” button. This kicks off the now familiar NDE process for setting up NetApp HCI, which can be tracked via a progress screen.

The scaling process for the addition of one compute and one storage node took just under half an hour to complete. But the real benefit is that this scaling wizard sets up the ESXi host, networking, and vSwitches as per NetApp HCI’s best practices whilst at the same time adding a storage node into the cluster. That isn’t the quickest thing to do manually, so having a process that does it for you speedily is a huge plus in NetApp’s favour, especially if you have multiple hosts. It’s clear to see the SolidFire team’s influence in this update, given the ease and speed with which customers can now expand their NetApp HCI environments with NDE scale. I look forward to the features in upcoming releases of NetApp HCI, and if hyperconverged infrastructure is all about speed and scale, then this update gives me both in spades.


VMC NetApp Storage

Last week at VMworld, NetApp announced a new partnership offering with VMware whereby VMware Cloud on AWS (VMC) can utilise NetApp Cloud Volumes Service. The offering is currently in tech preview, so let’s take a look at these two technologies and see how they can work together.

VMware Cloud on AWS

Firstly, let’s review the VMware cloud offering. The ability to run vSphere virtual machines on AWS hardware was announced at VMworld 2017 and was met with great approval. Having your on-premises and public cloud environments share the same capabilities, look, and feel was heralded as a lower entry point for customers who were struggling to utilise the public cloud. The VMware Cloud Foundation suite (vSphere, vCenter, vSAN, and NSX) running on AWS EC2 infrastructure is now available, but it is sold, delivered, and supported by VMware.

There are several advantages with this:

  • Seamless portability of workloads from on-premises datacentres to the cloud
  • Operational consistency between on-premises and the cloud
  • The ability to access other native AWS services, not to mention AWS data centres located around the globe
  • On-demand flexibility of being able to run in the cloud

With VMware running the suite themselves, rather than just telling customers how to deploy, set up, and run it, a customer could be ordering and utilising a new vSphere offering within an hour. With VMC, the customer has the choice of where to run their workload, with the flexibility to migrate it back and forth between their private data centre and AWS with ease.

Cloud Volumes Service

When NetApp moved into the cloud market several years ago, their first offering was the ability to run a fully functioning ONTAP virtual appliance on AWS (later also available on Azure). This offering, originally called Cloud ONTAP, then ONTAP Cloud, and more recently renamed Cloud Volumes ONTAP (CVO), is a cloud instance you spin up, set up, and manage like a physical box, with all the features you have come to love: storage efficiencies, FlexClone, SnapMirror, and multi-protocol access. It’s all baked in for a customer to turn on and use.

More recently, NetApp has launched Cloud Volumes Service (CVS). This service is sold, operated, and supported by NetApp, providing on-demand capacity and flexible consumption, with a mount point and the ability to take snapshots. It is available for AWS, Azure, and Google Cloud Platform. The idea behind Cloud Volumes Service is simple: you let NetApp manage the storage so you can concentrate on getting your product to market faster. Cloud Volumes Service gives you file-level access to the capacity you require, at a given service level, in seconds. It also offers fast cloning and cross-region replication if required, whilst providing always-on encryption at rest. That’s why over 300,000 people already use NetApp Cloud Volumes Service.

There are three available service levels: Standard, Premium, and Extreme, offering 16, 64, or 128 KB/s of bandwidth per GB of quota respectively (these are service levels, not guarantees).

(Example pricing as of 10 July 18) https://docs.netapp.com/us-en/cloud_volumes/aws/reference_selecting_service_level_and_quota.html

With the three different performance levels at varying capacities, you can mix and match to meet your requirements. For example, let’s say your application requires 12 TB of capacity and 800 MB/s of peak bandwidth. Although the Extreme service level can meet the demands of the application at the 12 TB mark, it is more cost-effective to select 13 TB at the Premium service level.
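To make that arithmetic concrete, here is a minimal sketch of the calculation in Python. The 16/64/128 KB/s per GB figures come from the service levels above; the helper function is my own illustration, not a NetApp tool:

```python
# A minimal sketch of the CVS service-level arithmetic described above.
# Bandwidth per GB of quota is taken from the text; this is illustrative
# only, not an official NetApp calculator.

SERVICE_LEVELS = {
    "Standard": 16,   # KB/s of bandwidth per GB of quota
    "Premium": 64,
    "Extreme": 128,
}

def peak_bandwidth_mbs(level: str, quota_tb: int) -> float:
    """Peak bandwidth in MB/s allocated for a quota at a given service level."""
    quota_gb = quota_tb * 1024
    return SERVICE_LEVELS[level] * quota_gb / 1024  # KB/s -> MB/s

# The example from the text: an application needing 12 TB and 800 MB/s peak.
print(peak_bandwidth_mbs("Premium", 12))   # 768.0 MB/s -- falls just short
print(peak_bandwidth_mbs("Premium", 13))   # 832.0 MB/s -- meets the target
print(peak_bandwidth_mbs("Extreme", 12))   # 1536.0 MB/s -- meets it, but costs more
```

So 12 TB at Premium delivers only 768 MB/s, while one extra TB at the same level clears the 800 MB/s requirement without paying Extreme prices.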


Partnership

Let’s take a look at the options that we now have:

  • NetApp Private Storage (NPS), where a customer owns, manages, and supports a FAS system in a datacentre connected to AWS via a dedicated Direct Connect
  • Cloud Volumes ONTAP, deployed as an instance from the AWS marketplace, managed by the customer and connected to the infrastructure via an elastic network interface (ENI)
  • Cloud Volumes Service, provided and managed by NetApp and connected to AWS via a shared Direct Connect

All three of these can be utilised to connect to VMC on AWS, with the guest connected using iSCSI, NFS, and/or SMB.

The use case available to everyone today is the guest OS accessing storage via iSCSI, SMB, and/or NFS using CVO. With no ingress or egress charges within the same availability zone, and the ability to use Cloud Volumes ONTAP’s data management capabilities, this is a very attractive offering for many customers. But what if you wanted to take that further than just the application layer? That is what was announced last week.

This announcement is a tech preview of datastore support via NFS with Cloud Volumes Service. This is a big move: up to this point, datastores were provided via VMware’s own technology, vSAN. By using CVS with VMC, you gain the ability to manage both the compute and the storage as if they were on premises, even though they live in the cloud.

As you can see, Cloud Volumes Service is supplying an NFS v3 mount to the VMC environment.

As this is an NFS mount from an ONTAP environment, you gain access to the snapshot directory with no extra configuration.
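As a quick illustration (a sketch only: the mount path below is a hypothetical example, though “.snapshot” is ONTAP’s default visible snapshot directory), any client with the export mounted can browse those point-in-time copies directly:

```python
# A sketch of browsing ONTAP snapshots from a client that has the CVS
# NFS export mounted. The mount point is a hypothetical example;
# ".snapshot" is ONTAP's default snapshot directory name.
import os

MOUNT_POINT = "/mnt/cvs_datastore"  # hypothetical: wherever the NFS export is mounted
snapshot_dir = os.path.join(MOUNT_POINT, ".snapshot")

# Each entry is a read-only, point-in-time view of the whole volume,
# so files can be copied straight back out of a snapshot if needed.
for snapshot in sorted(os.listdir(snapshot_dir)):
    print(snapshot)
```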

Moving forward, VMC will be able to access NetApp Private Storage to provide NFS datastores, allowing customers to keep ownership of their data whilst meeting any regulatory requirements. In the future, Cloud Volumes ONTAP will also be able to provide NFS datastores to a VMC environment. There are several major use cases for the cloud in general, and VMC with Cloud Volumes brings increased functionality to all of them, whether that be disaster recovery, cloud bursting, etc. The ability to provide NFS and SMB access, with independent storage scale backed by ONTAP, is a very strong message.

If you are considering VMC, this is a strong reason to look at Cloud Volumes to supply your datastores, whether to decouple your persistent storage requirements from your cloud consumption requirements or to exceed what vSAN can do.