Expanding NetApp HCI

NetApp recently updated the version of their HCI deployment software to v1.31. This version contained several new features to help in deploying a NetApp HCI environment. It’s been several months since I initially deployed our demo kit, and I felt it was time to revisit this process and see what has changed.

One welcome new feature is the removal of the reliance on having a DHCP server that covers both your 1GbE management and 10/25GbE data networks. Whilst this is a nice idea to help you get up and running and is something easy to configure in the lab, having DHCP running within a production SAN is not exactly common practice. Previously, you either had to set one up or spend time configuring static addresses beforehand, which could be time-consuming, especially if you had half a dozen or so blades.

The other new feature that caught my eye was the ability to use the NetApp Deployment Engine (NDE) to expand a NetApp HCI environment. As mentioned in an earlier post and video (here), adding a SolidFire storage node to an existing cluster is quite easy (in fact, it was a design principle when they created Element OS), but adding an ESXi node is quite a labour-intensive task. It is great to see that you can now add these quickly through a wizard.

To start the expand process, simply point your browser to the following:

https://storage_node_management_ip:442/scale/welcome
where you are greeted by the following landing page:

As you can see, it wants you to log into your environment. You may also notice that NetApp have updated the text box so the password can be shown once typed, via the eye icon at the end of the field.

To test this new methodology, instead of buying more nodes (which would have been nice), I removed a single storage node and a single compute node from their respective clusters and factory reset them. This allowed me to test not only the addition of new nodes into existing clusters but also the removal of the requirement for DHCP or pre-staged static IP addressing before deployment.

Once logged in, the NDE scale process discovers any and all available nodes, and this is where you select which of them you would like to add to your environment.

After agreeing to the VMware EULA, you are asked to provide the vCenter's details and then to select the datacentre and cluster you wish to add the node to. These steps are only present if you are adding compute nodes.

After giving the compute node a root password, you are taken to the “Enter the IP and naming details” page.

Finally, NDE scale takes you on to a review screen as these three screenshots (headings fully expanded for visibility) show.

Once reviewed, click the blue “Add Nodes” button. This initialises the now familiar NDE process of setting up NetApp HCI that can be tracked via a progress screen.

The scaling process for the addition of one compute and one storage node took just under half an hour to complete. But the real benefit is the fact that this scaling wizard can set up the ESXi host, plus networking and vSwitches, as per NetApp HCI's best practices whilst at the same time adding a storage node into the cluster. That isn't the quickest thing to do manually, so having a process that does this for you speedily is a huge plus in NetApp's favour, especially if you have multiple hosts. It's clear to see the influence that the SolidFire team had in this update, with the ease and speed with which customers can now expand their NetApp HCI environments using NDE scale. I look forward to the features that will be included in upcoming releases of NetApp HCI, and if hyperconverged infrastructure is all about speed and scale, then this update gives me both in spades.


VMC NetApp Storage

Last week at VMworld, NetApp announced a new partnership offering with VMware whereby VMware Cloud on AWS (VMC) would be able to utilise NetApp Cloud Volumes Service. Currently in tech preview, let’s take a look at these two technologies and see how they can work together.

VMware Cloud on AWS

Firstly, let’s review the VMware cloud offering. The ability to run vSphere virtual machines on AWS hardware was announced at VMworld 2017 and was met with great approval. Having both your on-premises and public cloud environments with the same capabilities and the same look and feel was heralded as a lower barrier to entry for customers who were struggling to make use of the public cloud. The VMware Cloud Foundation suite (vSphere, vCenter, vSAN, and NSX) running on AWS EC2 infrastructure is now available, but it is sold, delivered, and supported by VMware.

There are several advantages with this:

  • Seamless portability of workloads from on-premises datacentres to the cloud
  • Operational consistency between on-premises and the cloud
  • The ability to access other native AWS services, not to mention AWS’s global data centre footprint
  • On-demand flexibility of being able to run in the cloud

With VMware running the suite themselves rather than informing customers how to deploy, set up, and run it, a customer could be ordering and utilising a new vSphere offering within an hour. With VMC, the customer has the choice of where to run their workload, with the flexibility to migrate it back and forth between their private data centre and AWS with ease.

Cloud Volumes Service

When NetApp moved into the cloud market several years ago, their first offering was the ability to run a fully functioning ONTAP virtual appliance on AWS (later available on Azure). This offering, originally called Cloud ONTAP, then ONTAP Cloud, and more recently renamed Cloud Volumes ONTAP (CVO), is a cloud instance you spin up, set up, and manage like a physical box, with all the features you have come to love on that physical box, whether that be storage efficiencies, FlexClone, SnapMirror, or multi-protocol access. It is all baked in there for a customer to turn on and use.

More recently, NetApp has launched Cloud Volumes Service (CVS). This service is sold, operated, and supported by NetApp, providing on-demand capacity and flexible consumption, with a mount point and the ability to take snapshots. It is available for AWS, Azure, and the Google Cloud Platform. The idea behind Cloud Volumes Service is simple: you let NetApp manage the storage so you can concentrate on getting your product to market faster. Cloud Volumes Service gives you file-level access to the capacity you require, at a given service level, in seconds. It also comes with the ability to clone quickly and replicate cross-region if required, whilst providing always-on encryption at rest. That’s why over 300,000 people already use NetApp Cloud Volumes Service.

There are three available service levels: Standard, Premium, and Extreme, offering 16, 64, or 128 KB/s of throughput per GB of allocated quota respectively (these are service levels, not guarantees).

(Example pricing as of 10 July 18) https://docs.netapp.com/us-en/cloud_volumes/aws/reference_selecting_service_level_and_quota.html

With the three different performance levels at varying capacities, you can mix and match to meet your requirements. For example, let’s say your application requires 12 TB of capacity and 800 MB/s of peak bandwidth. Although the Extreme service level can meet the demands of the application at the 12 TB mark, it is more cost-effective to select 13 TB at the Premium service level.
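To make that maths easy to repeat, here is a quick back-of-the-envelope helper in Python. It assumes the per-GB throughput figures quoted above (16, 64, and 128 KB/s per GB of allocated quota) and treats them purely as illustrative service levels rather than guarantees; check NetApp’s current documentation for the real numbers and pricing.

```python
import math

# Throughput per GB of allocated quota (KB/s), as quoted in the text above.
# Illustrative levels only, not guarantees.
SERVICE_LEVELS = {
    "Standard": 16,
    "Premium": 64,
    "Extreme": 128,
}

def bandwidth_mbps(level: str, quota_gb: int) -> float:
    """Approximate peak bandwidth (MB/s) for a service level and quota."""
    return SERVICE_LEVELS[level] * quota_gb / 1000.0

def quota_needed_gb(level: str, required_mbps: float) -> int:
    """Smallest quota (GB) at this level that meets a bandwidth target."""
    return math.ceil(required_mbps * 1000.0 / SERVICE_LEVELS[level])

# The worked example from the text: 12 TB of capacity, 800 MB/s peak bandwidth.
print(bandwidth_mbps("Premium", 12_000))   # 768.0 MB/s  -> falls just short
print(bandwidth_mbps("Premium", 13_000))   # 832.0 MB/s  -> meets the target
print(bandwidth_mbps("Extreme", 12_000))   # 1536.0 MB/s -> meets it, at a higher cost
print(quota_needed_gb("Premium", 800))     # 12500 GB, so 13 TB once rounded up
```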


Partnership

Let’s take a look at the options that we now have. We have NetApp Private Storage (NPS), where a customer owns, manages, and supports a FAS system in a datacentre connected to AWS via a dedicated Direct Connect. We have the ability to deploy an instance of Cloud Volumes ONTAP from the AWS marketplace which the customer manages and connects to the infrastructure via an elastic network interface (ENI). Or we have the Cloud Volumes Service provided and managed by NetApp, connected to AWS via a shared Direct Connect. All three of these can be utilised to connect to VMC on AWS. These currently supported configurations have the guest connected using iSCSI, NFS, and/or SMB via Cloud Volumes Service, Cloud Volumes ONTAP, and NPS.

The use case currently available to all is where the guest OS accesses storage via iSCSI, SMB, and/or NFS using CVO. With no ingress or egress charges within the same availability zone and the ability to use the Cloud Volumes ONTAP data management capabilities, this is a very attractive offering to many customers. But what if you wanted to take that further than just the application layer? This is what was announced last week.

This announcement is for a tech preview of datastore support via NFS with Cloud Volumes Service. This is a big move. Up to this point, datastores were provided via VMware’s own technology, vSAN. By using CVS with VMC, you gain the ability to manage both the compute and the storage as if they were on premises, even though they exist in the cloud.

As you can see, Cloud Volumes Service is supplying an NFS v3 mount to the VMC environment.

As this is an NFS mount from an ONTAP environment with no extra configuration, you can gain access to the snapshot directory.
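Because ONTAP exposes its snapshot copies through the hidden .snapshot directory at the root of an NFS export, anything that can see the mount can browse previous versions of a file. The snippet below is a minimal sketch of that idea; the mount point and file name are made-up placeholders.

```python
import os

MOUNT = "/mnt/cvs_datastore"       # hypothetical NFS v3 mount from Cloud Volumes Service
TARGET = "big-vm/big-vm.vmdk"      # hypothetical file we want an older copy of

# Each entry under .snapshot is a point-in-time, read-only view of the volume.
snapdir = os.path.join(MOUNT, ".snapshot")
for snap in sorted(os.listdir(snapdir)):
    candidate = os.path.join(snapdir, snap, TARGET)
    status = "found" if os.path.exists(candidate) else "not present"
    print(f"{snap}: {status}")
```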

Moving forward, VMC will be able to access NetApp Private Storage to provide NFS datastores, allowing customers to keep ownership of their data whilst also allowing them to meet any regulatory requirements. In the future, Cloud Volumes ONTAP will be able to provide NFS datastores to a VMC environment. There are several major use cases for cloud in general, and VMC with Cloud Volumes provides increased functionality to all these areas, whether that be disaster recovery, cloud burst, etc. The ability to provide NFS and SMB access with independent storage scale backed by ONTAP is a very strong message.

If you are considering VMC, this is a strong reason to look at Cloud Volumes to supply your datastores, decouple your persistent storage requirements from your cloud consumption requirements, or exceed what vSAN can do.

Gain Some IQ on AI

Today (1/8/18) NetApp announced a new partnership with NVIDIA and launched the NetApp ONTAP AI Proven Architecture. This strengthens their already growing foothold in this new and exciting branch of the IT industry, and after what was announced today, ONTAP AI is surely going to have everyone talking. This meet-in-the-channel play gives data scientists a proven architecture to use in their deep learning data pipeline, avoiding design guesswork and allowing for fast, efficient deployments of AI environments.

Machine learning (ML) and artificial intelligence (AI) place some unique demands on IT. Firstly, they both demand huge amounts of information, a capacity requirement that is constantly growing. Secondly, they require that storage to respond with ultra-low latency. Unlike big data, you need to keep all the data generated rather than burning the hay to find the needle, so expandability over time is a must. And finally, the type of computation they undertake is better suited to a GPU than a CPU.

Now, whether you would class this as a “modernise your infrastructure” play or a next generation data centre play, one thing is certain: this is cutting-edge equipment. For example, a single NVIDIA DGX-1 is equivalent to replacing 400 traditional servers, and if you look at Gartner’s top 10 picks for 2018 and beyond, the majority have an aspect of AI/ML to them, so it’s only natural that we are seeing IT vendors moving into this space.

NetApp are announcing the ability to combine an AFF A800, their flagship all-flash array, with five NVIDIA DGX-1 systems with Tesla V100s, tied together over 100GbE with a pair of Cisco Nexus 3232Cs, which equates to 5,000 TFLOPS of compute.
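For a rough idea of where that 5,000 TFLOPS figure comes from, the sum below assumes NVIDIA’s commonly quoted 125 TFLOPS of tensor (deep learning) performance per Tesla V100 and eight V100s per DGX-1; it’s a sanity check rather than an official sizing.

```python
# Back-of-the-envelope check on the headline compute figure.
TFLOPS_PER_V100 = 125   # quoted tensor (deep learning) performance per Tesla V100
GPUS_PER_DGX1 = 8       # V100s in a single DGX-1
DGX1_COUNT = 5          # DGX-1 systems in the base ONTAP AI configuration

print(TFLOPS_PER_V100 * GPUS_PER_DGX1 * DGX1_COUNT, "TFLOPS")   # 5000 TFLOPS
```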

Whilst the messaging around this offering highlights it as a future-proof play, you don’t need to buy everything in one go; you can instead build upon NetApp’s key messages of flexibility and scaling. If you were to plan ahead or really did need to start big, there is no reason you could not have twelve high-availability pairs with sixty (60x) DGX-1 systems and close to 75PB of capacity. There is also no reason you couldn’t implement a data pipeline with an A700s, or even an A300 or A220; it all depends on what performance and scalability you require. Tie this together with edge devices running ONTAP Select for data ingest, plus the ability to use Cloud Volumes ONTAP in AWS or Azure, or possibly FabricPool for an archival tier, and you can truly see why integrating the Data Fabric into this story is such a nice fit. Just imagine adding MAX Data into the mix: it would be like strapping two F9 first-stage boosters onto this already Full Thrust rocket.

Now, you may be thinking this is a niche, supercomputer corner case, but in reality it is being utilised in pretty much every industry vertical, affecting almost every aspect of our daily lives. Finance, health, automotive, retail, agriculture, oil and gas, and even the legal industry, to name a few, are already seeing a surge in software and companies dedicated to this way of doing business. We have the horror stories of Facebook, and no doubt you have invested in one of the big three home automation voice assistants featuring Alexa, Siri, or Assistant. Maybe you have travelled using Uber or Tesla’s Autopilot, or even Waze on your phone. Maybe you have a hobby like flying drones from DJI or utilise 3DR’s software, or you can’t work out without your Fitbit or Fenix. The point is that you are providing data back to some central point, where it is analysed to help the company make better decisions about what to bring to market as a next generation product or where to improve something already in the field. Whilst the luddites worry that AI will lead to Skynet and the doom of humanity, it is probably better to think of it as an advancement in human intelligence and another milestone down the path of evolution, and I look forward to seeing how this architecture develops.

NetApp – Not Just Surviving but Thriving

When you’re a company that has been around for over 25 years, some people might look at you like a dinosaur, slowly plodding along as the end of the world as you know it approaches. A lot of press has been made in recent years that puts NetApp in this light. Some have said that NetApp has just been plodding along, not in touch with the industry or its customers’ needs.

Yet in the last 6 months, this “dinosaur” has started to show its teeth. The stock price has gone from $37.43 in September to a high of $71.41, and with the announcements made yesterday, you can expect that to go higher.

With the newly announced AFF A800, NetApp is now able to provide sub 200 µs latency for workloads that have the most demanding data needs. That’s an order of magnitude better than previous generations!

Not only is the AFF A800 blazingly fast, it can handle huge amounts of traffic, with 25GB/s of throughput on an HA pair and the ability to run NVMe end to end from the server to the storage via NVMe over FC. If using 32Gb or 16Gb FC isn’t a requirement, you can use Ethernet speeds of 100GbE, another industry first from NetApp. With 12 pairs clustered together, you are talking 300GB/s of throughput in a single management domain. That should meet the most demanding of environments.

With a current run rate of $2.0B for their all-flash business, having already shipped over 20PB of NVMe, and with 44% year-on-year growth in petabytes shipped, NetApp’s flash business is not only going to increase in size in the future, but with numbers like these it will survive any extinction event.

But the announcements made yesterday are not just about end-to-end NVMe-accelerated performance. There were also more advanced cloud integration messages.

NetApp’s cloud strategy is geared towards enabling customers to deliver business outcomes for all IT workloads in cloud, multi-cloud, and hybrid cloud environments. To do this, you must modernise your data management from the edge, to the core, and to the cloud.

FabricPool is just one of the features designed to help you do just that. FabricPool enables automatic tiering of cold data, which means you can purchase a smaller system or achieve an even higher level of consolidation on a single box. With the release of ONTAP 9.4, FabricPool has been improved to allow Azure as a capacity tier and ONTAP Select as a performance tier. It can now also tier from the active primary data set, which is something I am looking forward to testing soon.

So when you look at these and other announcements that NetApp made yesterday, if they are a “dinosaur,” I would put them in the meat-eating Velociraptor camp. And that’s one dinosaur you do not want to take your eye off.

Setting up FabricPool

Recently, I was lucky enough to get the chance to spend a bit of time configuring FabricPool on a NetApp AFF A300. FabricPool is a feature that was introduced with ONTAP 9.2 that gives you the ability to utilise an S3 bucket as an extension of an all-flash aggregate. It is categorised as a storage tier, but it also has some interesting features. You can add a storage bucket from either AWS’s S3 service or from NetApp’s StorageGRID Webscale (SGWS) content repository. An aggregate can only be connected to one bucket at a time, but one bucket can serve multiple aggregates. Just remember that once an aggregate is attached to an S3 bucket it cannot be detached.

This functionality doesn’t just work across the whole of the aggregate; it is configured more granularly, drawing from the heritage of technologies like Flash Cache and Flash Pool. You assign a policy to each volume governing how it utilises this new feature. A volume can have one of three policies: Snapshot-only, the default, which allows cold data to be tiered off the performance tier (flash) to the capacity tier (S3); None, where no data is tiered; or Backup, which transfers all the user data within a data protection volume to the bucket. Cold data is user data within the snapshot copy that hasn’t existed within the active file system for more than 48 hours. A volume can have its storage tier policy changed at any time while it exists within a FabricPool aggregate, and you can assign a policy to a volume that is being moved into a FabricPool aggregate (if you don’t want the default).

AFF systems come with a 10TB FabricPool license for using AWS S3. Additional capacity can be purchased as required and applied to all nodes within the cluster. If you want to use SGWS, no license is required. With this release, there are also some limitations as to which features and functionality you can use in conjunction with FabricPool. FlexArray, FlexGroup, MetroCluster, SnapLock, ONTAP Select, SyncMirror, SVM DR, Infinite Volumes, NDMP SMTape or dump backups, and the Auto Balance functionality are not supported.

FabricPool Setup

There is some pre-deployment work that needs to be done in AWS to enable FabricPool to tier to an AWS S3 bucket.

First, set up the S3 bucket.

Next, set up a user account that can connect to the bucket.

Make sure to save the credentials, otherwise you will need to create another account as the password cannot be obtained again.
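If you would rather script the AWS side than click through the console, the boto3 sketch below does roughly the same thing: create the bucket, create an IAM user, and generate a one-time set of access keys. The bucket name, user name, region, and the broad AmazonS3FullAccess policy are all assumptions for illustration; in production you would scope the policy down to that single bucket.

```python
# Rough boto3 sketch of the AWS pre-work described above: an S3 bucket,
# an IAM user, and a set of access keys for FabricPool to use.
# Names, region, and the broad policy are illustrative assumptions only.
import boto3

REGION = "eu-west-1"                   # assumed region
BUCKET = "fabricpool-demo-bucket"      # assumed bucket name
USER = "fabricpool-tiering-user"       # assumed IAM user name

s3 = boto3.client("s3", region_name=REGION)
s3.create_bucket(
    Bucket=BUCKET,
    CreateBucketConfiguration={"LocationConstraint": REGION},
)

iam = boto3.client("iam")
iam.create_user(UserName=USER)
iam.attach_user_policy(
    UserName=USER,
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3FullAccess",
)

# The secret key is only shown once, so store it somewhere safe immediately.
keys = iam.create_access_key(UserName=USER)["AccessKey"]
print("Access key ID:", keys["AccessKeyId"])
print("Secret access key:", keys["SecretAccessKey"])
```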

Finally, make sure you have set up an intercluster LIF on a 10GbE port for the AFF to communicate to the cloud.

Now, it’s FabricPool time!

Install the NetApp License File (NLF) required to allow FabricPool to utilise AWS.

Now you’ll do the actual configuration of FabricPool. This is done on the aggregate via the Storage Tiers sub menu item from the ONTAP 9.3 System Manager as shown below. Click Add External Capacity Tier.

Next, you need to populate the fields relating to the S3 bucket with the access key, secret key, and bucket name as per the setup above.

Set up the volumes if required. As you can see, the default of Snapshot-Only is active on the four volumes. You could (if you wanted) select an individual volume or a group of volumes and alter the policy in a single bulk operation via the dropdown button at the top of the volumes table.

Hit Save. If your routes to the outside world are configured correctly, then you are finished!

You will probably want to monitor the space savings and tiering, and you can see from this image that the external capacity tier is showing up under Add-on Features Enabled (as this is just after setup, the information is still populating).

There you have it! You have successfully added a capacity tier to an AFF system. If the aggregate is over 50% full (otherwise, why would you want to tier it off?), then after 48 hours of no activity on snapshot data, it will start to filter out to the cloud. I have shown the steps here via the System Manager GUI, but it is also possible to complete this process via the CLI, and probably even via API calls, though I have yet to look into this.
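For reference, here is a rough sketch of what the CLI route might look like, wrapped in a small Python/SSH helper just to keep a single scripting language across these examples. The cluster address, aggregate, SVM, volume, and object-store names are placeholders, and the command syntax should be verified against the ONTAP documentation for your release before running anything.

```python
# Rough sketch of the equivalent CLI route, run over SSH from an admin host.
# All names and addresses below are placeholders; verify the ONTAP command
# syntax against the documentation for your release before use.
import subprocess

CLUSTER = "admin@cluster-mgmt.example.com"   # assumed cluster management LIF

COMMANDS = [
    # Define the external capacity tier (ONTAP prompts for the S3 secret key
    # if it is not supplied on the command line).
    "storage aggregate object-store config create "
    "-object-store-name aws_store -provider-type AWS_S3 "
    "-server s3.amazonaws.com -container-name fabricpool-demo-bucket "
    "-access-key AKIAXXXXXXXXXXXXXXXX",
    # Attach the object store to the all-flash aggregate (a one-way operation!).
    "storage aggregate object-store attach -aggregate aggr1_node01 "
    "-object-store-name aws_store",
    # Explicitly set a tiering policy on a volume (snapshot-only is the default).
    "volume modify -vserver svm1 -volume vol_datastore1 "
    "-tiering-policy snapshot-only",
]

for cmd in COMMANDS:
    subprocess.run(["ssh", CLUSTER, cmd], check=True)
```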

One thing to note is that whilst this is a great way to get more out of an AFF investment, this is a tiering process, and your data should still be backed up, as the metadata stays on the performance tier (remember the 3-2-1 rule). So, when you are next proposing an AFF or an all-flash aggregate on an ONTAP 9.2 or later cluster, consider using this pretty neat feature to get even more capacity out of your storage system, or what I now like to call your data fabric platform.

Casting Our Eye Over HCI

My previous blog post, HCI – Hero From Day Zero, discusses my initial findings and setup of NetApp’s Next Generation HCI solution. After reflecting on these for a while and chatting with my colleague @WelshMatador, I have put together several videos around NetApp HCI in which we take our conversation on air for your viewing pleasure.

In the first of our videos we tackle some of the HCI out of the box basics such as “What cabling does a NetApp HCI installation require?” or “How should I do this?”

In part two we look at the very slick NetApp Deployment Engine (NDE) and discuss initial setup. Part three looks at growing your environment and the process involved.

Over the next couple of weeks, we will add more videos covering different aspects of NetApp’s HCI platform so please check back soon as this page will be updated.


And if you haven’t got the bandwidth to stream the above, here’s a nice close-up of the system used for the videos.

HCI platform

HCI – Hero From Day Zero

After a great reception at NetApp Insight 2017 (it was so good that actual orders pushed back our demo system), and thanks to NetApp, I have finally got my hands on their exciting new portfolio product.

First Impressions

We received an 8-node setup, 4 storage and 4 compute, which turned up as a pallet of IT equipment. That was a little unexpected at first, but upon review it does mean the hardware is a lot more manageable to get from the box into the rack. It all comes nicely packaged in NetApp branded cartons, and the storage nodes have their disks individually packaged for adding into the chassis.


So, upon first inspection of the blades/nodes, I can see NetApp have partnered with a hardware vendor who is renowned for producing server hardware. They feel sturdy and are well crafted. Adding them into the system is a smooth process and doesn’t need any excessive force, something I have seen with other blade systems in the past. Starting from the bottom and working up, we racked the two chassis to begin with. The important thing to note is that the 3 strips of protective clear plastic film along the top of each chassis MUST be removed before installation. Once racked, it was on to adding the additional nodes into the chassis. We opted for a two-and-two approach, with the two compute nodes in the top of the chassis and the two storage nodes below.


The reason for this is that there is extra airflow via the top of the chassis (hence removing the film), which benefits the compute nodes. But this is only a recommendation; any type or size of node can occupy any of the available slots. If you add a storage node to the configuration, then you will also have to insert the accompanying drives. Again, make sure you add these into the corresponding bays at the front.


Getting Setup

In preparation for deploying our HCI equipment, we also deployed a management vSphere cluster (6.5). In here, amongst other things, we created our PDC and SDCs, each sharing responsibility for AD, NTP, DHCP for both the management and iSCSI networks, and, most importantly, DNS. I can’t stress this enough when it comes to networking: 9 times out of 10, it’s a DNS issue. Make sure you get your forward and reverse lookup zones correct.
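Given how often DNS turns out to be the culprit, a quick sanity check of the forward and reverse records for every name you are about to feed into the NDE can save a failed deployment. The snippet below is a simple sketch using the Python standard library; the host names are made-up examples.

```python
# Quick sanity check of forward and reverse DNS for the names going into
# the NDE. Host names here are made-up examples; swap in your own.
import socket

HOSTS = [
    "hci-vcsa.lab.example.com",
    "hci-esxi-01.lab.example.com",
    "hci-mnode.lab.example.com",
]

for host in HOSTS:
    try:
        addr = socket.gethostbyname(host)          # forward lookup
        reverse = socket.gethostbyaddr(addr)[0]    # reverse lookup
        status = "OK" if reverse.lower() == host.lower() else f"mismatch ({reverse})"
    except socket.error as err:
        status = f"lookup failed: {err}"
    print(f"{host:35} -> {status}")
```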

What I have learned from the time I have spent with the NetApp HCI platform is that understanding what this system requires from a networking perspective, and setting that up correctly, is key to a successful deployment. The team and I had reviewed the documentation available on the NetApp support site (the prerequisites checklist and the installation workbook), yet our first attempt failed at 7%, which we traced to a VLAN configuration issue on the switches. After that, it was pretty much plain sailing.

As you can see from below, we left a monitor connected up to one of the compute nodes and we can see it deploying ESXi in standard fashion.


We have factory reset the HCI kit several times to get an understanding of the different options during the NDE process, and it’s fair to say they are pretty self-explanatory (each option has a blue “i” next to it which gives detailed information about what you are configuring). One thing we did note is that using the basic networking wizard and then flipping over to the advanced one pre-populates pretty much all the fields while giving you more control over what is assigned. We wanted to move the mNode from next to the VC to next to the MVIP for the SolidFire cluster, and simply changing the digits of the last octet turned the box red as unverified. To get the engine to re-check the IP against everything else on the page and confirm it’s not in use, you have to delete the dot associated with that octet. You also cannot separate the vMotion and management subnets without the use of a VLAN tag, so if you don’t add the tag before trying to separate them, the engine’s behaviour can be a bit unclear unless you understand how the physical network topology is designed to interact with the HCI platform. It’s good to see that you cannot proceed until everything is properly inputted. Another handy feature is the ability to download a CSV copy of all the variables (passwords are redacted) just before you hit deploy.

By repeating the setup process, we got an idea of the timings involved: from the final review of the NDE inputs and clicking “Looks Good, Let’s Go,” we were seeing 6.0u3a deploy in just over 35 minutes and 6.5u1 going to the 55-minute mark. When watching the progress bars, it’s clear that more time is spent deploying the VCSA with 6.5, which probably explains why it’s a lot easier to use and less buggy than its predecessor. I have been trying to move over to the appliance for a while now, and with the work I have been doing with this HCI platform and 6.5, I am now a convert.


Up and Running

Once the NDE is complete, you can click the blue button to launch the vSphere Client, which will connect to the FQDN entered during the NDE. Once logged in to the client, we can see from the home landing page that the SolidFire plugins have been added: NetApp SolidFire Configuration (for adding SolidFire clusters, turning on vVols, user management, and joining up to the mNode) and NetApp SolidFire Management (for reporting, creating datastores and vVols, adding nodes and drives, etc.).


NDE will create a datacentre and a cluster within it, with HA and DRS enabled, and add the hosts to this. It also creates two 1.95TB VMFS6 datastores on the SolidFire cluster with SIOC enabled. Sadly, the current management plugin will only create VMFS v5 for any datastores you wish to create after initial deployment, so if you need or want v6, you are going to have to destroy the datastore and recreate it at the newer version on the LUN; a minor issue, but it could become laborious if you have quite a few datastores. What is nice, though, is that you can configure the SolidFire cluster to provide vVols and datastores at the same time, and with it being a SolidFire back end, you get the guaranteed quality of service you expect from any storage provided by that platform.
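As a rough illustration of that quality of service point, this is the sort of call you might make against the SolidFire Element JSON-RPC API to adjust the QoS on one of those datastore volumes. The MVIP, credentials, API version, volume ID, and IOPS figures are all placeholders; check the Element API reference for your cluster version before using anything like this.

```python
# Rough sketch: adjust the guaranteed QoS on a SolidFire volume via the
# Element JSON-RPC API. Endpoint, credentials, volume ID, and IOPS values
# are placeholders for illustration only.
import requests

MVIP = "https://mvip.lab.example.com/json-rpc/9.0"   # assumed API endpoint
AUTH = ("admin", "password")                          # assumed cluster admin credentials

payload = {
    "method": "ModifyVolume",
    "params": {
        "volumeID": 1,                                # assumed datastore volume ID
        "qos": {"minIOPS": 1000, "maxIOPS": 10000, "burstIOPS": 15000},
    },
    "id": 1,
}

# verify=False only because lab clusters typically use self-signed certificates.
resp = requests.post(MVIP, json=payload, auth=AUTH, verify=False)
resp.raise_for_status()
print(resp.json())
```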

Take Away

I have to say that I have been impressed by the HCI platform. From getting it in the door, to racking and stacking, and then progressing through the NetApp Deployment Engine, it is a smooth and risk-free process. The guard rails of the NDE allow for a robust vSphere deployment yet let you tweak parts to fit it to your environment (e.g. you don’t have to deploy a new vCenter, you can join an existing one). I also mentioned above that it has helped win me over to using the vCenter appliance, and there will be no going back now for me. Having spent time working on the kit, I can fully understand the decisions NetApp have made in providing separate storage and compute nodes, and I am confident that customers will also see the benefit of truly flexible, independent scalability in an HCI deployment, not to mention the performance and fault tolerance of a shared-nothing architecture. I look forward to the first customer reviews, and from the amount of quotes Arrow have been putting together recently on this product, it’s not going to be long before it is established as a leader in this market segment.


Next steps

So when it came time to test the HCI platform, I was chatting to a friend who informed me there was a VMware fling that could help. Now, I had heard about the wondrous nature of the flings whilst listening to @vPedroArrow and @lost_signal on the Virtually Speaking podcast a while back, but in my current line of work I hadn’t had a need to use them until now. In my next post I will go into more detail on these and look at some of the results that I received.


Time to Stop with the Legacy Backup Solutions

A couple of months ago I attended the UK Veeam ON Forum in London, and once my brain had registered that I was not on my way to the Aviva Stadium with several hundred of my fellow countrymen but was in fact at an IT event, things started to make more sense. This was to be a day packed full of information Veeam wanted to share with their partner community, and I am glad to say they didn’t hold back. My key takeaway from the day is the number of organisations that lack the ability to meet the recovery objectives defined within their business, and as organisations move to a hybrid cloud, that inability is only increasing. When surveyed by ESG, 4 out of 5 respondents disclosed that their current infrastructure cannot keep up with the IT needs of their company, and users suffer. Follow-up questions in this survey also highlighted that 3 out of 4 organisations were exceeding their allowance for data loss due to an outage. This is mind-blowingly huge!

These two areas are referred to as the Availability Gap and the Protection Gap, and it is clear that, with the drive for businesses to do more with less and to leverage the public cloud, data bears these burdens and is ultimately losing. I have long posed questions to companies such as: “How long can you let your employees sit idle? How damaging to your brand would it be if you could not transact orders or your website was down? And how much money would your business lose in that period of time?” These are just some of the questions that must be asked and answered correctly throughout a piece of data’s life to properly assess and implement a strategy to defend it and deliver it back to the business with speed. There seems to be a drive for organisations to implement new technology and upgrade existing hardware, yet protection and availability are a secondary or tertiary concern. Data is a company’s most valuable asset. Caesars Entertainment Corporation’s single most valuable item on their books isn’t the faux Roman architecture hotel in the middle of the Las Vegas strip; it’s the company’s big data loyalty programme, which has been running since 1998, holds over 45 million member records, and has a figure of $1B attached.

This brings together two interesting points: how many companies out there are not taking the time and care with data that concerns me as a customer, and how poorly understood the value of the data a company holds about a person or organisation is. As a personal consumer, I would be slightly put out and might consider moving my custom if I contacted my butcher, for instance, to ask for “the same cut I had last week”, yet he had no record of the transaction and my meal did not turn out as nice as I wanted. In business, what if I were to contact a supplier and say, “you know that last batch of X was great, can I get another 1,000 or 10,000 ASAP and put it on my account”? Without the proper information at hand, the supplier can neither see what that order was in order to fulfil it again, nor check whether the customer has credit to place such an order, nor know where to ship it once produced. Care and consideration must be taken with this data, and how we safeguard it should be as intrinsic as the application that creates and accesses the information.

So why Veeam, you may ask? Simply put, because they understand the 24x7, always-on business environment we now live in. This is a 21st-century company, not constrained by 1980s backup architectural design, and a leader of innovation within their field. So let’s stop talking backup and look at what we really demand: availability. Over the coming weeks I am going to be looking into why Veeam, their ever-increasing portfolio of products, what it can deliver, and how it can quickly and effortlessly meet a company’s availability and protection needs, whether on site or in the cloud.

To read the full ESG-commissioned report, look here.

Reflecting on VMworld EMEA 2017

Back in September I found myself on a Sunday morning flight from London to Barcelona to explore one of the largest technical conferences held this side of the pond, VMworld Europe. The conference kicked off on Monday with a partner day and a general session where CEO Pat Gelsinger (@PGelsinger) said two things that really stood out for me. Number one, he thanked the audience for their continued support of VMware products and requested that we “go all in” with the ever-increasing portfolio. Number two, he said, “Today is the slowest day of technical innovation of the rest of your life”, and boy, is he not wrong.

Now, I have been working with VMware for well over a decade, playing with ESX and GSX, and I picked up my first VCP certification on version 2.5. But with my focus on the storage industry and products in that space, my attention to VMware had waned, and whilst I had heard mention of new products and features VMware had developed, until I got to this conference I didn’t realise how vast and varied the portfolio had grown. Hearing Pat and co bandy around the slogan, if you will, of “any application on any device on any cloud”, you get to see how much of a reality this is.

This message was really driven home during day two’s keynote, when Purnima Padmanabhan (@PPadmanabhan), VP Product Management, Cloud Management Business Unit, and Chris Wolf (@cswolf), VP and CTO, Global Field and Industry, used the case of a fictional pizza company, Elastic Sky Pizza, to show how a company that hadn’t adapted to changes in the marketplace was now circling the drain. At the point we enter the story, a new CTO has literally just been appointed, and it’s their priority to turn the failing new website, app, and ordering system around and make sure it is delivered on time and on budget.

This section of the keynote carries on for nearly an hour, and while it does feel a tad long in places, it is really interesting to see how the many different businesses within VMware had developed products that interact with each other to deliver a common goal.

AppDefense is one product that stood out. This is a serious piece of kit that has the intelligence to understand the intrinsic way applications and their data flows should work, and it reports if anything deviates outside the allowed parameters. This is a huge leap in proactive application security, allowing both developers and security teams to work hand in hand to deploy robust applications. I feel that in time this will become part of a standard VMware environment, as I believe what AppDefense does in allowing you to understand exactly what is going on within your environment is the missing feedback loop you need to truly deliver an SDDC.

We also got a look at VMware Cloud on AWS using Elastic DRS, and also HCX. Now, these solutions look cool, and I can’t wait to try them out when I have some free time. Another great highlight was Pivotal Container Service (PKS), which lets you run an enterprise-grade Kubernetes deployment on your own site, allowing for controlled project deployment for your DevOps teams. Not only was it easy to deploy, it works hand in hand with NSX to build in security from day one.

With all of these technologies you get a sense of what VMware are trying to achieve with their portfolio, namely the ability to bridge the hybrid cloud, and you can see the direction the company is heading in over the next twelve months. I recommend that, if you have a chance, you take a look at the recording on YouTube to truly appreciate the interaction and productivity you can achieve with the VMware ecosystem. But before you do, buckle your seat belt, as things move pretty fast.

If you would like to know a bit more about this and some more of what happened in Barcelona, then please have a listen to the Arrow Bandwidth episode “Reflecting on VMworld 2017”, featuring both myself and Vince Payne.

A Tale of Two Cities

This year, I was fortunate enough to attend both NetApp Insight events, thanks to @SamMoulton and the @NetAppATeam, and whilst there are normally differences between the conferences, it’s fair to say that this year they were polar opposites. Whilst the events team try extremely hard to make sure there is minimal difference between the two, this year there were things outside their control. Yes, there were pretty much the same topics and breakout sessions with the same speakers, and there was an even representation of sponsors and technology partners at both events, yet things were different.

Without going into too much detail: on Sunday the 1st of October, the worst act of domestic terror within the US happened to coincide with the arrival date and location of NetApp Insight Las Vegas, and with the hotel where the majority of attendees were staying. This changed the overall mood and perception of the event. Las Vegas turned into a more sombre affair with a more perfunctory feel, which was only right given the events that occurred. Yet NetApp dug deep, showing skill, resolve, and sincerity, and they still delivered an excellent event to be proud of. I would also like to give a huge thank you to the LVPD, first responders, EMTs, and anyone else who helped those caught up in this tragedy. During this tragic turn of events, true human nature and kindness shone through.


The NetApp Insight event that occurred two weeks ago in Berlin was a completely different beast. It was our fourth year at the Messe Berlin & City Cube, and like any recurring venue, it started to feel more familiar, like an old friend that we hadn’t spent much time with recently. Some in the partner community objected to the event being held in Berlin for the fourth year in a row. From my perspective, the high quality of content delivered in breakout sessions during the conference is the main draw for delegates, and whilst it’s nice to visit a new city, you have to feel for those who work in the Americas (just a little bit), where pretty much every conference is held in Las Vegas. (Veeam ON, which was held in New Orleans this year and Chicago next year, being the major exception.) After four years, I feel I’m only scratching the surface of what Berlin has to offer. I’ll probably miss it next year, but we are following the crowds to Barcelona and we will be there in December 2018.

Having attended many NetApp Insight events over the years, it’s fair to say that on Day 1 of Insight Berlin there was a different, more positive feel to the conference, one that has eluded it over the last five or so years. Those showing up were excited about what the four days would hold. The employees speaking or manning booths were eager to meet people and discuss the advancements made in the past 12 months, no longer driving home the message about speeds and feeds but talking about the services and solutions that NetApp products bring to the table; and with over 250 services and solutions, that’s a massive number of ways to make a data fabric that’s fit for you. It was great to see partners from across EMEA wanting to learn more about the Next Generation Data Centre (NGDC) portfolio and understand how best to adapt it to their individual customer requirements. I also sat in on a few sessions to top up on those missed due to being cancelled in Las Vegas, and it’s fair to say that everyone’s heads (mine included) were more in the game.


Berlin started with a drum roll, quite literally, with the curtain-raising act from the amazing Blue Devils Drum Line, all the way from California (check them out on YouTube). From @Henri_P_Richard, we heard a strong, confident message about how to change the world with data. We learned how digital transformation is gaining momentum and empowering businesses as they move from survivors to thrivers in our data-centric era. Henri quite nicely backed this up by pointing out that NetApp are the fastest-growing of the top 5 total enterprise storage system vendors. They are also the fastest-growing all-flash array vendor, the fastest-growing SAN vendor, and the world’s number 1 branded storage OS. But what really underlined these points were the reported earnings that came out Wednesday evening, which moved NetApp’s share price from $45 to $56 (as of writing), on the back of net revenues increasing 6% year on year, and a raised outlook for the rest of the fiscal year.


There were several stand out points made at Berlin. At the top of probably everyone’s list is the bold move that NetApp are making into the HCI space and the excellent untraditional tack they have taken to position this technology. Listening to NetApp’s detractors, I feel that there are a lot of established first-generation HCI vendors that fear what NetApp will bring to the table. (You only have to look at the AFA marketplace and how NetApp have delivered 58% global YOY growth.) As a distributor, we applied to get a demo unit to help promote this product, but due to the huge demand from customers, that box has had to slip down the priority delivery chain, which happens to show that despite the FUD being bandied about, customers out there really value the message and benefits that NetApp HCI is bringing.


One of my highlights came during the Day Two general session, when Octavian Tanase and Jeff Baxter outlined the direction NetApp are heading in over the upcoming months. One of the many interesting technologies they demonstrated during this section of the keynote was the Plexistor technology, which NetApp acquired for $32M in May 2017. With Plexistor, NetApp are able not only to increase throughput over AFF by ten times, from 300K IOPS to 3M IOPS, but they also demonstrated that they can reduce latency seventy-fold, from 220µs to 3µs! Now, that’s a performance improvement of two separate orders of magnitude.

This is, at the moment, a very niche technology, which will only benefit a very small part of the storage population to begin with; but it does illustrate that NetApp are not only delivering some of the most advanced endpoints on the globe today, but are also pushing up against, and stepping well past, the boundaries to stay at the forefront of data management. Working with storage-class memory and NVMe to deliver what will become the norm for the next generation of storage appliances, NetApp are demonstrating that they have a clear understanding of what the technology is capable of whilst blazing a trail that others desire to follow.

For those of you who missed the event (shame on you), NetApp UK are holding a one-day event for partners on the 12th of December at their main UK office (Rivermead); and for those with a NetApp SSO, you can now access all the great content recorded during both Insight events and download copies of the presentations to review at your leisure. When you have done all that, ask yourself: “How are you going to change the world with data?”