Getting Started with DevOps

Part of my job allows me to travel and meet partners up and down the UK, helping to enable them to sell the NetApp portfolio properly. One thing I have noticed is that even the more proactive partners are still chasing the modernise aspect of the three IT imperatives as the area they take to market; some seem to be slowly adopting build, and yet the majority are avoiding inspire.


These three imperatives align with the three key parts of the Data Fabric, and each has a place within every organisation. Making sure your customers understand the Data Fabric story and how it relates to their business is something I task each partner with, providing support where needed.


Yet for all my presentations and education, there still seems to be a chasm our partners need to cross, and from the feedback I have received it is clear there are considerable differences between selling new hardware and selling cloud products and services.

One of the pain points seems to be a lack of understanding and training around cloud environments, and the fact that they all use different nomenclature often leads people either to search out a Rosetta Stone or to give up.

First, I would suggest that anyone serious about getting to know what the DevOps community is all about should read “The Phoenix Project” (by Gene Kim, Kevin Behr and George Spafford) and, if you enjoy it, the accompanying material “The DevOps Handbook” (Gene Kim, Jez Humble, Patrick Debois and John Willis). These two books provide great insight into what is happening within IT organisations across the globe today.


If you have read it and don’t know where to go from there, or if a 250-page IT novel doesn’t interest you, then Arrow can help.

Let me start by asking whether you know your Jenkins from your Jarvis, your Trident from your spear, your SCRUM from your ruck, your CI/CD from your AC/DC? Or how about your containers from your Tupperware, Mode 1 from Mode 2, GitHub from a wine bar, Kubernetes from K8s, Prometheus from Sulaco or your Hedvig from Hedwig?

Do you understand modern, scalable, dynamic application development and how such applications are deployed in today’s hybrid cloud world using microservices, service meshes and declarative APIs?

If you have issues identifying the terms above and feel they are more akin to Pokémon than to IT, fear not! Today we are launching the Arrow Build series.

The idea is for this to be a series of events to help you and your organisation get up to speed and gain the skill set to work with these innovative application developers and born-in-the-cloud businesses.

Launching with the first event in our London office, this half-day hands-on session will introduce some of the terms you are likely to hear and provide a great look into the modern application development framework. If it is successful and there is demand further north, we may repeat the event or host something similar in Harrogate, but I would strongly urge all partners to come and attend the first session. Not only will you gain some new skills (which, let’s be honest, you want so you can put them on your LinkedIn profile), but it will also allow us to create and grow a UK community. With the British government (when not arguing about Brexit) striving to make us a world leader in AI (we can argue the AI v ML stance later), many of these skills are applicable, and if you prefer your GUI to your CLI there are plenty of things we can do to help you understand the landscape.

With your wittiest T-shirt on, I look forward to seeing you in London on the afternoon of the 19th of September. Bring along your colleagues, stay and have a beer with us after, and until then:

while (alive)
{
   eat();
   sleep();
   code();
}

Expanding NetApp HCI

NetApp recently updated their HCI deployment software to v1.31. This version contains several new features to help in deploying a NetApp HCI environment. It’s been several months since I initially deployed our demo kit, and I felt it was time to revisit the process and see what has changed.

One welcome new feature is the removal of the reliance on a DHCP server that covers both your 1GbE management and 10/25GbE data networks. Whilst DHCP is a nice idea to help you get up and running, and is easy to configure in a lab, having it running within a production SAN is not exactly common practice. Previously you either had to set one up or spend time configuring static addresses, which could be time-consuming, especially with half a dozen or so blades.

The other new feature that caught my eye was the ability to use the NetApp Deployment Engine (NDE) to expand a NetApp HCI environment. As mentioned in an earlier post and video (here), adding a SolidFire storage node to an existing cluster is quite easy (in fact, it was a design methodology when they created Element OS), but adding an ESXi node is quite a labour-intensive task. It is great to see that you can now add both quickly through a wizard.

To start the expand process, simply point your browser to the following:

https://storage_node_management_ip:442/scale/welcome
where you are greeted by the following landing page:

As you can see, it wants you to log into your environment. You may also notice that NetApp have updated the password box so you can reveal the password once typed, via the eye icon at the end of the field.
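If you would rather script a quick check that the scale endpoint is up before opening a browser, a few lines of Python will do it. This is just a sketch: the node IP is hypothetical, and verify=False is used because the node presents a self-signed certificate:

import requests

node_mip = "10.0.0.51"                          # storage node management IP (example)
url = f"https://{node_mip}:442/scale/welcome"   # the NDE scale landing page

resp = requests.get(url, verify=False, timeout=10)  # lab only: skips cert checks
print(resp.status_code)                             # 200 means the scale UI is up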

To test this new methodology without buying more nodes (which would have been nice), I removed a single storage node and a single compute node from their respective clusters and factory reset them. This let me test not only the addition of nodes into existing clusters but also the removal of the DHCP/static IP addressing requirement before deployment.

Once logged in, the NDE scale process discovers any and all available nodes, and this is where you select which of them you would like to add to your environment.
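NDE drives discovery through its own workflow, but if you are curious about what sits underneath, the SolidFire Element API exposes a similar call. Here is a minimal sketch, assuming a hypothetical MVIP, cluster admin credentials and Element API version 9.0 (adjust for your cluster):

import requests

MVIP = "10.0.0.50"                      # cluster management virtual IP (example)
url = f"https://{MVIP}/json-rpc/9.0"    # Element API endpoint

payload = {"method": "ListPendingNodes", "params": {}, "id": 1}
resp = requests.post(url, json=payload,
                     auth=("admin", "password"),  # cluster admin (example)
                     verify=False)                # lab only: self-signed cert
resp.raise_for_status()

for node in resp.json()["result"]["pendingNodes"]:
    print(node["name"], node["mip"])              # node name and management IP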

After agreeing to the VMware EULA, you are asked to provide the vCenter’s details and then to select the datacentre and cluster you wish to add the node to. These steps are only present if you are adding compute nodes.

After giving the compute node a root password, you are taken to the “Enter the IP and naming details” page.

Finally, NDE scale takes you on to a review screen, as these three screenshots (headings fully expanded for visibility) show.

Once reviewed, click the blue “Add Nodes” button. This initialises the now familiar NDE process of setting up NetApp HCI that can be tracked via a progress screen.

The scaling process for the addition of one compute node and one storage node took just under half an hour to complete. The real benefit, though, is that the scaling wizard sets up the ESXi host plus its networking and vSwitches as per NetApp HCI’s best practices whilst at the same time adding a storage node to the cluster. That isn’t the quickest thing to do manually, so having a process that does it for you speedily is a huge plus in NetApp’s favour, especially if you have multiple hosts. It’s clear to see the influence the SolidFire team had on this update in the ease and speed with which customers can now expand their NetApp HCI environments with NDE scale. I look forward to the features that will be included in upcoming releases of NetApp HCI; if hyperconverged infrastructure is all about speed and scale, then this update gives me both in spades.
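To give a flavour of what the wizard saves you on the compute side, below is a rough pyVmomi sketch of just the host-join step. All names and credentials are hypothetical, and NDE also builds the networking and vSwitch layout that this sketch does not touch:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab only: skip certificate checks
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)

# Find the target cluster (assumes a single datacentre for brevity).
dc = si.RetrieveContent().rootFolder.childEntity[0]
cluster = next(c for c in dc.hostFolder.childEntity
               if isinstance(c, vim.ClusterComputeResource)
               and c.name == "NetApp-HCI")

# In production the spec also needs the host's SSL thumbprint.
spec = vim.host.ConnectSpec(hostName="esxi-04.lab.local",
                            userName="root", password="NewRootPw!",
                            force=True)
task = cluster.AddHost_Task(spec=spec, asConnected=True)  # returns a vCenter task
Disconnect(si)

And that is before you touch port groups, MTU or iSCSI bindings, which is where the real time goes.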

Setting Sail for Uncharted Waters

Today might be a big day in NetApp’s history. The company is not only celebrating its 25th year, a third consecutive quarter of revenue growth, 140% year-on-year growth in the All Flash Array (AFA) market segment, the no. 2 position among AFA vendors by revenue (IDC), SAN market share growing 3.6x faster than its nearest competitor, over 6.4PB of NVMe shipped and six IT Brand Pulse awards for its scale-out file storage, FlexGroup; it is also starting the week with a product announcement.

The 6 IT Brand Pulse awards

And whilst there may be cake and balloons at the offices on East Java Drive and Kit Creek Road, the company will be focused on moving forward. Today it takes a step outside the storage and data management field it has dominated for two and a half decades, and into an area of the IT industry that has generated a lot of interest over the last couple of years yet is still relatively new and unmapped: the Hyper Converged Infrastructure (HCI) market.

Now, some of you may be saying, one, that NetApp are quite late to the HCI game and, two, what can they possibly bring? Remember, NetApp were also late off the blocks with an All Flash Array; look at the opening paragraph again to see just how well that is now going. As for what they can bring to the game, please read on.

Some of you may remember the version of EVO:RAIL that NetApp brought out a couple of years ago and feel they should stick to storage products; the difference between that and today’s launch is that this time NetApp have solely led the development of the product, rather than following a blueprint VMware put together for a wide and varied list of hardware vendors.

First-generation HCI solutions were designed with simplicity of deploying virtualisation technologies in mind, yet this approach and a race to market created limitations in performance, flexibility and consolidation. Although they claimed to remove application silos by mixing workloads, these limitations meant they ultimately failed at scale. These first-generation hardware offerings provided both compute and storage within the same chassis, which meant resources were tied together and both had to be scaled in parallel when either ran low or was exhausted.

NetApp approach the HCI arena, and the limitations of current offerings, with the Next Generation Data Centre at the core. The four key aspects that make up this HCI solution are guaranteed performance, flexibility and scale, automated infrastructure, and the NetApp Data Fabric. It provides secure, efficient, future-proof freedom of choice.

One of the things people love about SolidFire is its ability to scale with ease, and growth is a key feature of this HCI offering. With the ability to grow compute and storage independently, regardless of what your applications need, you can start small and scale online and on demand, with a variety of configuration options to satisfy any enterprise environment. This in turn allows you to avoid overprovisioning compute (incurring unnecessary licensing costs) or storage (leaving excessive amounts of flash media idle), as is associated with scaling traditional first-generation HCI solutions.

Out of the box, this solution uses the NetApp Deployment Engine (NDE) to eliminate the majority of the manual steps needed to correctly commission the infrastructure, combined with an intuitive vCenter plugin and a fully programmable interface that complements the scalable architecture to make this a truly software-defined HCI solution.
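“Fully programmable” is worth dwelling on: the same Element API that underpins the vCenter plugin can be scripted directly. As a minimal, hedged sketch (hypothetical MVIP and credentials, not an official example), pulling back basic cluster details looks like this:

import requests

MVIP = "10.0.0.50"   # cluster management virtual IP (example)
payload = {"method": "GetClusterInfo", "params": {}, "id": 1}
resp = requests.post(f"https://{MVIP}/json-rpc/9.0", json=payload,
                     auth=("admin", "password"),  # cluster admin (example)
                     verify=False)                # lab only: self-signed cert
info = resp.json()["result"]["clusterInfo"]
print(info["name"], info["mvip"], info["svip"])   # cluster name, mgmt and storage VIPs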

The all-important front bezel

There will be a lot of interest in this enterprise-scale hyper converged infrastructure solution over the coming days and weeks. I applaud NetApp for making the move into uncharted territory, and I look forward to reading more about it ahead of its launch later in the year, as this solution combined with NetApp’s Data Fabric will honestly allow you to harness the power of the hybrid cloud.