Explosive Growth of Vblock, 2011-2016
VCE = The Virtual Computing Environment Company, not “VMware, Cisco, and EMC.” Really!
Did you miss Part 1 of our history series? Check it out here.
From their inception, VCE and Vblock were highly disruptive offerings. As such, it took some time for these disruptions to gain traction in the market and definitively answer the question, “Can they cross the chasm?” (h/t Geoffrey Moore).
By early 2012, the case had been made for Vblock, and significant traction within Fortune 100 companies fueled explosive revenue growth. From 2012 to 2015, VCE doubled revenues every year, becoming the fastest company in IT to go from $0 to $1B in revenue. VCE created the market for Converged Infrastructure and, at the time, was the thought and execution leader in that space. In effect, VCE and Vblock were the Converged Infrastructure market.
This growth and customer acceptance allowed VCE to invest further in the platform. The 300 and 700 series were introduced, each delineated by its associated EMC storage array coupled with Cisco UCS compute. Soon, the 500 series became important as flash-based storage systems (XtremIO being the EMC flagship in this space) proved their economics and enterprise-grade reliability. Then, in March 2015, VxBlock was introduced, offering VMware NSX virtual networking capability alongside the already supported Cisco ACI virtual networking.
Throughout this period, VCE balanced strict standardization against customer demands for variation and flexibility in their mission-critical infrastructure. A recurring question arose: “Where does the line for lifecycle management responsibility get drawn?” Fully supported lifecycle management, anchored by the Release Certification Matrix (RCM), was perhaps the keystone value proposition delivered by Vblock. As customers adopted Vblock Systems and became comfortable placing workloads on them, they embraced infrastructure standardization and purchased more and more systems.
Managing these “islands” of Vblock Systems at scale became a new challenge and pain point for customers. These IT transformation visionaries quickly forgot how painful and complicated the care and feeding of traditional three-tier infrastructure had been. They soon expected the ease of use and simplicity they experienced with their first few Vblock Systems to continue as they scaled out to tens of systems. Additionally, key to the Vblock process was the collection of all required data for the Logical Configuration Survey (LCS), which defined the logical configuration of the Vblock. As important as this information was for the build process, it was not designed to be repeatably collected from the same organization. Customers raised this as an opportunity to be addressed.
VCE made it a priority to simplify the management of multiple Vblocks for customers. For large-scale data center deployments, the need to connect “islands of Vblocks” into flexible pools of resources sparked the creation of the Vscale architecture. Vscale is a data-center-wide network architecture that logically groups Vblocks and other pools of IT resources into zones. These zones are then managed under a common RCM, alleviating many of the challenges of managing systems across the data center.
Concurrently, VCE set out to create a software platform, known as VCE Vision, that simplified management of the Vblock elements through a single, easy-to-use interface. VCE Vision leveraged system-wide REST APIs, with a focus on being management-platform agnostic. Many of its features and functionality were eventually incorporated into VxBlock Central.
Meanwhile, in the General IT Industry, Broader Movements…
Whilst the success of the Vblock product line can never be disputed, changes in the IT market meant that a competitive technology started to appear with the potential to disrupt the new status quo of engineered systems: hyperconverged technology. The specific disruption was the virtualization of the storage layer — in effect, the introduction of a new class of storage product that was entirely software defined. The early pacesetter was a small San Jose-based startup called Nutanix. In response, VMware released its vSAN product and followed in 2014 with the EVO:RAIL architecture, inviting vendors to build out hardware solutions combining vSAN, compute, and networking with the VMware hypervisor. This led to the creation of the EMC-derived VSPEX Blue platform, which in turn led to the development of today’s industry and market-share leader, VxRail.
These developments increasingly shifted the emphasis in enterprise IT environments toward the Software Defined Data Center (SDDC), which in turn was a response to the other, more seismic shift happening across the broader industry: the rise and adoption of the public cloud as a replacement for on-premises infrastructure solutions.
AWS had established a highly credible cloud platform, which became the platform of choice for applications developed using cloud-native principles — i.e., applications that were lightweight, developed using agile software methodologies, and increasingly designed to deliver a mobile experience. Frequent application updates were the norm, and app usage patterns were unpredictable or highly variable (think advertising during the World Cup, Olympics, or Super Bowl). All it took was a credit card to get your infrastructure up and running — no tickets, no ITIL, no red tape, and no waiting for your request. True, your choice of configurations was more Henry Ford-esque (“Any color you want, as long as it’s black”), but developers gladly traded choice for speed and ‘good enough.’ “Shadow IT” was emboldened and arguably spread like wildfire.
However, mission-critical applications that required guaranteed performance SLAs, alongside bulletproof reliability and availability, remained the mainstay of on-premises solutions — particularly Converged Infrastructure, as hyperconverged technology had yet to prove it could deliver the functionality these applications required. In the mission-critical application environment, change was slow and carefully planned: ITIL was the established norm. Application updates were infrequent (“If it ain’t broke, don’t fix it” could have been the mantra) and usage patterns stable and predictable.
The stark differences in the operating models gave rise to the term “bi-modal IT” (source: Gartner) and led many IT organizations to embark on significant change programs with the specific aim of aligning resources to best support this bi-modal approach. Meanwhile, EMC had acquired Cisco’s stake in VCE, and following Dell’s acquisition of EMC in 2016, the Converged Platforms and Solutions Division (CPSD) offerings were integrated into the Dell EMC portfolio.
In February 2018, Dell EMC launched the next evolution of Vblock, the VxBlock 1000, which introduced a system-wide architecture supporting previously unseen levels of flexibility and configuration options while retaining the full RCM and lifecycle management benefits.
Bringing us to the present day, the VxBlock 1000’s perpetual architecture facilitates the continual evolution of storage arrays in line with the latest products available, including unstructured data solutions such as Isilon. VxBlock Central now provides the highest levels of insight and configurability, bringing management of a VxBlock 1000 as close to a single place as possible.
Vscale has evolved into a data-center-wide network architecture that supports all types of underlying infrastructure, allowing for the creation and lifecycle management of both converged and hyperconverged technologies. Standalone servers and storage pools can also be connected into a Vscale architecture, although their lifecycle management is performed separately.
At the time of writing, early 2021, the market for cloud infrastructure remains robust. Many enterprises have taken the view that a multi-cloud approach is the optimal architecture for supporting both traditional mission-critical monolithic applications and new, cloud-native applications. The final blog in this series will explore future trends and, drawing on lessons learned from the recent past, show how current data center investments remain relevant and valuable.