What’s to Come, From What We’ve Learned

Did you miss the first two parts of our CI history series? Catch up on Part 1 and Part 2 today.

Pandemic Accelerates Infrastructure Demand

When the COVID-19 pandemic began, enterprise IT came under huge pressure to deliver the usual suite of services, but suddenly within a highly distributed organizational model. In addition to maintaining essential applications and services, projects appeared almost overnight that demanded a new, secure approach to remote working. Pandemic lockdowns around the world accelerated remote work and demonstrated that most workers can be productive outside the office, provided the required IT infrastructure is in place.

Looking ahead, offices of the next decade will transform. Of course, many employees (depending on their roles and industries) will still need to work on-site. However, many other roles will be reassessed, opening up work-from-home and other remote options.

Another key to enabling the remote, dispersed workforce is using software to automate more IT tasks. Storage, server, networking, and virtualization technologies increasingly use artificial intelligence (AI) and machine learning (ML) techniques to optimize performance, predict usage, and dynamically allocate resources. The growing volume of operational data collected and analyzed also needs to be presented in a holistic, visual way. Dell Technologies software such as Wavefront and CloudIQ illustrates important advances in this arena.
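
To make “predict usage” concrete, here is a deliberately toy sketch of the kind of technique such tools build on: fitting a trend to historical utilization samples and forecasting near-term demand. The data, model, and threshold are invented for illustration; this is not Wavefront or CloudIQ code.

    # Toy illustration of usage forecasting (Python). Data, model, and
    # threshold are invented for illustration; not Wavefront or CloudIQ code.
    import numpy as np

    hours = np.arange(24)                                     # last 24 hourly samples
    cpu_util = 40 + 0.8 * hours + np.random.normal(0, 2, 24)  # synthetic CPU %

    # Fit a simple linear trend and forecast the next four hours.
    slope, intercept = np.polyfit(hours, cpu_util, deg=1)
    for h in range(24, 28):
        predicted = slope * h + intercept
        print(f"hour +{h - 23}: predicted CPU {predicted:.1f}%")
        if predicted > 55:  # illustrative threshold
            print("  -> a real tool would pre-allocate capacity here")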

The Importance of Infrastructure Automation

Supporting the evolving workplace will take a strategic balance between on-premises private cloud and a broader, enterprise-wide multi-cloud strategy. One key consideration: does the planned technology support forthcoming application development or line-of-business investment? Deploying a blend of on-premises and public cloud solutions helps mitigate very real concerns about data privacy, regulation, and compliance. It’s now well understood that on-premises technologies can mimic the experience and capabilities of the public cloud and, in many cases, be delivered at a comparable or lower cost.

Now that many IT organizations are focusing on both data and enterprise data management, they’re looking at how a multi-cloud approach can drive the best results when engineering data warehouses, data lakes, and associated repositories. Ongoing innovations in VxBlock converged infrastructure show why VxBlock is well positioned to provide the bedrock services IT professionals rely on to deliver these increasingly needed capabilities.

VxBlock Systems have always been at the leading edge of innovation in private cloud adoption. Upgradability and flexibility, achieved while adhering to a rigorous set of engineering standards, definitively set the tone for future enhancements. VxBlock is uniquely designed to accommodate traditional technologies on the same platform as modern, performance-demanding workloads that increasingly require large memory footprints, very low-latency storage, and high levels of automation. VxBlock 1000 is architected to facilitate upgrades to both the LAN and SAN fabrics – multigigabit LAN and NVMe, respectively – as well as cost-effective migration of components from previous VxBlock models.

Converged infrastructure was ground-breaking when it was introduced a decade ago, and it still delivers foundational services on which many IT applications and workloads run. Without CI there would be no HCI, and many organizations would have found private cloud operating models very difficult to run and manage. In conclusion, we believe CI will continue to be a workhorse for forward-thinking IT departments for many years to come.

Explosive Growth of Vblock, 2011-2016

VCE = The Virtual Computing Environment Company, not VMware, Cisco, and EMC! Really!

Did you miss Part 1 of our history series? Check it out here.

From its inception, VCE – and with it Vblock – was a highly disruptive offering. As such, it took some time for these disruptions to gain traction in the market and definitively answer the question, “Can they cross the chasm?” (h/t Geoffrey Moore).

By early 2012, the case had been made for Vblock and significant traction had been gained within Fortune 100 companies, fueling explosive revenue growth. From 2012 to 2015, VCE doubled revenues every year for three years, setting a record at the time for the fastest IT company to go from $0 to $1B in revenue. VCE created the market for Converged Infrastructure and was the thought and execution leader in that space. In effect, VCE and Vblock were the Converged Infrastructure market.

Growth in customer adoption and acceptance allowed VCE to invest further in the platform. The 300 and 700 series were introduced, delineated by the associated EMC storage array coupled with Cisco UCS compute. Soon, the 500 series became important as the economics and reliability of flash-based storage systems were proven to be enterprise-grade (XtremIO being the EMC flagship in this space). Then, in March 2015, VxBlock was introduced, offering VMware NSX virtual networking alongside the already supported Cisco ACI virtual networking.

Throughout this period, VCE balanced strict standardization against customer requests for variation and flexibility in their mission-critical infrastructure. A recurring question arose: “Where does the line for lifecycle management responsibility get drawn?” Fully supported lifecycle management, anchored by the Release Certification Matrix (RCM), was perhaps the keystone value proposition delivered by Vblock. As customers adopted Vblock Systems and became comfortable placing workloads on them, they purchased more and more as they embraced infrastructure standardization.

Managing these “islands” of Vblock Systems at scale became a new challenge and pain point for customers. These IT transformation visionaries quickly forgot how painful and complicated the care and feeding of traditional three-tier infrastructure had been. They expected the ease of use and simplicity experienced with their first few Vblock Systems to continue as they scaled out to tens of systems. Additionally, key to the Vblock build process was the collection of all required data for the Logical Configuration Survey (LCS), which defined the logical configuration of each Vblock. As important as this information was for the build process, the survey was not designed to be collected repeatedly from the same organization. This became a customer-raised opportunity to be addressed.

VCE made it a priority to simplify the management of multiple Vblocks. For large-scale data center deployments, connecting these “islands of Vblocks” into flexible pools of resources sparked the creation of the Vscale architecture. Vscale is a data center network-wide architecture that logically groups Vblocks and other pools of IT resources into zones. These zones are then managed under a common RCM, alleviating many of the challenges of managing systems across the data center.

Concurrently, VCE set out to create a software platform that simplified management of the Vblock elements through a single, easy-to-use interface, known as VCE Vision. VCE Vision leveraged system-wide REST APIs, with a focus on being management-platform agnostic. Many of its features and functions were eventually incorporated into VxBlock Central.
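
As a rough illustration of what consuming a management-platform-agnostic REST API looks like in practice, the sketch below polls a health endpoint and prints component status. The host, path, and JSON fields are invented for the example; this is not the actual VCE Vision or VxBlock Central API.

    # Hypothetical sketch of polling a converged system's REST health endpoint
    # (Python). The URL and response fields are invented for illustration;
    # they are not the actual VCE Vision or VxBlock Central API.
    import requests

    BASE_URL = "https://vision.example.internal/api"  # placeholder host

    def print_component_health(session):
        resp = session.get(f"{BASE_URL}/components/health", timeout=10)
        resp.raise_for_status()
        for component in resp.json().get("components", []):
            print(f"{component['name']}: {component['status']}")

    if __name__ == "__main__":
        with requests.Session() as s:
            s.headers["Accept"] = "application/json"  # plain REST: any client works
            print_component_health(s)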

Meanwhile, in the General IT Industry, Broader Movements…

While the success of the Vblock product line can never be disputed, changes in the IT market meant that a competitive technology appeared with the potential to disrupt the new status quo of engineered systems: hyperconverged technology. The specific disruption was the virtualization of the storage layer – effectively, the introduction of a new class of storage product that was entirely software-defined. The early pace-setter was a small San Jose-based startup called Nutanix. In response to Nutanix, VMware released its vSAN product and followed in 2014 with its EVO:RAIL architecture, inviting vendors to build out a hardware solution combining vSAN, compute, and network with the VMware hypervisor. This led to the creation of the EMC-derived VSPEX BLUE platform, which in turn led to the development of today’s industry and market-share leader, VxRail.

These developments increasingly shifted the emphasis in enterprise IT toward the Software-Defined Data Center (SDDC), itself a response to the other, more seismic shift happening across the broader industry: the rise and adoption of the public cloud as a replacement for on-premises infrastructure solutions.

AWS had established a highly credible cloud platform, which became the platform of choice for applications developed using cloud-native principles – i.e., applications that were lightweight, developed using agile software methodologies, and increasingly designed to deliver a mobile experience. Frequent application updates were the norm, and app usage patterns were unpredictable or highly variable (think advertising during the World Cup, Olympics, or Super Bowl). All it took was a credit card to get your infrastructure up and running – no tickets, no ITIL, no red tape, and no waiting for your request. True, your choice of configurations was more Henry Ford-esque (à la “Any color you want, as long as it’s black”), but developers gladly traded choice for speed and ‘good enough.’ “Shadow IT” was emboldened and arguably spread like wildfire.

However, mission-critical applications that required guaranteed performance SLAs, alongside bulletproof reliability and availability, remained the mainstay of on-premises solutions – particularly converged infrastructure, as hyperconverged had yet to prove it could deliver the functionality these applications required. In the mission-critical application environment, change was slow and carefully planned: ITIL was the established norm. Application updates were infrequent (“If it ain’t broke, don’t fix it” could have been the mantra) and usage patterns stable and predictable.

The stark differences in the operating models gave rise to the term “bi-modal IT” (source: Gartner) and led many IT organizations to embark on significant change programs specifically aimed at aligning resources to support this bi-modal approach. Meanwhile, EMC had taken full control of VCE, which became its Converged Platforms and Solutions Division (CPSD); when Dell acquired EMC in 2016, CPSD’s offerings were integrated into the Dell EMC portfolio.

In February 2018, Dell EMC launched the next evolution of Vblock, the VxBlock 1000, which introduced a system-wide architecture supporting levels of flexibility and configuration options – with full RCM and lifecycle management benefits – never seen before.

Bringing us to the present day, VxBlock 1000’s perpetual architecture facilitates the continual evolution of storage arrays in line with the latest available products, including unstructured data solutions such as Isilon. VxBlock Central now provides the highest levels of insight and configurability, bringing management of a VxBlock 1000 as close to a single place as possible.

Vscale has evolved into a data center-wide network architecture that supports all types of underlying infrastructure, allowing for the creation and lifecycle management of both converged and hyperconverged technologies. Standalone servers and storage pools can also be connected into a Vscale architecture, although their lifecycle management is performed separately.

At the time of writing, early 2021, the market for cloud infrastructure remains robust. Many enterprises have taken the view that a multi-cloud approach is the optimal architecture for supporting both traditional mission-critical monolithic applications and new, cloud-native applications. The final blog in this series explores where we see future trends and, based on our experience, how lessons learned from the recent past show that current data center investments remain relevant and valuable.

As members of the CONVERGED User Group, you’re likely well versed in the many technical and business benefits of converged infrastructure (CI) and hyperconverged infrastructure (HCI) – you work with them every day. Last year, CI hit its 10-year milestone. Some of our members have been around CI the whole time, helping to transform their IT shops and drive the growth of CI and HCI; others may have only recently dipped their toes in the CI/HCI waters.

The CONVERGED User Group Board of Directors thought it might be helpful to look back over the past decade to see how CI has evolved. For some, this may be a brief glimpse down memory lane; for others, it may help explain how we got to where we are and perhaps provide insight into where we are (or should be) going. For all, we hope this blog series is informative and fun. We welcome your thoughts and remembrances!

This first blog in the series looks back to the 2009-2011 timeframe – the early years of CI – and the internal-culture challenge early adopters faced as their IT leadership sought to realize the promise of CI in their organizations.

If you will, think back to 2009. The DJIA spent most of the year below 10,000, Amazon stock was in the 80s, AWS was just three years old, and Facebook was still three years away from announcing its IPO. The iPhone was in its second year, and VMware had entered its second decade, having just released ESXi 4.0 and vSphere. Oh, and Cisco, the networking behemoth, had just introduced its Unified Computing System, aka Cisco UCS. Wow! That seems more like eons than a decade ago.

Many IT shops at this time were starting to adopt virtualization widely – “P to V” was “the new black” or, anachronistically, the “new cloud.” However beneficial this shift to virtualization was for operations, it also added a new layer of complexity to the standard siloed 3-tier (compute, network, storage) data center operations model.

VMware and Virtualization

Prior to the emergence of CI, VMware introduced the ability to virtualize the compute layer. It may be commonplace now, but in 2009, virtualizing x86 processors was transformational. Pre-VMware, the ability to run multiple non-competing applications on a single processor – achieving higher utilization rates and shifting capacity where and when it was needed – was limited to IBM mainframes, certainly not Intel-based machines. With VMware, companies could exploit virtualization at the compute layer and begin exploring its impact on their storage and networking needs.


Vblock – VCE’s ‘Data Center in a Box’

2019 marked the 10th anniversary of Converged Infrastructure, a data center category created by VCE. With Vblock, VCE (originally Acadia) introduced the concept of converged infrastructure (CI) as an all-in-one alternative to the traditional 3-tier architecture.

Vblock represented a “data center in a box” – the first step toward resolving three critical challenges inherent in the 3-tier model:

  • Complexity – Each tier in the 3-tier model is managed by its own dedicated specialist resources. These specialized teams are generally siloed and their decision-making insular; decisions and actions are often taken without a holistic understanding of the potential impact on the full system, i.e., the data center.
  • Performance constraints – e.g., the network can become a bottleneck between applications residing on a server and the data associated with those applications at the storage level.
  • Resource efficiency – Enterprises using a 3-tier architecture have to build data center capacity great enough to meet the highest level of demand, which may occur seasonally or even be associated with a single-day event. Day-to-day operations may require only a fraction of that capacity, leaving the organization paying for a large volume of unused capacity (see the short sketch after this list).
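
As a back-of-the-envelope illustration of that third point, the sketch below uses purely hypothetical numbers to show how capacity sized for a seasonal peak translates into low day-to-day utilization:

    # Toy illustration of peak-sized capacity vs. day-to-day demand (Python).
    # All numbers are hypothetical.
    peak_demand_vms = 2000     # capacity sized for the busiest day of the year
    typical_demand_vms = 500   # ordinary day-to-day load

    utilization = typical_demand_vms / peak_demand_vms
    print(f"Typical utilization: {utilization:.0%}")             # 25%
    print(f"Idle but paid-for capacity: {1 - utilization:.0%}")  # 75%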

Vblock was a revolutionary concept, providing organizations with a pre-architected, pre-engineered, pre-configured, self-contained, highly available, and reliable enterprise-grade infrastructure built, supported, and maintained by a single provider. It enabled the compute, network, and storage components to work in harmony, freeing IT teams from the daily effort of “keeping the lights on.” Moreover, VCE’s Release Certification Matrix (RCM) rigorously examined and regression-tested all manner of network, compute, storage, and virtualization patches and software releases to ensure interoperability. The RCM ensured, and continues to ensure, that systems remain highly available during patching, improving risk management. Enterprising IT teams took advantage of the newly available cycles to bring innovation to their operations and the business.
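
Conceptually, an interoperability matrix like the RCM can be thought of as a lookup of certified version combinations: either a deployed stack matches a combination that was tested together, or it does not. The sketch below is a simplified, hypothetical model of that idea – the release names and version strings are invented, and this is not the actual RCM tooling or data:

    # Simplified, hypothetical model of an interoperability matrix (Python).
    # Release names and version strings are invented for illustration;
    # this is not the actual RCM tooling or data.
    CERTIFIED_RELEASES = {
        "RCM-A": {"compute": "4.1(2)", "storage": "8.2",
                  "network": "9.3(5)", "virtualization": "7.0 U2"},
        "RCM-B": {"compute": "4.2(1)", "storage": "8.4",
                  "network": "9.3(7)", "virtualization": "7.0 U3"},
    }

    def certified_release(deployed):
        """Return the certified release matching the deployed stack,
        or None if this combination was never tested together."""
        for name, combo in CERTIFIED_RELEASES.items():
            if combo == deployed:
                return name
        return None

    deployed = {"compute": "4.2(1)", "storage": "8.4",
                "network": "9.3(7)", "virtualization": "7.0 U3"}
    print(certified_release(deployed) or "Uncertified mix: upgrade as a set")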

A Barrier to Acceptance – The Cultural Challenge

As is typical of IT organizations pioneering innovation, the early adopters of CI and virtualization faced internal resistance. Where IT transformation leaders and visionaries saw strategic, operational, and financial advantages, many established infrastructure teams saw disruption, an unwelcome challenge to the status quo, and a threat to their jobs.

Some on those teams were comfortable working in their compute, network, or storage silos, free from dependence on the other silos. “To them, CI was an internal global attack. It was an attack from the infrastructure perspective because now the infrastructure would be architected like a mainframe, one single system,” says Ignacio Borrero, Technical Marketing Engineer, CI & HCI for Dell Technologies. “They viewed VMware in the same way. The storage guy would now likely have a virtualization administrator managing the instruments, and the network guys didn’t want the virtualization guy making decisions about the network.”

Suddenly, two factors – CI and virtualization – were putting pressure on people living very comfortably in their own spaces. “Some of those people instantly saw the benefits and reoriented themselves. They understood that they would be able to advance their careers and add higher-level value to their organization in a more automated and efficient CI environment. Others felt at risk; they saw their livelihood going away,” Borrero says.

It’s fair to say the 2009-2011 period was characterized by this culture gap between visionaries who understood where the industry was heading and those in the trenches who were change-averse. Without a culture change, early adopters would never realize the benefits of a virtualization-based data center or CI. Even ten years later, the challenges remain eerily similar: though virtualization of compute is widely adopted today, virtualized storage, virtualized networking, CI, and now hyperconverged infrastructure (HCI) continue to face cultural transformation challenges.

Even as the global CI market is predicted to grow by USD 25.15 billion during the 2020-2023 period, according to global technology research and advisory company Technavio, a large percentage of the CI/HCI market remains untapped.

“Human beings are reluctant to change regardless of how minor that change might be, so it’s certainly understandable for organizations to find it difficult to introduce change to complex data center infrastructure,” Borrero says. “They’re asking if the benefits are worth the effort. They don’t want to take risks, and prefer to move incrementally. Especially in those early years, people I talked to would express interest, but they wanted to wait a year or more to see if CI was real. Only then would they feel comfortable advocating it internally.”

In the second blog in this series, we will examine the 2011-2016 timeframe – the VCE days, pre-Dell acquisition.

Welcome to the CONVERGED Connect blog! In our community, we are dedicated to ensuring our members are positioned for success, with the most relevant information and use cases to help them grow, optimize, and strategize using Dell EMC’s converged and hyperconverged solutions. Our intention is to use this platform to highlight our members and their experiences.

Here’s what you can expect from the CONVERGED blog:
• Board member interviews
• Peer testimonials
• Industry blogs
• Videos
• And much more!

The CONVERGED User Group’s goal is that our content will add a new dimension to your CONVERGED experience.

If you would like to share your story or feedback with the CONVERGED team, please contact memberservices@convergedusergroup.com.
