Hyper Converged Infrastructure, a Future Death Trap

Hyper convergence is not meant for complex, performance-sensitive environments.

In the late 1990s, storage and networking were split out of compute for a reason: both require specialized processing, and it makes little sense for every general-purpose server to do that job. It is better handled by a dedicated group of specialized devices. The most critical element in the entire data center infrastructure is the data, and it is often safer to keep that data on special devices with the required level of redundancy than to spread it across the whole data center. Hyper convergence, however, emerged for the noble cause of easy deployment in very small branch-office scenarios, since a traditional SAN is always complex to set up and operate. The real problem starts when we attempt to replicate this layout in a large-scale environment with transactional workloads. Three predominant issues can hit hyper-converged deployments hard, and together they can spell a death trap. Sophisticated IT houses know these problems and stay away from hyper convergence, but others can fall prey to this hype cycle.

Performance Nightmares
Everybody jumped onto virtualization well before the complete virtualization stack was ready with respect to compute, network and storage, and many struggled to isolate their problems among these three components. The more perceptive realized that storage-level I/O contention was the root cause of most of their performance issues and went looking for a new class of storage products that guarantee performance at the volume and VM level. Now imagine the magnitude of the complexity when all three components are put together in the form of hyper convergence and each I/O has to touch multiple general-purpose servers to complete one application-level transaction in a loaded environment. Some of these issues may not surface while the infrastructure is lightly loaded.
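To make that fan-out concrete, here is a minimal back-of-the-envelope model in Python. The replication factor, latencies and CPU overhead are illustrative assumptions, not measurements of any particular product; the point is only the shape of the arithmetic.

# Back-of-the-envelope model of write fan-out in a hyper-converged cluster.
# All numbers below are hypothetical illustrations, not vendor data.

REPLICATION_FACTOR = 3   # each write is committed on 3 nodes (assumed)
LOCAL_WRITE_US = 100     # microseconds: local SSD commit (assumed)
NETWORK_HOP_US = 50      # microseconds: one east-west network hop (assumed)
CPU_OVERHEAD_US = 30     # microseconds: storage stack work stolen from app CPUs (assumed)

def write_latency_us(replicas: int) -> float:
    """Latency of one synchronous write acknowledged by all replicas.
    Remote replicas commit in parallel, so with uniform nodes the
    slowest remote copy bounds the latency; extra replicas mainly
    add load, not latency, until the network or CPUs saturate."""
    local = LOCAL_WRITE_US + CPU_OVERHEAD_US
    if replicas <= 1:
        return local
    remote = 2 * NETWORK_HOP_US + LOCAL_WRITE_US + CPU_OVERHEAD_US
    return max(local, remote)

def cluster_write_load(app_iops: int, replicas: int) -> int:
    """Total back-end write operations the cluster must absorb."""
    return app_iops * replicas

print(f"per-write latency: {write_latency_us(REPLICATION_FACTOR):.0f} us")
print(f"100k app IOPS becomes {cluster_write_load(100_000, REPLICATION_FACTOR):,} back-end writes/s")

Even in this simplified model, every application write roughly doubles in latency and triples the back-end load; under a heavily loaded cluster those general-purpose servers are serving storage, network and compute demands at once.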

To make things worse, during an economic downturn, which is arguably overdue, data doesn't stop growing but the IT budget does. IT houses tend to load their existing hardware infrastructure to the maximum during such times. And just as the performance issues of this misfit architecture pop up, further cost cutting kicks in to reduce IT headcount. Isn't that a real death trap that CIOs of cloud providers and enterprises need to avoid?

Hardware Refresh
It is common for storage vendors to replace just the storage head as part of the refresh cycle while the data stays intact in separate shelves. That refresh becomes complex when the data is distributed across internal disks in servers throughout the data center. The refresh cycles for compute and storage also differ: disks stay in service longer than typical servers. In the hyper-converged case, everything must be replaced at once, which requires a tremendous amount of IT staff hours. It is worse still if this comes in the middle of an economic crisis.
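A small sketch of the arithmetic, under assumed and deliberately simple lifetimes, shows why coupling the two refresh cycles hurts.

# Rough comparison of refresh events over a 15-year horizon.
# The lifetimes and horizon are illustrative assumptions.

HORIZON_YEARS = 15
SERVER_LIFE = 3   # years: typical compute refresh cadence (assumed)
DISK_LIFE = 5     # years: disk shelves outlive servers (assumed)

def refreshes(life_years: int, horizon: int = HORIZON_YEARS) -> int:
    """Number of refresh events over the horizon."""
    return horizon // life_years

# Disaggregated: storage heads swap while data shelves stay put,
# so data migrations happen only on the slower disk cadence.
disaggregated_migrations = refreshes(DISK_LIFE)

# Hyper converged: disks live inside the servers, so every server
# refresh forces the data on those nodes to be evacuated and rebuilt.
hci_migrations = refreshes(SERVER_LIFE)

print(f"data migrations over {HORIZON_YEARS}y, disaggregated:   {disaggregated_migrations}")
print(f"data migrations over {HORIZON_YEARS}y, hyper converged: {hci_migrations}")

Under these assumptions a hyper-converged shop performs five full data evacuations where a disaggregated one performs three, and each evacuation is staff-intensive work.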

Storage Expansion
If there is a need for storage expansion, customers in a hyper-converged environment end up buying expensive servers just to get more disks. Some web-scale companies are already facing this problem and are moving storage back out of the server.
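The cost asymmetry is easy to illustrate. The capacities and prices in this sketch are made-up placeholders, but the shape of the comparison holds whenever a full node must be bought for what is really a disk problem.

# Illustrative cost of adding 100 TB of raw capacity.
# All capacities and prices are hypothetical placeholders.

import math

TB_NEEDED = 100

HCI_NODE_TB = 20        # usable TB per hyper-converged node (assumed)
HCI_NODE_COST = 30_000  # node = CPU + RAM + NICs + licenses + disks (assumed)

SHELF_TB = 50           # TB per plain disk shelf / JBOD (assumed)
SHELF_COST = 15_000     # disks and enclosure only (assumed)

hci_cost = math.ceil(TB_NEEDED / HCI_NODE_TB) * HCI_NODE_COST
shelf_cost = math.ceil(TB_NEEDED / SHELF_TB) * SHELF_COST

print(f"hyper converged: {TB_NEEDED} TB costs ${hci_cost:,} "
      f"(and adds CPU/RAM you may not need)")
print(f"disaggregated:   {TB_NEEDED} TB costs ${shelf_cost:,}")

With these placeholder numbers the hyper-converged route costs five times as much, and the surplus CPU and memory often drag additional hypervisor and software licensing along with them.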

At the outset, hyper convergence looks like an attractive option that seemingly provides a lot of flexibility. In reality, it comes with many limitations and curtails the flexibility to grow resources independently of one another. In addition, a performance nightmare is bound to hit once the system gets loaded.

More Stories By Felix Xavier

Recognized as one of the top 250 MSP thought and entrepreneurial leaders globally by MSPmentor, Felix Xavier has more than 15 years of development and technology management experience. With the right blend of expertise in both networking and storage technologies, he co-founded CloudByte.

Felix has built many high-energy technology teams, re-architected products and developed core features from scratch. Most recently, Felix helped NetApp gain a leadership position in storage array-based data protection by driving innovations around its product suite. He has filed numerous patents with the US Patent Office around core storage technologies.

Prior to this, Felix worked at Juniper, Novell and IBM, where he handled networking technologies, including LAN, WAN and security protocols and Intrusion Prevention Systems (IPS). Felix holds master's degrees in technology and business administration.
