
Before You Go Over the Container Cliff with Docker, Mesos etc: Points to Consider

By Andrew Phillips

As a company making software for Continuous Delivery and DevOps at scale, at XebiaLabs we're pretty much always in discussions with users about the benefits and challenges of new development styles, application architectures, and runtime platforms. Unsurprisingly, many of these discussions right now focus on microservices on the application side, and on containers and related frameworks such as Docker, Kubernetes, Mesos, and Marathon on the platform side.

I’m personally really excited about the potential of microservices and containers, and typically recommend pretty emphatically that our users should research them. But I also add that doing research is absolutely not the same thing as deciding up front to go for full-scale adoption.

Given the incredibly rapid pace of change in this area, it's essential to develop a clear understanding of the capabilities of the technology in your environment before making any decisions: production is not usually a good arena for R&D.

Based on what we have learned from our users and partners that have been undertaking such research, our own experiences (we use containers quite a lot internally) and lessons from companies such as eBay and Google, here are six important criteria to bear in mind when deciding whether to move from research to adoption:

1. Genuine business need

Perhaps the most fundamental question that needs to be answered before deciding to adopt microservices or containers is whether there is a real business problem that needs to be solved…and that cannot satisfactorily be solved with your existing approaches or technologies.

Microservices and containers are new, fast-moving and still not very well understood, all of which represents a risk that needs to be outweighed by some concrete benefit for your teams and organization.

I can’t say it better than Etsy’s former principal engineer Dan McKinley:

[Consider] how you would solve your immediate problem without adding anything new. First, posing this question should detect the situation where the “problem” is that someone really wants to use the technology. If that is the case, you should immediately abort.

2. Engineering know-how

If you are clear that microservices and/or containers do indeed promise to solve a problem that you can’t address in other ways, check that you have access to expert platform engineering resources, because you will need them.

It's not just that most of the APIs and frameworks people are looking at are pretty much brand new: getting a container-based platform up and running in production means solving many "adjacent" problems that the current frameworks aren't even intended to address: optimizing networking, deciding on storage strategies, handling backups and failover, dealing with security, and so on.

3. Willingness to “learn as you go”

At present, there are many more questions around microservices and containers at any kind of production scale than there are readily accessible answers. Even if you have the right engineering expertise to handle these challenges, you should be prepared for a multi-year period of ongoing experimentation and learning.

At least some of the APIs and frameworks you initially pick will undergo significant, backwards-incompatible changes or even fall by the wayside entirely. You will also need to rip out and replace others that turn out not to be suitable or mature enough for your scenario. And as regards best practices for everything from operational procedures to app delivery patterns: be prepared to develop these yourself.

4. Microservices != containers

When we talk with users coming from a platform/operations angle, or with those who have heard about Docker or other technologies and want to dive in, we often find a perception that microservices and containers are “basically two sides of the same coin”, and that you need one to do the other.

I'd tend to agree that containers nudge you in the direction of making your deliverables smaller, and so tend to move you away from large, monolithic applications (although I have also seen plenty of multi-gigabyte container images). However, the opposite is definitely a misconception, in my view: it's perfectly possible to move towards a microservice architecture without using containers as an underlying runtime technology.

In fact, if you’re looking to “microservice-ize” existing applications and are not working in a greenfield environment, it may even make more sense to do so. Sticking with your existing runtime platform (you can easily run tens or hundreds of microservice processes on a server without wrapping them in containers, after all!) takes a big variable out of the “change equation” and so reduces the risk to your project.
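To make the point concrete, here's a minimal sketch of a microservice as nothing more than an ordinary OS process exposing an HTTP endpoint (the service name, endpoint and payload are made up for illustration; in the demo the "process" runs as a background thread so the example is self-contained):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import json, threading, urllib.request

class OrderHandler(BaseHTTPRequestHandler):
    """A toy 'order service': one endpoint, answered from local state."""
    def do_GET(self):
        if self.path == "/orders/open":
            payload = json.dumps({"open_orders": 3}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(payload)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request logging for the demo

# Bind to a free port and serve; a process supervisor (systemd, runit, ...)
# can manage tens or hundreds of such processes per server -- no container
# runtime involved.
server = HTTPServer(("127.0.0.1", 0), OrderHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/orders/open"
response = json.load(urllib.request.urlopen(url))
print(response)  # {'open_orders': 3}
```

The packaging and isolation questions don't disappear, of course, but they stay within the runtime platform you already know how to operate.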

5. Handling dependencies

A common definition for a microservice we often hear mentioned is an “independently-deployable unit”, and indeed it is good practice to design your microservices so they can start up successfully without requiring all kinds of other components to be available. But in the vast majority of cases, “no microservice is an island”: a single service may boot up and respond to certain API calls on its own, but in order to handle scenarios that are actually useful to the user, you typically need multiple services to be available and talking to each other.

For example, an order service should be able to start and tell you how many orders are open on its own, but if you actually want to simulate a user browsing through your catalogue, picking some items, completing a purchase and tracking an order through to completion, you'll need a whole bunch of services to be running.
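The order-service example above can be sketched in a few lines. This is illustrative only (the endpoints and the catalogue client are invented for the sketch): the local question is answerable from the service's own state, while the cross-service operation fails soft when its dependency is unreachable instead of crashing:

```python
def open_order_count(orders):
    """Answerable from local state alone -- no other service required."""
    return sum(1 for o in orders if o["status"] == "open")

def checkout(order, catalogue_lookup):
    """Needs the catalogue service; degrade gracefully if it's down."""
    try:
        prices = [catalogue_lookup(item) for item in order["items"]]
    except ConnectionError:
        return {"status": 503, "error": "catalogue unavailable, retry later"}
    return {"status": 200, "total": sum(prices)}

orders = [{"status": "open"}, {"status": "shipped"}, {"status": "open"}]
print(open_order_count(orders))  # 2 -- works with no dependencies running

def fake_catalogue(item):   # simulated healthy dependency
    return {"book": 12.5, "pen": 2.0}[item]

def down_catalogue(item):   # simulated dependency outage
    raise ConnectionError

print(checkout({"items": ["book", "pen"]}, fake_catalogue))  # 200, total 14.5
print(checkout({"items": ["book"]}, down_catalogue))         # 503, fail soft
```

The design point: "independently deployable" means the service boots and degrades sensibly on its own, not that useful end-to-end scenarios work without its neighbours.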

If you're looking to implement microservices using containers, the available frameworks provide increasing levels of support for this. Indeed, Kubernetes, Helios, Marathon, Fig (now Docker Compose) and other container orchestration tools were created largely to handle container dependencies and links.

Still, the state of the art in terms of runtime/microservice dependency management, and especially visualization, is way behind what we have for build-time dependencies (which, from what we're seeing, is one of the reasons many of our users are interested in the new dependency management features coming in XL Deploy). This is an area you will likely need to tackle yourself, at the very least by augmenting the capabilities of existing tools.
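As a hedged sketch of the kind of bookkeeping you may end up building yourself: declare which services each service must be able to reach, then derive a safe startup (or deployment) order from that graph. The service names and dependency graph here are entirely made up:

```python
# Requires Python 3.9+ for graphlib in the standard library.
from graphlib import TopologicalSorter

# service -> set of services it must be able to reach at startup
dependencies = {
    "frontend":  {"orders", "catalogue"},
    "orders":    {"database"},
    "catalogue": {"database"},
    "database":  set(),
}

# static_order() yields each service only after all of its dependencies,
# and raises CycleError if the declared graph is circular.
startup_order = list(TopologicalSorter(dependencies).static_order())
print(startup_order)  # e.g. ['database', 'orders', 'catalogue', 'frontend']
```

Real orchestration tools cover parts of this (readiness probes, links, depends-on), but cross-cluster views and visualization of a live dependency graph are largely still do-it-yourself territory.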

6. Beyond “hello world”

One of the main reasons why particularly Docker comes up in many of our recent conversations, above and beyond the general buzz, is that the “hello world” experience is truly great. Getting a sample application in your language of choice running in a container is a very simple, rewarding experience, and even taking the next steps of adding some tweaks is easy to get right.

However, getting real applications running in production in a container environment is a totally different thing, especially if you're moving towards microservices. Not only is building your own PaaS (which is effectively what you'll be doing) a hard engineering challenge, there's also a whole bunch of process-related questions that need to be addressed.

I talked about what we consider to be the most important questions in a previous blog post (there’s overlap with some of the topics discussed here). Coming up with approaches to deal with them should be part of your microservice and container research.

Summary

In short, I’d say that microservices and containers should definitely be on your tech research agenda (and we can hopefully help with some of the challenges you’ll encounter with the upcoming microservice- and container-related features in XL Test, XL Release and XL Deploy).

Before you decide to push ahead with any kind of adoption, ensure that you understand the challenges and are aware of the investment in time and resources that will be required…and ensure you have a genuine business need that justifies the effort and risk.

The post Before You Go Over the Container Cliff with Docker, Mesos etc: Points to Consider appeared first on XebiaLabs.


