PubSub and Other Disruptive Emerging Network Models

#IoT #OpenStack #SDN

When people write about software-defined architectures being "disruptive" to the network, they're doing a disservice to just how much change is occurring under the hood in the engine that drives today's businesses. The notion of separating control and data planes is superficial in that it describes a general concept, and it isn't really all that radical a change, if you think about it.

The control and data planes have always been separate. We have, since the need for web-scale networks came about, implemented separate topological (and usually physical) networks specifically to segregate control traffic from the data path. The reasons for this are many: to keep management (control) traffic from interfering with the delivery of applications (and vice versa), to enable a model in which control over the critical path for applications could be secured, and to ensure access to necessary control functions in the face of failure or attack.

What's new with software-defined architectures is not just the logical separation but the physical decoupling and change in component responsibility. In traditional networks there is a logically separate control plane, but it is distributed; it resides on each physical component. In software-defined networks it is physically separate but it is centralized; control responsibility resides in a single component, the "controller".

Now, OpenStack and emerging models for scaling the systems that will be responsible for managing communication with and for the Internet of Things (like MQTT) use a similar control model, but it's not as active as the software-defined architectural model; it's a more passive one. That's the nature of PubSub imposing itself on the network.

PubSub Control Model

PubSub (publish/subscribe) is a familiar model to application developers. It's a middleware staple that has been used for a very long time to distribute messages to a variable set of systems. In a nutshell, PubSub is based on the notion of a "queue" to which authorized components can publish events or messages of interest and to which interested components can subscribe. Events or messages have a lifetime (a TTL) and eventually expire. In the interim, subscribed components are expected to check the queue periodically; they poll for events or messages.

The queue itself is much like a switch or router's queue, except that a message remains in the queue, available to every subscriber that polls for it, until its TTL runs out and the message expires.

PubSub is passive; that is, it does not actively distribute messages. It merely serves as a kind of centralized repository, giving the components that need it access to relevant information about the state of applications and/or the network.
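
To make the passive model concrete, here's a minimal sketch in Python. The names (PubSubQueue, publish, poll, the "app.launched" topic) are invented for illustration; real messaging systems differ in their APIs, but the mechanics are the same: publishers deposit messages with a TTL, and subscribers poll for them on their own schedule.

```python
import time

class PubSubQueue:
    """A toy pub/sub queue: messages live in the queue until their TTL expires."""
    def __init__(self):
        self._messages = []  # list of (topic, payload, expires_at)

    def publish(self, topic, payload, ttl_seconds):
        # Publishers deposit a message with a TTL; the queue holds it until then.
        self._messages.append((topic, payload, time.time() + ttl_seconds))

    def poll(self, topic):
        # Subscribers pull. The queue never pushes anything to anyone;
        # a message stays available to every subscriber until it expires.
        now = time.time()
        self._messages = [m for m in self._messages if m[2] > now]
        return [payload for t, payload, _ in self._messages if t == topic]

queue = PubSubQueue()
queue.publish("app.launched", {"instance": "web-42", "ip": "10.0.0.42"}, ttl_seconds=30)

# Each interested component checks the queue periodically, on its own schedule.
for event in queue.poll("app.launched"):
    print("subscriber saw:", event)
```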

Centralized Control Model

OpenFlow-based SDN, by contrast, is active. That is, it not only serves as a centralized repository for the state of applications and/or the network, but it actively distributes messages to components based on events. For example, a controller might receive a message from another system indicating the launch of a new application instance. That event triggers a series of actions on the controller that includes informing the affected network components of configuration changes. In a passive, PubSub model, the network components themselves might be polling for such an event and, upon receiving one, would initiate the appropriate configuration changes themselves.

We can simplify the description of the differences even more: a controller-based architecture uses a push model, while a pubsub-based architecture uses a pull model. What this doesn't illustrate well is that in a push model, the centralized controller must know how to communicate the desired changes to each and every component it is controlling. That's one of the reasons the original models standardized on OpenFlow and were tightly focused on L2-4 stateless networking: it could be easily standardized down to a common forwarding table. While different components might internalize that table differently, the basic information was always the same: IP addresses, ports, and actions.
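
Here's the push model in the same toy Python, with a hypothetical apply_rule method standing in for a real control protocol; an actual controller would speak OpenFlow to its switches rather than call a method. The point is the shape: an event arrives at the controller, and the controller dictates a forwarding-table change to every component it manages.

```python
class Component:
    """A managed switch or router; changes are pushed into it from outside."""
    def __init__(self, name):
        self.name = name
        self.forwarding_table = []

    def apply_rule(self, rule):
        # Execution happens here, but only when the controller says so.
        self.forwarding_table.append(rule)
        print(f"{self.name} installed rule: {rule}")

class Controller:
    """Centralized control *and* centralized execution: it dictates the change."""
    def __init__(self, components):
        self.components = components

    def on_event(self, event):
        # A common forwarding-table entry: IP address, port, and action.
        rule = {"ip": event["ip"], "port": event["port"], "action": "forward"}
        for component in self.components:
            component.apply_rule(rule)

controller = Controller([Component("switch-1"), Component("switch-2")])
controller.on_event({"ip": "10.0.0.42", "port": 80})  # e.g. a new app instance
```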

As we move up the stack into L4-7 stateful networking, however, this model becomes more burdensome because of the complexity of rule sets and differences in policy models across such a broad set of networking domains. Hence the plug-in support in controllers like OpenDaylight for "other" control protocols. But the basic premise of the model remains the same, regardless of the control protocol: the centralized controller dictates the changes to all components. It pushes those changes to the network. Both control and execution are centralized. The controller tells components to change their configuration.

PubSub centralizes control but decentralizes execution. The control plane is still centralized; there is one authoritative system responsible for disseminating change across the network, but each individual component (or domain controller, but we'll get to that in a minute) is responsible for executing the appropriate changes based on its configured policies and services. A pubsub controller never tells a component "change this now"; that's up to the individual components (or domain controller).
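
The pull model looks like this, reusing the toy PubSubQueue from the earlier sketch. The queue holds the authoritative state, but each component decides for itself when to poll and how to act on what it finds, according to its own configured policy.

```python
class PollingComponent:
    """A managed device that pulls events and applies its own changes."""
    def __init__(self, name, queue):
        self.name = name
        self.queue = queue  # the toy PubSubQueue from the earlier sketch
        self.forwarding_table = []

    def tick(self):
        # Nobody tells this component "change this now"; it polls and then
        # decides, per its own policy, what change (if any) to make.
        for event in self.queue.poll("app.launched"):
            rule = {"ip": event["ip"], "action": "forward"}
            if rule not in self.forwarding_table:
                self.forwarding_table.append(rule)
                print(f"{self.name} applied its own change: {rule}")

# component = PollingComponent("adc-1", queue)
# component.tick()  # called periodically, e.g. from a scheduler
```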

The Integrated Control Model

To make things even more confusing (and disruptive), these models may be used simultaneously. A software-defined architecture might be based on a centralized control model with domain controllers for specific networking functions (like security and application delivery) integrated via a pubsub-based model.

[Figure: the integrated control model]

This is where we start seeing models that combine emerging technologies like OpenStack and SDN architectures. OpenStack manages at the data center level, and at its heart is a pubsub model that can be used by domain controllers (stateless L2-4 SDN, stateful L4-7 SDN, etc.) to receive notification of changes in the network and subsequently push those changes, using the appropriate control protocols, to the components they manage.
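
In the integrated model, a hypothetical domain controller is simply a subscriber on one side and a controller on the other; in the terms of the earlier sketches, it polls the data-center-level queue and pushes the resulting changes to the components in its own domain.

```python
class DomainController:
    """Bridges the two models: pulls from the pubsub bus, pushes to devices."""
    def __init__(self, name, queue, components):
        self.name = name
        self.queue = queue            # e.g. an OpenStack-style message bus
        self.components = components  # the devices in this controller's domain

    def tick(self):
        # Pull side: learn of changes passively, like any other subscriber.
        for event in self.queue.poll("app.launched"):
            rule = {"ip": event["ip"], "action": "forward"}
            # Push side: dictate the change to its own domain, actively,
            # using whatever control protocol that domain speaks.
            for component in self.components:
                component.apply_rule(rule)  # Component from the push sketch

# l4_7 = DomainController("l4-7-adc", queue, [Component("adc-1")])
# l4_7.tick()
```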

Needless to say, the term "disruptive" is really inadequate to describe the level of change in the network required to support either model (or both). Both require significant changes not just to the network itself but to the way in which the network is fundamentally provisioned and managed. It's not just a new CLI or management console; these models dramatically change the design and management of networks.

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
