Using Docker For a Complex "Internet of Things" Application

The goal of any DevOps solution is to optimize multiple processes in an organization

View Aater Suleman's @ThingsExpo session here

The goal of any DevOps solution is to optimize multiple processes in an organization. Success does not require that every step of the strategy be automated, but it does require that processes be put in place to handle the essential items.

Flux7 is a consulting group with a focus on helping organizations build, maintain and optimize DevOps processes. The group has a wide view across DevOps challenges and benefits, including:

  • The distinct challenge of a skills shortage in this area and how organizations are coping to meet demands with limited resources.
  • The technical requirements: From stacks to scripts, and what works.
  • The practical and political challenges: beyond the stacks, the human element is a critical success factor in DevOps.

Recently at Flux7, we developed an end-to-end Internet of Things application that receives sensor data and provides reports to service-provider end users. Our client then asked us to support multiple service providers for his new business venture. We knew that rearchitecting the application to incorporate major changes would prove both time-consuming and expensive for our client, and it would have required a far more complicated, rigid and difficult-to-maintain codebase.

We had been exploring the potential of using Docker to set up Flux7's internal development environments, and, based on our findings, believed we could use it to avoid a major application rewrite. So, we decided to use Docker containers to provide quick, easy, and inexpensive multi-tenancy by creating isolated environments for running multiple instances of the app tier, one per provider.

What is Docker?
Docker provides a user-friendly layer on top of Linux Containers (LXCs). LXCs provide operating-system-level virtualization by isolating and limiting a process's resources. In addition to using chroot-style techniques to restrict which directories a given process can see, Docker effectively isolates one group of processes from the rest of the system's files and processes, without the expense of running another operating system.
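As a rough illustration of that isolation, here is a minimal sketch using the older docker-py client that appears later in this article; the ubuntu image and file names are ours, not from the original project. Two containers started from the same image do not see each other's files or processes.

import docker  # docker-py

client = docker.Client(base_url='unix://var/run/docker.sock')

# Container 1 writes a file into its own filesystem layer, then exits.
c1 = client.create_container('ubuntu', command='touch /only-in-c1')
client.start(c1)
client.wait(c1)

# Container 2, started from the same image, never sees that file.
c2 = client.create_container('ubuntu', command='ls /only-in-c1')
client.start(c2)
client.wait(c2)
print(client.logs(c2))  # "No such file or directory" -- the containers are isolated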

In the Beginning
The "single provider" version of our app had three components:

  1. Cassandra for data persistence, which we later used for generating each gateway's report.
  2. A Twisted TCP server listening at port 6000 for data ingestion from a provider's multiple gateways (a minimal sketch of this tier follows the list).
  3. A Flask app at port 80 serving as the admin panel for setting customizations and for viewing reports.
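
To make the ingestion tier concrete, here is a minimal sketch assuming a simple Twisted protocol; the real tcp_server.py, its payload format and its Cassandra writes are not shown:

# A bare-bones Twisted TCP server listening on port 6000, as in component 2 above.
from twisted.internet import protocol, reactor

class GatewayIngest(protocol.Protocol):
    def dataReceived(self, data):
        # The real application would parse the gateway payload here and persist it to Cassandra.
        print('received %d bytes from a gateway' % len(data))

class GatewayIngestFactory(protocol.Factory):
    def buildProtocol(self, addr):
        return GatewayIngest()

reactor.listenTCP(6000, GatewayIngestFactory())
reactor.run()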

In the past, we'd used the following to launch the single-provider version of the app:

nohup python tcp_server.py & # For firing up the TCP server
nohup python flask_app.py &  # For firing up the admin panel


Both code bases had the Cassandra KEYSPACE hard-coded.

Our New Approach
While Docker is an intriguing emerging technology, it's still in the early stages of development and, as might be expected, has issues yet to be resolved. The biggest for us was that Docker couldn't yet support multiple Cassandra instances running on a single machine, so we couldn't give each provider its own Cassandra container to provide multi-tenancy at the database tier. Another issue was that hosting multiple database instances on a single machine can quickly cause resource shortages.

We addressed both by handling the database in a fairly traditional way for making an application multi-tenant: we used the Cassandra KEYSPACE as the namespace for each provider in the data store. We made corresponding code changes to both the data-ingestion and web servers, adding a keyspace parameter to the DB accesses, and we passed the KEYSPACE (the provider ID) to each app instance on the command line, which also makes it possible to add custom skins and other per-provider features in the future. Thus, we created a separate namespace for each provider in the data store without changing the column family schema.
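As a rough illustration of that code change, assuming pycassa (the client implied by the pycassaShell step below) and a hypothetical "readings" column family, each server can take the provider ID as its keyspace on the command line:

import sys
import pycassa

keyspace = sys.argv[1]  # the provider ID, e.g. "provider1", passed on the command line
pool = pycassa.ConnectionPool(keyspace, server_list=['cassandra:9160'])  # 'cassandra' assumes the linked db container alias
readings = pycassa.ColumnFamily(pool, 'readings')  # hypothetical column family; the schema is identical for every provider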

The beauty of our approach was that, by using Docker to provide multi-tenancy, the only code changes needed to make the app multi-tenant were those described above. Had we not used Docker in this way, we'd have had to make major code changes bordering on a total application rewrite.

How We Did It

[Figure: Docker diagram 1]

First, we created a Docker container for the new software version, setting up all of the environments and dependencies. Next, we started a Cassandra container. Even though we weren't running multiple instances of Cassandra, we wanted to make use of Docker's security, administrative and easy-configuration features. You can download our Cassandra file from our GitHub here. We ran a local container serving at port 9160 with this command:

docker run -d -p 9160:9160 -name db flux7/cassandra


We then created a keyspace "provider1" using pycassaShell.
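
For reference, the same step can be scripted with pycassa's SystemManager; the replication settings here are illustrative, not necessarily the ones we used:

from pycassa.system_manager import SystemManager, SIMPLE_STRATEGY

sys_mgr = SystemManager('localhost:9160')  # the Cassandra container mapped to port 9160 above
sys_mgr.create_keyspace('provider1', SIMPLE_STRATEGY, {'replication_factor': '1'})
sys_mgr.close()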

We fired up our two code bases on two separate containers like this:

docker run -name remote_server_1 -link db:cassandra -p 6001:6000 flux7/labs python software/remote_server.py provider1
docker run -name flask_app_1 -link db:cassandra -p 8081:80 flux7/labs python software/flask_app.py provider1


Voila! We had a provider1 instance running in no time.

Automation
We found Docker-py extremely useful for automating all of these processes and used:

import docker  # docker-py

# Yes. We love Python!
def start_provider(provider_id, gateway_port, admin_port):
    docker_client = docker.Client(base_url='unix://var/run/docker.sock',
                                  version='1.6',
                                  timeout=100)

    # start a docker container for consuming gateway data at gateway_port
    start_command = 'python software/remote_server.py ' + provider_id
    remote_server = docker_client.create_container('flux7/labs',    # docker image
        command=start_command,                # start command contains the keyspace parameter, the provider_id
        name='remote_server_' + provider_id,  # name the container after the provider_id
        ports=[(6000, 'tcp')])                # open port for binding, remote_server.py listens at 6000
    docker_client.start(remote_server,
        port_bindings={6000: ('0.0.0.0', gateway_port)},
        links={'db': 'cassandra'})

    # start a docker container for serving the admin panel at admin_port
    start_command = 'python software/flask_app.py ' + provider_id
    admin_panel = docker_client.create_container('flux7/labs',      # docker image
        command=start_command,                # start command contains the keyspace parameter, the provider_id
        name='admin_panel_' + provider_id,    # name the container after the provider_id
        ports=[(80, 'tcp')])                  # open port for binding, flask_app.py listens at 80
    docker_client.start(admin_panel,
        port_bindings={80: ('0.0.0.0', admin_port)},
        links={'db': 'cassandra'})


To complete the solution, we added a small piece of logic to allocate ports for newly added providers and to create a Cassandra keyspace for each one.
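
A minimal sketch of that logic, assuming a simple in-memory registry and sequential port allocation (the real bookkeeping and error handling are omitted), could look like this:

from pycassa.system_manager import SystemManager, SIMPLE_STRATEGY

providers = {}  # provider_id -> (gateway_port, admin_port)

def add_provider(provider_id, gateway_base=6000, admin_base=8080):
    # Assumed scheme: the Nth provider gets gateway_base+N and admin_base+N.
    index = len(providers) + 1
    gateway_port, admin_port = gateway_base + index, admin_base + index
    providers[provider_id] = (gateway_port, admin_port)

    # Create the provider's keyspace (illustrative replication settings).
    sys_mgr = SystemManager('localhost:9160')
    sys_mgr.create_keyspace(provider_id, SIMPLE_STRATEGY, {'replication_factor': '1'})
    sys_mgr.close()

    # Launch the provider's containers with the start_provider helper shown above.
    start_provider(provider_id, gateway_port, admin_port)
    return gateway_port, admin_port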

Conclusion
In the end, we quickly brought up a multi-tenant solution for our client with the key idea of running each provider's app in a contained space. We couldn't have used virtual machines to provide that isolation because a VM requires too many resources and too much dedicated memory; in fact, Google is now switching away from VMs and has become one of the largest contributors to Linux containers, the technology that forms the basis of Docker. We could have used multiple instances, but then we'd have significantly over-allocated resources. And changing the app itself would have added unnecessary complexity, expense and implementation time.

At the project's conclusion, our client was extremely pleased that we'd developed a solution that met his exact requirements, while also saving him money. And we were pleased that we'd created a solution that can be applied to future customers' needs.

More Stories By Aater Suleman

Aater Suleman, CEO & Co-Founder at Flux7, is an industry veteran in performance optimization on servers and distributed systems. He earned his PhD at the University of Texas at Austin, where he also currently teaches computer systems design and architecture. His current interests are in optimizing DevOps and reducing cloud costs.
