
Web Real Time Communications (WebRTC) is changing the way we have traditionally communicated and collaborated

WebRTC and Its Impact on Testing

By Nikhil Kaul

Web Real Time Communications (WebRTC) is changing the way we have traditionally communicated and collaborated. Specifically, the technology allows developers to embed voice, data, instant messaging, and video directly into web browsers, providing easier and far more efficient ways to communicate than Voice over Internet Protocol (VoIP) services such as Skype, WhatsApp, and WebEx.

WebRTC has come a long way since its inception in May 2011. From achieving interoperability between Chrome and Firefox in 2013 to rolling out support for Android, WebRTC has continued to garner more attention every year, and the market momentum is expected to keep growing. In fact, a recent Analysys Mason report predicts that, with Apple and Microsoft incorporating WebRTC in their browsers, there could be 7 billion devices supporting WebRTC by 2020. With that kind of growth, it is imperative for testers to have a strategy in place to test WebRTC applications efficiently. But before we venture down that path, it is critical to understand what is driving this growth and how testing a WebRTC application differs from testing other web applications.

[Figure: WebRTC-enabled endpoints by device type, worldwide, 2013-2020. Source: Analysys Mason, 2014]

Why is WebRTC gaining traction, and why should you care?

  • WebRTC gives web developers access to the VoIP market: Web developers no longer need proprietary technology to build solutions for the VoIP market. With WebRTC being open source and free to use, the barriers to entry have been drastically lowered. That means more developers will be building real-time communication apps with WebRTC, and testers need to be ready.
  • WebRTC makes it extremely easy for end users to communicate: The demand side looks just as promising. WebRTC makes communicating far more seamless: unlike VoIP services, where consumers need to download and regularly update applications such as Skype or WhatsApp, WebRTC lets users make calls directly from a web page. The process is frictionless, eliminating hurdles to user adoption.
  • An increasing number of use cases for WebRTC are emerging: WebRTC is more than real-time audio and video. Existing use cases include speedier peer-to-peer file transfer through WebRTC's RTCDataChannel JavaScript API (see the sketch after this list).

Even for audio and video applications, a key advantage of WebRTC is that it allows developers to create more engaging and immersive real-time communications than existing VoIP services such as Skype and WhatsApp. Take the example of ustyme, which allows end users to play games and read books while interacting with each other on video. Moreover, the real-time video capabilities of WebRTC are now being integrated into a wide range of verticals, including business, medical, and education.

  • Browser support for WebRTC continues to grow: A growing number of browsers incorporate WebRTC; it now ships built into Chrome and Firefox. Additionally, as WebRTC makes strides toward standardization and the market continues to mature, Microsoft and Apple are expected to roll out support as well.
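
To make the data-channel use case mentioned above a bit more concrete, here is a minimal sketch (TypeScript against the standard browser WebRTC APIs) of a peer creating an RTCDataChannel and sending bytes once it opens. The signaling step is application-specific; sendToRemotePeer is a hypothetical placeholder, not a real API.

```typescript
// Minimal sketch: sending data peer-to-peer over an RTCDataChannel.
// Signaling (exchanging the offer/answer and ICE candidates) is
// application-specific and omitted; sendToRemotePeer() is a hypothetical
// placeholder for your own signaling code.

const pc = new RTCPeerConnection();

// Create a data channel for peer-to-peer file transfer.
const channel = pc.createDataChannel("file-transfer");

channel.onopen = () => {
  // Once open, binary data goes directly to the other peer, with no server hop.
  const chunk = new Uint8Array([72, 101, 108, 108, 111]); // example bytes
  channel.send(chunk);
};

channel.onmessage = (event: MessageEvent) => {
  console.log("Received from peer:", event.data);
};

// Kick off the offer/answer exchange over your own signaling channel.
pc.createOffer()
  .then((offer) => pc.setLocalDescription(offer))
  .then(() => {
    // sendToRemotePeer(pc.localDescription); // hypothetical signaling call
  });
```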

How is testing WebRTC different from testing current web applications?

  • Growing role of APIs in web apps: WebRTC applications are predominantly driven by APIs. Hence, while testing a WebRTC application, you can no longer base your test cases solely on the Graphical User Interface (GUI). As a tester, you need to factor in how API requests and responses affect the GUI.

Three specific JavaScript APIs dictate the information presented on a WebRTC GUI: getUserMedia, RTCPeerConnection, and RTCDataChannel. The getUserMedia API gives the web browser access to the device's camera, microphone, or screen. RTCPeerConnection establishes the peer-to-peer connection and exposes its signaling and connection state. Finally, the RTCDataChannel API supports sending arbitrary data between browsers alongside the audio and video. As you can see, test cases on the GUI are governed by these three APIs.
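
To see how those three APIs typically fit together, here is a minimal sketch using the standard browser APIs in TypeScript; the #remote-video element is an assumed part of the page's GUI, not something prescribed by WebRTC. It is this API-driven flow, rather than the GUI alone, that test cases ultimately need to account for.

```typescript
// How the three WebRTC JavaScript APIs typically come together in a page.
// The "#remote-video" element is an assumed part of the page's GUI.

async function startCall(): Promise<RTCPeerConnection> {
  const pc = new RTCPeerConnection();

  // 1. getUserMedia: ask the browser for access to the camera and microphone.
  const localStream = await navigator.mediaDevices.getUserMedia({
    video: true,
    audio: true,
  });
  localStream.getTracks().forEach((track) => pc.addTrack(track, localStream));

  // 2. RTCPeerConnection: establishes the connection and exposes the
  //    signaling/connection state that the GUI reacts to.
  pc.oniceconnectionstatechange = () => {
    console.log("ICE connection state:", pc.iceConnectionState);
  };

  // Remote media arriving over the connection is what the GUI renders.
  pc.ontrack = (event: RTCTrackEvent) => {
    const remoteVideo = document.querySelector<HTMLVideoElement>("#remote-video");
    if (remoteVideo) remoteVideo.srcObject = event.streams[0];
  };

  // 3. RTCDataChannel: send arbitrary application data alongside audio/video.
  pc.createDataChannel("chat");

  return pc;
}
```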

How do you test WebRTC applications?

To test a WebRTC application effectively, an integrated GUI and API testing solution is required. Basing your tests solely on the GUI is bound to fail, because the video, file, or audio returned by an API call changes with each request and response. For instance, to check whether a video stream works properly across two browsers, you would need to test both the GUI and the API layer. The steps could include (a sketch of the connection and stream checks follows the list):

  1. Allow the automated testing solution to gain access to the camera for the video stream (testers would typically use the getUserMedia API for this).
  2. Start the two browsers and record actions using record and replay.
  3. Wait for the two browsers to connect using the RTCPeerConnection API.
  4. Check that the connection has been established using the iceconnectionstatechange event.
  5. Once the connection is established, check that the browsers' UI displays the video stream properly.
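
As a rough illustration of steps 3 through 5, the sketch below (TypeScript, standard browser WebRTC APIs) shows the kind of check a test harness could run inside each browser: wait for the RTCPeerConnection to reach a connected ICE state, then verify that the page's video element is actually rendering a live stream. The pc variable and the #remote-video selector are assumptions about how the application under test exposes its connection and UI.

```typescript
// Sketch of an in-page check for steps 3-5: wait for the peer connection to
// become "connected", then assert that the remote <video> element is showing
// a live stream. `pc` and "#remote-video" are assumed to be exposed by the
// page under test; adjust to your application.

function waitForConnection(pc: RTCPeerConnection, timeoutMs = 15000): Promise<void> {
  return new Promise((resolve, reject) => {
    const check = () => {
      if (pc.iceConnectionState === "connected" || pc.iceConnectionState === "completed") {
        resolve();
      }
    };
    pc.addEventListener("iceconnectionstatechange", check);
    check(); // the peers may already be connected
    setTimeout(() => reject(new Error("Peers did not connect in time")), timeoutMs);
  });
}

async function verifyVideoStream(pc: RTCPeerConnection): Promise<void> {
  await waitForConnection(pc);

  const video = document.querySelector<HTMLVideoElement>("#remote-video");
  if (!video || !(video.srcObject instanceof MediaStream)) {
    throw new Error("Remote video element has no media stream attached");
  }

  const liveTracks = video.srcObject.getVideoTracks().filter((t) => t.readyState === "live");
  if (liveTracks.length === 0) {
    throw new Error("No live video tracks - the GUI is not showing the stream");
  }
}
```

A GUI-level assertion (for example, that the video element is visible and not frozen) can then be layered on top of this API-level check.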

As the example above shows, writing tests purely against the GUI can be really brittle, especially when the information on the GUI changes based on the getUserMedia API response. As a tester, you therefore need to factor in the responses of these APIs while designing GUI tests for WebRTC apps. One way to do that is sketched below.
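
One practical way to reduce that brittleness is to take control of the getUserMedia response itself during test runs, for example by substituting a deterministic, canvas-generated stream for the real camera. The sketch below shows one possible approach using standard browser APIs; it is not a feature of any particular testing tool.

```typescript
// Test-only override: replace the real camera with a deterministic,
// canvas-generated video stream so the GUI always receives the same input.
// This is one possible approach for taming getUserMedia-driven variability.

function installFakeCamera(): void {
  const canvas = document.createElement("canvas");
  canvas.width = 640;
  canvas.height = 480;
  const ctx = canvas.getContext("2d")!;

  // Repaint the canvas periodically so the captured stream has live frames.
  setInterval(() => {
    ctx.fillStyle = "#336699";
    ctx.fillRect(0, 0, canvas.width, canvas.height);
    ctx.fillStyle = "#ffffff";
    ctx.fillText(new Date().toISOString(), 10, 20);
  }, 100);

  const fakeStream = canvas.captureStream(10); // 10 fps synthetic video

  // Hand the synthetic stream to any code that asks for the camera.
  navigator.mediaDevices.getUserMedia = async () => fakeStream;
}
```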

As it stands, WebRTC is bound to grow. Testers therefore need to be better prepared to test these applications. Proactively incorporating practices that help test both the GUI and API layer could be critical to rolling out these applications bug-free to the marketplace.

More Stories By SmartBear Blog

As the leader in software quality tools for the connected world, SmartBear supports more than two million software professionals and over 25,000 organizations in 90 countries that use its products to build and deliver the world’s greatest applications. With today’s applications deploying on mobile, Web, desktop, Internet of Things (IoT) or even embedded computing platforms, the connected nature of these applications through public and private APIs presents a unique set of challenges for developers, testers and operations teams. SmartBear's software quality tools assist with code review, functional and load testing, API readiness as well as performance monitoring of these modern applications.
