
The Big Data Intellectual Capital Rubik’s Cube | @BigDataExpo #IoT #Cloud #BigData

What are the new Big Data assets that an organization needs to collect, enrich and apply to drive competitive advantage?


This is another topic that has taken me a long time to write, but several conversations with Peter Burris (@plburris) from Wikibon finally helped me to pull this together. Thanks Peter!

I’ve struggled to understand and define the Intellectual Capital (IC) components – or dimensions – of the new, Big Data organization; that is, what are the new Big Data assets that an organization needs to collect, enrich and apply to drive business differentiation and competitive advantage? These assets form the basis of the modern “collaborative value creation” process and are instrumental in helping organizations to optimize key business processes, uncover new monetization opportunities and create a more compelling, more profitable customer engagement.

To start this discussion, let’s first start with an understanding of the economic value of data as outlined in the article “Determining the Economic Value of Data”:

“Data is an unusual currency. Most currencies exhibit a one-to-one transactional relationship. For example, the quantifiable value of a dollar is considered to be finite – it can only be used to buy one item or service at a time, or a person can only do one paid job at a time. But measuring the value of data is not constrained by transactional limitations.

In fact, data currency exhibits a network effect, where data can be used at the same time across multiple use cases, thereby increasing its value to the organization. This makes data a powerful currency in which to invest.”

This network effect phenomenon – the more you share and use data, the more valuable it becomes – holds for analytics as well. Analytics benefit from the same network effect: the more you share, use and provide feedback into the analytics, the more valuable they become.
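The contrast between a one-to-one currency and multi-use data can be sketched as a toy model. The linear value-per-use-case assumption below is purely illustrative, not a claim about how data value actually scales:

```python
def currency_value(unit_value: float, opportunities: int) -> float:
    """A traditional currency is spent once: its total value is fixed
    no matter how many opportunities exist to use it."""
    return unit_value  # one dollar buys one thing, once


def data_value(value_per_use_case: float, use_cases: int) -> float:
    """Data is not consumed by use: the same dataset can serve every
    use case simultaneously, so value accumulates with each reuse."""
    return value_per_use_case * use_cases


# The same dataset, worth $1M per use case, applied to 5 use cases:
print(data_value(1_000_000, 5))  # 5000000
```

The same toy model applies to analytics: each additional use case that consumes (and feeds back into) an analytic multiplies its value rather than depleting it.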

Modern Sources of Intellectual Capital
There are three sources of intellectual capital produced by the modern organization:

  • Data, which includes both internal and external data sources. Internal data might include point-of-sale transactions, credit card transactions, call detail records, web clicks, consumer comments, work orders and product specifications, while external data might include weather, traffic, local demographics, social media, news feeds, building permits, home values and property taxes.
  • Analytics, which are really nothing more than combinations of data transformed, enriched and interwoven into analytic results such as scores, association rules, clusters, recommendations, usage patterns and behavioral insights.
  • Business Use Cases, which are clusters of decisions in support of an organization’s business initiatives. For example, the decisions supporting the “Increase shopping bag revenue” use case could include: what products to cross-promote, what prices to charge, what customers to target, when to run the promotions, etc.

Data, analytics and use cases are the three new asset or intellectual capital (IC) types that form the foundation of the modern Big Data organization. These three entities form the three “dimensions” of our Big Data Intellectual Capital Rubik’s Cube, and how we align those IC dimensions is critical to the organization’s ability to optimize key business processes, uncover new monetization opportunities and create a more compelling user engagement. If you don’t align these three assets around your organization’s key business initiatives, you will end up with a Rubik’s Cube of confusing, misaligned data, analytics and use cases (see Figure 1).

To see how an organization can align these three sources of Big Data intellectual capital, let’s go to my favorite guinea pig, er, uh, case study – Chipotle – and discuss how Chipotle could integrate these three Big Data sources of intellectual capital into a “Big Data value creation framework.”

Step 1: Begin with an End in Mind
In order to drive focus and prioritization across the organization (see the “Big Data Success: Prioritize Important Over Urgent” blog), let’s start by identifying the organization’s key business initiatives. For our Chipotle example, let’s focus on Chipotle’s key business initiative of “Increase Same Store Sales” (especially in light of Chipotle’s recent E. coli problems).

Performing some rudimentary financial analysis based upon Chipotle’s 2012 annual report (the annual report that I use as the basis for my University of San Francisco School of Management class), we can calculate that a 7% increase in same store sales is worth about $191M annually for Chipotle (see Table 1).
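The $191M figure can be sanity-checked with back-of-the-envelope arithmetic. Chipotle’s 2012 annual report put revenue at roughly $2.73B; treat that base as an approximation rather than an audited figure:

```python
# Rough order-of-magnitude check of the $191M opportunity.
# The ~$2.73B revenue base comes from Chipotle's 2012 annual report;
# the 7% same-store-sales lift is the hypothesized improvement.
annual_revenue = 2.73e9          # Chipotle FY2012 revenue, ~$2.73B
same_store_sales_lift = 0.07     # hypothesized 7% increase

opportunity = annual_revenue * same_store_sales_lift
print(f"${opportunity / 1e6:.0f}M")  # → $191M
```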

Let’s build upon this $191M opportunity by brainstorming the different use cases (i.e., clusters of decisions) that the key business stakeholders need to make in support of the “Increase Same Store Sales” business initiative (see “Updated Big Data Strategy Document” blog for the process of identifying the supporting use cases).  The use cases that could support Chipotle’s “Increase Same Store Sales” initiative include:

  • Increase Store Traffic via local events marketing
  • Increase Store Traffic via customer loyalty program
  • Increase shopping bag revenue
  • Increase corporate catering
  • Increase non-corporate catering (schools, churches, gatherings)
  • Increase new product introduction effectiveness
  • Improve promotional effectiveness

Let’s now consider the potential value of each of those use cases vis-à-vis the value of the Chipotle “Increase Same Store Sales” initiative using the techniques discussed in the “Determining the Economic Value of Data” blog. The brainstormed financial impact of each use case is displayed in Table 2.
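In code, this prioritization step amounts to ranking use cases by their brainstormed value. The dollar figures below are illustrative stand-ins, not the actual Table 2 estimates:

```python
# Hypothetical use case valuations (illustrative stand-ins for the
# brainstormed values in Table 2; they sum to the ~$191M opportunity).
use_case_value = {
    "Local events marketing": 40_000_000,
    "Customer loyalty program": 35_000_000,
    "Increase shopping bag revenue": 30_000_000,
    "Corporate catering": 25_000_000,
    "New product introduction effectiveness": 22_000_000,
    "Non-corporate catering": 20_000_000,
    "Promotional effectiveness": 19_000_000,
}

# Rank use cases by estimated financial impact to drive prioritization.
ranked = sorted(use_case_value.items(), key=lambda kv: kv[1], reverse=True)
for name, value in ranked:
    print(f"{name}: ${value / 1e6:.0f}M")
```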

Note: We have outlined the use case identification, brainstorming and valuation estimation process in the reference materials listed below (sorry, some assembly required).

Classroom reference materials:

Step 2: Build Analytic Profiles
The next step in the Big Data value creation framework is to build Analytic Profiles around the organization’s key business entities. Analytic Profiles are structures (models) that standardize the collection and application of analytic insights for the key business entities. Analytic Profiles impose organizational discipline on the capture and application of the organization’s analytic efforts, minimizing the risk of creating “orphaned analytics” – one-off analytics built to address a specific need that lack an overarching model to ensure the resulting analytics can be captured and reused across multiple use cases.

Key business entities are the physical entities (e.g., people, products, things) around which we will seek to uncover or quantify analytic insights about that business entity such as behaviors, propensities, tendencies, affinities, usage trends and patterns, interests, passions, affiliations and associations.

For our Chipotle example, we want to create analytic profiles for the following business entities:

  • Customers
  • Managers
  • Stores
  • Products
  • Suppliers
  • Local Events

The insights that are captured in the Analytic Profiles are created by applying data science (predictive and prescriptive analytics) against the organization’s data sources to uncover behaviors, propensities, tendencies, affinities, usage trends and patterns, interests, passions, affiliations and associations at the level of the individual entity (individual humans, individual products, individual devices). See Figure 2 for an example of a Chipotle Customer Analytic Profile.
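One minimal way to give an Analytic Profile a standardized, reusable shape is a simple data structure. The fields and example values below are assumptions for illustration, not Chipotle’s actual model:

```python
from dataclasses import dataclass, field

@dataclass
class AnalyticProfile:
    """A standardized container for analytic insights about one business
    entity, so scores built for one use case can be reused (and refined)
    by later use cases instead of becoming orphaned analytics."""
    entity_type: str                                    # e.g. "customer", "store"
    entity_id: str
    scores: dict = field(default_factory=dict)          # e.g. churn risk, LTV
    propensities: dict = field(default_factory=dict)    # likelihood of behaviors
    patterns: list = field(default_factory=list)        # usage/behavioral patterns

# Hypothetical customer profile built for the loyalty use case.
profile = AnalyticProfile(
    entity_type="customer",
    entity_id="C-10482",
    scores={"lifetime_value": 0.82, "churn_risk": 0.18},
    propensities={"responds_to_local_event_promo": 0.64},
    patterns=["weekday lunch visits", "orders catering quarterly"],
)
print(profile.scores["churn_risk"])  # 0.18
```

The point of the fixed structure is the reuse: a later use case (say, catering promotions) reads and enriches the same profile rather than building its churn or propensity scores from scratch.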

Next, we map the Analytic Profiles to the different Business Use Cases to determine which analytic profiles are most applicable for which use cases. The analytic profiles (Customers, Stores, Managers) created for an initial use case can be used – and subsequently strengthened or improved – as those analytic profiles are applied across multiple Business Use Cases (see Figure 3).

Classroom reference materials:

Step 3: Identify and Prioritize Data Sources
The final step in the Big Data value creation framework is to identify, prioritize and aggregate the supporting data in a data lake. Since not all data is of equal value, a prioritization process is needed to ensure that the most important data sources – where importance is defined by how well a data source supports the top-priority use cases – are loaded into the data lake first.

Figure 4 shows the results of an envisioning process to ascertain which data sources are most valuable to which Chipotle use cases.

Finally, we leverage the use case value estimation work from Table 2 to determine the potential value of each of the data sources in support of Chipotle’s “Increase Same Store Sales” business initiative (see Figure 5).

As we can see in Figure 5, we can create a rough order estimate for each of the data sources in support of each individual use case. We outlined the estimation process in the “Determining the Economic Value of Data” blog.
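The data-source valuation in Figure 5 can be approximated as a weighted rollup: score each data source by the value of the use cases it supports, then load the highest-value sources first. The mapping and dollar figures below are hypothetical:

```python
# Hypothetical use case values (carried over from the earlier
# illustrative estimates) and a hypothetical source-to-use-case mapping.
use_case_value = {
    "events marketing": 40e6,
    "loyalty program": 35e6,
    "shopping bag revenue": 30e6,
}

supports = {
    "POS transactions": ["loyalty program", "shopping bag revenue"],
    "Local events feed": ["events marketing"],
    "Social media": ["events marketing", "loyalty program"],
}

# Rough-order value of each data source = sum of the values of the
# use cases it supports; load the highest-value sources first.
source_value = {
    src: sum(use_case_value[uc] for uc in use_cases)
    for src, use_cases in supports.items()
}
for src, value in sorted(source_value.items(), key=lambda kv: -kv[1]):
    print(f"{src}: ${value / 1e6:.0f}M")
```

Because a single data source (like POS transactions) supports multiple use cases, this rollup also makes the data network effect concrete: each additional use case a source feeds raises its rough-order value.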

Classroom reference materials:

Finally, let’s pull all three of these Big Data IP assets – data, analytic profiles and business use cases – together in a single graphic that highlights the critical role that data science plays in coupling data with analytic algorithms to create the analytic profiles that support the organization’s top priority use cases (see Figure 6).

While this process may not be perfect, it does force the business and IT stakeholders to engage in a very critical conversation: where are the sources of intellectual capital (and competitive advantage) in the modern Big Data organization?

In the end, we want to align the data, analytic profiles and business use cases in a manner that supports the organization’s key business initiatives. Hopefully, this process of aligning the three Big Data intellectual capital components is easier than actually trying to align the colored cubes of the Rubik’s cube (and it’s no fair peeling off the colored stickers and pasting them onto the cubes in the right order!).

The post The Big Data Intellectual Capital Rubik’s Cube appeared first on InFocus.


More Stories By William Schmarzo

Bill Schmarzo, author of “Big Data: Understanding How Data Powers Big Business” and “Big Data MBA: Driving Business Strategies with Data Science”, is responsible for setting strategy and defining the Big Data service offerings for Hitachi Vantara as CTO, IoT and Analytics.

Previously, as a CTO within Dell EMC’s 2,000+ person consulting organization, he worked with organizations to identify where and how to start their big data journeys. He has written white papers, is an avid blogger and is a frequent speaker on the use of Big Data and data science to power an organization’s key business initiatives. He is a University of San Francisco School of Management (SOM) Executive Fellow, where he teaches the “Big Data MBA” course. Bill also recently completed a research paper on “Determining the Economic Value of Data.” Onalytica recently ranked Bill as the #4 Big Data influencer worldwide.

Bill has over three decades of experience in data warehousing, BI and analytics. Bill authored the Vision Workshop methodology that links an organization’s strategic business initiatives with their supporting data and analytic requirements. Bill serves on the City of San Jose’s Technology Innovation Board, and on the faculties of The Data Warehouse Institute and Strata.

Previously, Bill was vice president of Analytics at Yahoo where he was responsible for the development of Yahoo’s Advertiser and Website analytics products, including the delivery of “actionable insights” through a holistic user experience. Before that, Bill oversaw the Analytic Applications business unit at Business Objects, including the development, marketing and sales of their industry-defining analytic applications.

Bill holds a Master of Business Administration from the University of Iowa and a Bachelor of Science degree in Mathematics, Computer Science and Business Administration from Coe College.
