
Citizen Data Scientist, Jumbo Shrimp, and Other Descriptions That Make No Sense

Okay, let me get this out there: I find the term “Citizen Data Scientist” confusing. Gartner defines a “citizen data scientist” as “a person who creates or generates models that leverage predictive or prescriptive analytics but whose primary job function is outside of the field of statistics and analytics.”

While we teach business users to “think like a data scientist” in their ability to identify the variables and metrics that might be better predictors of performance, I do not expect business stakeholders to create and generate analytic models. I do not believe, nor do I expect, that business stakeholders will be proficient enough with tools like SAS, R, Python, Mahout, or MADlib to 1) create or generate the models, and then 2) interpret the t-tests, F-scores, p-values, and residuals necessary to ascertain the analytic model’s goodness of fit.

No one would say “Citizen Lawyer” or “Citizen Nuclear Physicist” or “Citizen Physician.” I guess a “Citizen Physician” would be someone who “practices medicine but whose primary job function is outside of the field of medicine” (meaning that they’ve had no training in medicine or medical procedures). They call those people quacks (not quants…he-he-he).

WebMD doesn’t make someone a doctor any more than analytics makes someone a data scientist. Analysis of the analytic results and insights is an important step in the process, particularly when the results contradict each other. Data scientists provide the necessary experience with the different analytic techniques and algorithms required to decipher the results, validate them, and then turn them into actions or recommendations.

What’s wrong with the definition is that it doesn’t properly acknowledge the deep training in analytic disciplines such as machine learning, cognitive computing, data mining, computer programming, and applied mathematics. It also dismisses the critical importance of gaining hands-on, data science experience through years of apprenticeships and tutelage under the guidance of master data scientists.

In order to understand the importance of the role of the data scientist, I solicited the help of the best data scientist I know: Wei Lin. Wei and I have done numerous big data projects together, and every time I engage with Wei, I learn tons. So naturally, I’d call upon a true master data scientist to help me write this blog.

Data Scientist Capabilities Are a Good Starting Point…
The data scientist discussion starts with an understanding of the types of tasks at which a data scientist must become proficient. Below is a summary of those tasks; you can quickly see that an effective data scientist requires a wide and deep range of capabilities, including:

  • Data acquisition. The data scientist pulls data from a wide variety of sources in a wide variety of formats. Some of the data will be accessible as tables using SQL. However, much of the data will sit in log files and must be extracted using tools such as R and Python to process the raw logs. Some of the data will be pulled from websites, in which case one can either use the provided APIs (if there are APIs) or screen-scrape the data. A wide variety of expertise across a wide variety of tools is required to acquire the data from whatever the source may be: structured (tables, CSV), semi-structured (log files), and unstructured (text files, documents, images, video files). A sketch of this step, combined with data preparation, appears after this list.
  • Data preparation. The data scientist needs to go through a process of cleaning (especially if screen scraping was used), normalizing, aligning, and enriching the data (adding new variables such as frequency, recency, monetary value, and indices). There is a common-sense component required during this process to ensure that one is aligning like levels of granularity and comparing like entities. Tools used here include SQL, R, Python, and Java.
  • Data exploration/data visualization. The data scientist then starts to explore the data, looking for outliers and visual correlations. This is where an inquisitive mind is useful, as the data scientist digs deeper and deeper into the granular data and looks for opportunities to link in other data sources. Missing values may be discovered during exploration, in which case the data scientist needs to decide how to handle them. Tools used here include Tableau, Spotfire, and R (ggplot2).
  • Model development. This is where the data scientist starts to quantify cause and effect by actually building predictive models. Quantifying correlation coefficients, statistical errors, and residuals using tools such as SAS, R, Python, MADlib, and Mahout is required to ascertain whether the model being built is actually more predictive.
  • Model validation. The data scientist then needs to determine the model’s “goodness of fit” using measures such as the F-test, t-tests, and p-values. The tools that were used to build the model (SAS, R, Python, MADlib, Mahout) provide the goodness-of-fit metrics (see the second sketch after this list).
  • Results visualization. Once the data scientist has a model that they are confident is “good enough” for the problem they are trying to address (see my blog “Understanding Type I and Type II Errors”), they need to use many of the same data visualization tools (Tableau, Spotfire, ggplot2) to determine the optimal way to present the results so that users can understand and act on them.
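
To make the acquisition and preparation steps concrete, here is a minimal Python sketch, assuming pandas, a hypothetical log format (customer, date, amount per line), and an RFM-style enrichment; the file name, regex, and column names are all illustrative assumptions, not a prescribed format:

```python
# A minimal sketch: extract structured rows from a semi-structured log file,
# then enrich with recency/frequency/monetary (RFM) variables.
# The log format and field names here are hypothetical.
import re
import pandas as pd

LOG_LINE = re.compile(r"(?P<customer>\w+) (?P<date>\d{4}-\d{2}-\d{2}) (?P<amount>[\d.]+)")

def parse_log(path: str) -> pd.DataFrame:
    """Data acquisition: pull rows out of a raw text log."""
    rows = []
    with open(path) as f:
        for line in f:
            m = LOG_LINE.match(line)
            if m:  # skip malformed lines rather than failing the whole load
                rows.append(m.groupdict())
    df = pd.DataFrame(rows)
    df["date"] = pd.to_datetime(df["date"])
    df["amount"] = df["amount"].astype(float)
    return df

def rfm_enrich(df: pd.DataFrame) -> pd.DataFrame:
    """Data preparation: add recency/frequency/monetary variables per customer."""
    now = df["date"].max()
    return df.groupby("customer").agg(
        recency_days=("date", lambda d: (now - d.max()).days),
        frequency=("date", "count"),
        monetary=("amount", "sum"),
    ).reset_index()
```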
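And here is a second minimal sketch of model development and validation, assuming synthetic data and the Python statsmodels library; fitting a regression and reading off the t-statistics, p-values, F-statistic, and residuals described in the bullets above:

```python
# A minimal sketch of model development and validation on synthetic data.
# The variables and effect size are illustrative assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 2))               # two candidate predictor variables
y = 3.0 * X[:, 0] + rng.normal(size=200)    # only the first is truly predictive

model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.summary())   # t-statistics, p-values, F-statistic, R-squared
print(model.resid[:5])   # residuals, worth inspecting/plotting for structure
```

Running this, the summary shows a significant t-test (tiny p-value) for the first variable and an insignificant one for the second, which is exactly the kind of reading a trained data scientist does before trusting a model.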

But The Key Is the Experience
Understanding algorithms is different from deciphering the results and translating that knowledge into business actions or client treatments. Going back to our WebMD example, a person who reads WebMD will have challenges trying to match their symptoms to the wide variety of potential diseases and illnesses (except for the easy, more frequent ones), and to properly prescribe the “right” mix of medications, treatments, and therapy.

A data scientist often frames a question in terms of its business value and data context, which makes the question more tractable. Because questions can operate at several different levels, rather than asking everything at once, the question can be broken down into smaller business questions. There are also methods to further reduce complexity, such as dimensionality reduction, variable decomposition, or principal component analysis.
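
As a concrete illustration of that last point, here is a minimal sketch of dimensionality reduction via principal component analysis, assuming scikit-learn and synthetic data; the 30-variable input and the 95% variance target are illustrative assumptions:

```python
# A minimal sketch of reducing complexity with PCA.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 30))   # 30 raw, possibly redundant variables

X_scaled = StandardScaler().fit_transform(X)  # PCA is scale-sensitive
pca = PCA(n_components=0.95)     # keep components explaining 95% of variance
X_reduced = pca.fit_transform(X_scaled)
print(X_reduced.shape, pca.explained_variance_ratio_[:3])
```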

There are many analytic algorithms and modeling options, and choosing the proper algorithm can be a challenge. One alternative is to run a large number of algorithms and search for the best performer; with that approach, a large number of results will need to be analyzed.
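
A minimal sketch of that search, assuming scikit-learn and a synthetic classification problem (the candidate list is an illustrative assumption), loops several algorithms through cross-validation and compares their scores:

```python
# A minimal sketch of searching across candidate algorithms.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

candidates = {
    "logistic": LogisticRegression(max_iter=1000),
    "tree": DecisionTreeClassifier(random_state=0),
    "forest": RandomForestClassifier(random_state=0),
}
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold accuracy
    print(f"{name}: mean={scores.mean():.3f} std={scores.std():.3f}")
```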

Interpreting results is a complex task. When a large number of algorithms is run, the results tend to partially converge or partially conflict. Resolving those conflicts and weighting the variables requires further modeling or ensembling.
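
One common way to reconcile partially conflicting models is an ensemble. The sketch below, assuming scikit-learn, synthetic data, and illustrative weights, combines two classifiers with soft voting:

```python
# A minimal sketch of resolving conflicting models via a voting ensemble.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, VotingClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

ensemble = VotingClassifier(
    estimators=[
        ("logistic", LogisticRegression(max_iter=1000)),
        ("forest", RandomForestClassifier(random_state=1)),
    ],
    voting="soft",    # average predicted probabilities
    weights=[1, 2],   # weight the historically stronger model higher (assumed)
)
ensemble.fit(X_tr, y_tr)
print(ensemble.score(X_te, y_te))
```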

Data Science Requires More Than Smarts
But it isn’t just the analytics capabilities, skills, training, apprenticeship, and hands-on experience that make an outstanding data scientist. Our best data scientists also exhibit an outstanding “bedside manner,” or humility. They understand the power of humility, which immediately puts others at ease, allowing for a more open and more inclusive conversation. To me, this is the real key to being an effective data scientist, where I define “effective” to mean “comes up with reasonable recommendations that the users can understand and act on.” The best data scientists quickly learn that in order to deliver outstanding outcomes, they need to engage, listen, and learn from others of all types.

But I could argue that humility is the key to success no matter your profession. Whether you are becoming a physician, a nuclear physicist, a lawyer, a barista, or a teacher/coach, humility is imperative for continued growth and mastery of your craft.

As we like to say during our Big Data Vision Workshop engagements, all ideas are worthy of consideration, because the minute you think you know all the answers is the moment you are no longer relevant to the conversation.

To quote The Lego Movie:

“A special ‘Master Builder’ will defeat Lord Business and become the greatest ‘Master Builder’ of all. The key to true master building is to believe in yourself and follow your own set of instructions inside your head.”

Sounds like a Master Data Scientist to me (especially when said in Morgan Freeman’s voice)!
