
AlphaGo vs. You: Not a Fair Fight By @ShellyPalmer | @ThingsExpo #AI #IoT

AI is a powerful tool. With this breakthrough, we’re getting close to the dividing line between raw computing power & cognition

What made move 37 so interesting is that no one expected it. It was early in game two of the million-dollar Google DeepMind Challenge Match, and AlphaGo, an artificial intelligence (AI) system developed by Google, placed its 19th stone on a part of the game board that no human Go master would have considered. Some called it a “mistake.” Others called it “creative” and “unique.” But considering that AlphaGo went on to win its third game in a row against one of the strongest Go players in the world, the move should probably have been called what it really was: “intuitive.”

Note: as of March 13, 2016, AlphaGo led its best-of-five match against 9-dan Go master Lee Sedol three games to one.

Turing Would Love This
In 1959 Arthur Samuel began to teach a computer to play checkers, thinking that it was a good model for rudimentary problem solving. He defined machine learning as “a field of study that gives computers the ability to learn without being explicitly programmed.”

Back then, Samuel’s definition of the verb “to learn” was operational, not cognitive. But that subtlety is usually lost in translation. People always argue about whether or not computers can think. It’s the wrong argument. Paraphrasing from Alan Turing’s famous paper, “Computing Machinery and Intelligence,” let’s not ask the question, “Can machines think?” Let’s ask, “Can machines perform the way we (who can think) do?” (For more, see Can Machines Really Learn?)

The Challenge
Which brings us to the current challenge. We’ve seen computers beat humans at several contests of “human” intellect. Back in 1997, IBM’s Deep Blue supercomputer beat world chess champion Garry Kasparov in a very public match. In 2011, IBM’s Watson AI system beat Brad Rutter and Ken Jennings on the television game show Jeopardy! But the game of Go is different. A chess player may have to contemplate 20 to 35 moves per turn. A Go player is faced with 10 times that number. Numerically, the possible board combinations in an average 150-move game are vast (on the order of 10^170). Google says that is greater than the number of atoms in the universe. (I’m not sure how Google calculated the number of atoms in the universe, but I agree 10^170 is a very, very large number.)
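The arithmetic behind those figures can be sketched in a few lines. This is a rough, illustrative back-of-the-envelope calculation, not Google’s: a simple counting argument gives an upper bound on Go board configurations, and the branching-factor estimates (roughly 35 moves per turn in chess, roughly 250 in Go) are the approximate figures used in the text.

```python
import math

# Each of the 19 x 19 = 361 points on a Go board can be empty, black, or
# white, so 3**361 is a simple upper bound on board configurations.
# (Counting only *legal* positions brings this down to roughly 10**170,
# the figure usually quoted.)
upper_bound_digits = math.floor(361 * math.log10(3)) + 1  # decimal digits in 3**361

# Rough game-tree comparison: ~250 candidate moves per turn over a ~150-move
# Go game, versus ~35 candidate moves over an ~80-ply chess game.
go_tree_digits = round(150 * math.log10(250))   # move sequences ~ 10**360
chess_tree_digits = round(80 * math.log10(35))  # move sequences ~ 10**124
```

The gap between those exponents, not the raw size of either number, is why brute-force search that works for chess cannot work for Go.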

It is the exceptionally large number of possible moves that sets Go apart from other gameplay-based demonstrations of AI. Aside from logic, the world’s best human Go players win by using a combination of strategy, instinct and intuition. Go has so many moves that a computer cannot win by calculating all of them – it must learn to “perform the way we (who can think) do.”

Back to Game 2, Move 37
How did AlphaGo “decide” to make this unusual move? AlphaGo lead researcher David Silver said that AlphaGo’s policy network has a model of what humans would do in a given situation. After evaluating the high-probability moves, it starts to consider less probable moves, thinking ahead and considering potential futures. When asked if AlphaGo has a “human bias,” Silver went on to say, “AlphaGo will explore the human probability moves more thoroughly; this is its bias and it uses this to guide it toward its initial estimate. We train our neural networks on human data, so that does provide a bias, but that bias is a guide for the search. … It can always overwhelm that bias by searching more deeply and analyze things in an introspective way.”

In other words, AlphaGo doesn’t play by trying every combination (it can’t; there are too many possible moves). AlphaGo thinks, tries stuff, plays by feel and learns from its mistakes — it’s “thinking” more like us than any machine has thought before.
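Silver’s description of a search guided, but not ruled, by the policy network’s priors maps onto the selection rule used in Monte Carlo tree search. The sketch below is a simplified, hypothetical illustration (the function names, move labels and exploration constant are mine, not DeepMind’s code): a high-prior “human-like” move is explored first, but enough visits with poor results let a low-prior move – a move 37 – take over.

```python
import math

def select_move(moves, prior, visits, total_value, total_visits, c_puct=1.5):
    """Pick the move maximizing estimated value plus an exploration bonus
    weighted by the policy network's prior probability. High-prior moves
    dominate early; deeper search can overwhelm that bias."""
    def score(m):
        q = total_value[m] / visits[m] if visits[m] else 0.0  # average result so far
        u = c_puct * prior[m] * math.sqrt(total_visits) / (1 + visits[m])
        return q + u
    return max(moves, key=score)

moves = ["human_favorite", "move_37"]
prior = {"human_favorite": 0.9, "move_37": 0.1}  # the policy network's "human bias"

# Before any search, the high-prior move is chosen...
first = select_move(moves, prior,
                    {"human_favorite": 0, "move_37": 0},
                    {"human_favorite": 0.0, "move_37": 0.0}, total_visits=1)

# ...but after 100 disappointing visits to it, the unlikely move wins out.
later = select_move(moves, prior,
                    {"human_favorite": 100, "move_37": 0},
                    {"human_favorite": -50.0, "move_37": 0.0}, total_visits=101)
```

This mirrors the PUCT-style selection rule DeepMind describes for AlphaGo, though the real system also blends in value-network and rollout evaluations rather than raw win totals.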

AlphaGo vs. You
AlphaGo has demonstrated a huge leap forward in AI and machine learning. The speed at which this two-year-old team evolved this system is truly awe-inspiring. Where does it lead? Well … if you can teach AlphaGo to be almost unbeatable (by a human) at Go, imagine what else you might be able to teach AlphaGo to do.

If you analyze reports for a living, move numbers from one cell in Excel to another, play “What if,” project manage or evaluate productivity in almost any way, a system with AlphaGo’s capabilities is going to learn how to do your job. It will be better at it than you could ever be, which leads to only one logical conclusion: your job function will become a computer function – a couple of clicks on a screen, and AlphaGo will do the rest.

I Think for a Living!
Yes, you do! But so does AlphaGo, and soon a purpose-built version will be able to perform almost every low-level, most mid-level and some high-level white-collar jobs. Importantly, this type of AI will always outperform its human competition. Of course AlphaGo can lose, underperform or make a subjectively or objectively “bad” decision. But the future is clear — no white-collar job is safe. Not yours, not mine, not anyone’s.

This kind of AI can read, write, recognize natural language, recognize pictures, pattern match, simulate and optimize. In fact, the only good news is that no one has any idea how to transfer neural network capabilities between disciplines. AlphaGo is dangerous to 9-dan Go masters, but harmless to people who optimize media purchases. But AlphaMedia (hypothetically) would always out-optimize them. That said, according to DeepMind founder Demis Hassabis, Google’s goal is to develop a generalized AI system – a system that could build on its knowledge and apply its learning to anything. This is an awesome goal, as in, it should fill you with awe!

Thrilled and Scared
I am thrilled by the success of the AlphaGo team and I am absolutely humbled by the power of what they have created. And it really, really scares me. Not because I don’t understand it, but because I do.

AI is a powerful tool. With this breakthrough, we are getting close to the dividing line between raw computing power and cognition – between craft and creativity – between machine and human. Ray Kurzweil has predicted a “Singularity” (where men and machines merge) for quite some time – and, as predicted by Kurzweil’s often-quoted Law of Accelerating Returns, we are closer to it than ever.

Thinking machines will have the capacity to heal the sick, feed the hungry and help us predict and survive natural disasters. They will make amazing lawyers, accountants, doctors, researchers, managers, writers and hundreds of other kinds of workers. They will also make exceptional productivity partners for work, entertainment and the doing of life. Like all technologies, they will ultimately be used more creatively than we can currently imagine.

But here I have to urge caution. Thinking machines will also learn to fight, they will learn to create computer viruses unlike any the world has ever known, they will level the playing field between good guys and bad guys in ways no one can really predict and they will impose symmetry on warfare that is currently asymmetrical – which is what scares me the most.

Congratulations to Google, DeepMind and the incredible team of engineers, scientists and coders who have just changed the world. Alan Turing and Arthur Samuel would be proud.

The post AlphaGo vs. You: Not a Fair Fight originally appeared here on Shelly Palmer


More Stories By Shelly Palmer

Shelly Palmer is the host of Fox Television’s “Shelly Palmer Digital Living” television show about living and working in a digital world. He is Fox 5’s (WNYW-TV New York) Tech Expert and the host of United Stations Radio Network’s MediaBytes, a daily syndicated radio report that features insightful commentary and a unique insider’s take on the biggest stories in technology, media, and entertainment.
