AI may be the dominant theme for humanity going forward. Here’s an introductory overview of the topic: how it impacts several business sectors now, and the hopes and fears for how it may shape our jobs, economy and existence in the future.
The episode “The Monolith,” from the final season of the television show Mad Men, foreshadowed our present day. The scene, set in 1969, finds the anti-hero adman and creative genius Don Draper feeling blue over the installation of an enormous IBM System/360 in the lounge formerly used by his creative team. His firm believes the new computer system will help target specific customers and make better business decisions. Lloyd Hawley, the contractor in charge of the installation, sees a bright future for humankind’s greatest creation. Don only sees it as a threat to his personal significance.
LLOYD: I go into businesses every day, and it’s been my experience that these machines can be a metaphor for whatever’s on people’s minds.
DON: Because they’re afraid of computers?
LLOYD: Yes. This machine is frightening to people, but it’s made by people.
DON: And people aren’t frightening?
LLOYD: It’s not that. It’s more of a cosmic disturbance. This machine is intimidating because it contains infinite quantities of information, and that’s threatening because human existence is finite. But isn’t it godlike that we’ve mastered the infinite? The IBM 360 can count more stars in a day than we can in a lifetime.
DON: But what man laid on his back counting stars and thought of a number?
LLOYD: (Smiling.) He probably thought about going to the Moon.
In an interesting twist, a recent Super Bowl ad for H&R Block featured Jon Hamm, the actor who played Don Draper in Mad Men, pitching the company’s partnership with IBM. H&R Block is partnering with IBM Watson (the Jeopardy! champion) to help customers prepare and file taxes more smoothly, with the side benefit of paying as little, or getting as much back, as possible. H&R Block’s 70,000 human tax experts can identify deductions and credits, but primarily within their niche expertise in certain rules. Watson, on the other hand, can use natural language processing to read the 74,000-plus pages of federal tax code and use them to figure out all possible deductions and credits. In addition, reading and learning from millions of tax returns going forward will only make it smarter.
This is just one example of how AI is helping us go beyond our capabilities, doing what is, from our human perspective, impossible. As taxpayers we couldn’t be happier, but as workers, without infinite capability, we are also feeling a little blue. H&R Block has stated that partnering with Watson will not have any impact on jobs and that the human experts are still in the driver’s seat. I’m sure that’s true, but one can’t help but wonder and worry: when a machine is already doing the impossible, how hard can it be to do the possible?

Before taking a closer look at the negative side of AI, let’s start with examples that may impact how we get to work and how we work, along with some overview and history.
Cars and AI (It’s All About the Ride)
Autonomous, or driverless, cars are perhaps the most popular topic in AI. The automobile has shaped the 20th and 21st centuries in how we live and where we live. It has also shaped culture, politics and the world order. For business and investors, the market for transporting people and goods is measured not in billions of dollars but in trillions. However, one cost attributed to that immense market is heartbreaking: in 2015, over 35,000 people were killed and over 4 million injured in automobile accidents in the United States. Ninety-four percent of those accidents were related to human error.
Current computer technology has already played a role in making human drivers safer. The vision is that AI will completely replace the human driver, using its capability to process large amounts of data from the vehicle, other vehicles and the surroundings to make decisions that drive the number of deaths and injuries toward zero. Many of us have been driving since we were 15 years old. We can drive on a highway with our minds focused on anything but the actual driving, sometimes even consciously forgetting we’re driving at all. Our brain, however, is actively engaged with the effort, and duplicating that processing power through traditional rules-based programming is impossible.

A rules-based program may have difficulty dealing with human drivers who don’t always follow the rules themselves, rolling through four-way stops or never actually obeying the speed limit.
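To make that brittleness concrete, here is a hypothetical sketch in Python. The function, values and scenario are invented purely for illustration; no real driving stack works this way:

```python
# A hand-written rule encodes the letter of the law ("first to stop
# goes first"), but real drivers roll through stops or wave others
# ahead -- cases the rule silently has no answer for.

def who_goes_first(my_arrival_time: float, other_arrival_time: float,
                   other_fully_stopped: bool) -> str:
    # Rule: whoever arrived at the intersection first proceeds first,
    # assuming everyone comes to a complete stop.
    if not other_fully_stopped:
        return "unknown"  # the rulebook has no branch for a rolling stop
    return "me" if my_arrival_time < other_arrival_time else "other"

print(who_goes_first(1.0, 2.0, other_fully_stopped=True))   # -> me
print(who_goes_first(1.0, 2.0, other_fully_stopped=False))  # -> unknown
```

Every unanticipated behavior needs another hand-written branch, which is exactly the trap that learning from millions of miles of real examples is meant to avoid.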
Google’s driverless car project, now a subsidiary called Waymo, has driven over 2 million miles to date in autonomous vehicles on public roads, with human drivers ready to take over when needed. Those takeovers, called disengagements, have dropped to a frequency of about once every 5,000 miles of driving. In this respect, Waymo is far ahead of any other company pursuing driverless technology. Tesla’s Autopilot has gathered 220 million miles of data, which is valuable, but only a tiny fraction of that is without human driver involvement.
Google’s technology, like most others in the field, uses a variety of sensors (lidar, radar, cameras, GPS and odometry) to gather data on map position, car position and the surroundings. Deep learning, along with AI-intensive computing hardware, is the only way to harness all of this data, analyze it and produce the actions needed to fully realize a driverless car. Although it is called testing, what is really going on over these 2 million miles, and the millions more to come, is learning. There are still many challenges to learn and overcome, some of which seem relatively easy for us humans: driving in rain, in snow or in unmapped areas are just a few examples. Our brain is still many billions of miles ahead.
The Birth of AI
In August of 1955, two academics, John McCarthy and Marvin Minsky, with support from two senior scientists, Claude Shannon of Bell Labs and Nathaniel Rochester of IBM, proposed a conference for the summer of 1956.
McCarthy and Minsky held PhDs in mathematics. Rochester was an engineer at IBM who designed the IBM 701, the first mass-produced, general-purpose computer. Claude Shannon was one of, if not the, most important pioneers of computers, communications and the digital world. Their vision was stated in the opening paragraph of their proposal (quoted below) for the conference held at Dartmouth College. The term “artificial intelligence” was coined by McCarthy for the conference, which is considered the birthplace of AI.
“We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.”
Initially, the post-conference research yielded some very positive results: small proof-of-concept projects were impressive, and IBM was funding further research on the corporate side. However, going beyond proofs of concept to solve real-world problems became very difficult. IBM’s shareholders felt funding “thinking machines” was frivolous, and its marketing department sensed a general fear of artificial intelligence among the public, so the company moved on to more practical research. In the ’70s, ’80s and ’90s, AI went through a series of hype, failure and reduced-funding cycles now referred to as the “AI Winter.” The leaders of the original conference did not give up; their efforts through the years advanced AI so it would be ready when time and society were. That time seems to have been the 21st century, as processing power has caught up with the mathematics, making AI Summer a permanent reality. The ability to capture immense amounts of data, together with advances in the math and algorithms that turn that data into information and action, are the primary reasons AI is finally getting off the whiteboard.
Manufacturing and AI (From Flow of Products to Flow of Data)
Automation has been a part of manufacturing for some time; this video link shows how far it has advanced at BMW’s plant in Spartanburg, S.C. However, automation is not the same as AI. The automation shown in the video relies on robotics and traditional rules-based programs that work well for highly repetitive and predictable tasks. GE wants to be one of the leaders in taking itself and other companies to the next stage: introducing AI into manufacturing.
Manufacturing is transforming from an industry focused on the flow of products to one focused on the flow of data. This merging of the physical and the digital is at the heart of what manufacturing experts call Smart Manufacturing, the Smart Factory, Industry 4.0, the Internet of Everything and a few other names. An article by Deloitte provides the graphic (shown below) of the technologies that will play a role in the interplay between the physical and digital worlds.
Some 50 billion devices are expected to be connected to the digital world. GE has been building its software and information technology resources with the goal of becoming one of the major software companies in the world and the leader in this merging of the physical and digital worlds. It has divested its financial assets to focus on the industrial sector, where it has been a leader for over 100 years. GE is naturally positioned to lead in the Smart Factory, Industry 4.0 and IoT, because it doesn’t have to go far to find customers for the industrial internet platform it calls Predix.
Predix is the platform and software GE has been building internally and now provides as a service for other companies to harness and gain insight from all things data. GE’s vision for the factory of the future, from an article in The Washington Post, is described below.
“The ideal intelligent factory, or, as GE calls it, Brilliant Factory, looks something like this. Software, such as GE’s Production Execution Supervisor, captures order data from customers. If, say, an aircraft maker is running low on fuel nozzles, the software can automatically order production on new units. If an entirely new part is needed, the order is dispatched to design teams. Using 3D printing technologies, they can design, prototype and test a new part in hours instead of days or weeks. The part then goes into production on a line that’s mostly ‘staffed’ by highly intelligent robots. Each stage is monitored by sensors that feed data to AI and analytics software, such as GE’s OEE Performance Analyzer, that reside in the cloud. If a defect is spotted, or a new part needs to enter production, the software orders the part and the process begins anew.”
Artificial intelligence will play a large role in making this vision a reality. The potential structured and unstructured data, from marketing, sales, design, manufacturing, logistics and delivery through aftermarket support, is near infinite, and all of it can impact various parts of this chain. Analyzing that vast amount of data, gaining insight, making decisions and taking action is impossible for humans and extremely challenging for rules-based programming, but very possible for AI. As a result, GE has recently acquired two startups specializing in using machine learning (a subset of AI) to find patterns and answers in large amounts of data. GE wants to compete with IBM Watson and feels its domain expertise in the industrial and manufacturing sector will make it the leader.
Machine Learning and Deep Learning
Deep learning startup DeepMind was purchased by Google a few years ago for $400 million, when it had 75 employees; Google was really buying the skills of those employees. The top minds in deep learning are commanding first-year-NFL-quarterback salaries. Nvidia is a leader in GPUs (graphics processing units), which are at the heart of the hardware needed for machine and deep learning. Its stock was the best performer in the S&P 500 last year, going from $25 to $119 by the end of 2016. Why?
Machine learning is a subset of AI and currently the driving force in analytics, making sense of large sets of structured and unstructured data to drive insight and action. That’s pretty much what driverless cars, smart factories and many other applications of AI have in common. IBM’s developerWorks website has a post and graphic (shown below) that highlight how universal the application of AI to analyze data can be.
What is happening? Why did it happen? What is likely to happen? What should I do about it? These four questions may describe what we do from the time we wake up to when we go to sleep, and perhaps even after we go to sleep. Driving a car, determining when maintenance on a machine should occur, choosing what music or movie to recommend, deciding which prospect is the best one to pursue, predicting the weather, detecting and curing cancer and paying less in taxes are all applications where AI, via machine learning, can be applied.
Recognizing objects is a good example of how machine learning can be applied. When a machine needs to read handwritten text, trying to program the solution directly is very difficult because there are so many variations. Instead, providing a large number of examples along with the correct answers allows machine learning algorithms to learn on their own. The algorithms can then interpret new examples for which they don’t have the answers. These types of learning algorithms are also used by driverless cars to detect and label surrounding objects and determine appropriate actions while driving.
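As a minimal sketch of that idea, the following uses scikit-learn’s bundled handwritten-digit dataset. It illustrates the general learn-from-labeled-examples approach, not any specific system mentioned in this article:

```python
# Learn to recognize handwritten digits from labeled examples,
# then interpret unseen examples without being given the answers.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

digits = load_digits()  # 8x8 grayscale images of digits 0-9, with labels

# Split into examples the algorithm learns from and unseen examples
# it must interpret on its own.
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = SVC(gamma=0.001)     # a standard classifier; no digit-specific rules
model.fit(X_train, y_train)  # learn from the labeled examples

predictions = model.predict(X_test)  # interpret new, unlabeled examples
print(f"Accuracy on unseen digits: {accuracy_score(y_test, predictions):.2%}")
```

No one wrote a rule for what a “7” looks like; the model inferred it from examples, which is exactly the point.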
Deep learning is a branch of machine learning that uses artificial neural networks, which pass multiple inputs through many layers of neurons to determine a final output. The technical details are difficult to pin down; there really isn’t a good couple of sentences that can define it for the non-AI/math set, though our brain uses a similar network-like structure for processing. Deep learning has been a very valuable approach for image and speech recognition problems. It relies on huge amounts of data and very powerful processing hardware, GPUs rather than CPUs. The field includes some of the rock stars of AI, such as Andrew Ng, Geoffrey Hinton, Demis Hassabis, Yann LeCun and Yoshua Bengio, all working in academia or at Google, Baidu and a few other tech giants.
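To give a rough feel for the “layers of neurons” idea, here is a minimal NumPy sketch. A real network learns its weights from data through training; the random weights here exist purely to show how an input flows through the layers:

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)  # a common neuron activation function

rng = np.random.default_rng(0)
layer_sizes = [64, 32, 16, 10]  # input -> two hidden layers -> output

# Each layer is a weight matrix plus a bias vector; training would
# adjust these, but here they are random, for illustration only.
weights = [rng.normal(0, 0.1, (m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Pass one input (e.g. a flattened 8x8 image) through every layer."""
    for W, b in zip(weights, biases):
        x = relu(x @ W + b)
    return x  # raw scores for each of the 10 output classes

sample = rng.normal(size=64)  # stand-in for one flattened image
print(forward(sample))
```

Stacking more layers lets the network build up more abstract features, which is what makes the approach so effective for images and speech, and so hungry for data and GPU power.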
IT Services and AI (Listen To the Machines)
Machine data contains a record of all the activity and behavior of customers, users, transactions, applications, servers, networks and mobile devices. It includes configurations, data from APIs, message queues, change events, the output of diagnostic commands, call detail records and sensor data from industrial systems, and more. Splunk, a company started in 2003, is a leader in helping customers turn this massive amount of data into what they call Operational Intelligence. This intelligence can be used by IT organizations to improve maintenance, reduce downtime and increase performance, but also can be used by the rest of the business to improve customer satisfaction and grow revenue.
What started out as indexing and search capabilities is now turning to machine learning algorithms that make sense of machine data to drive insight. Splunk says it has developed a machine learning platform, leveraged by its software and cloud offerings, that can be used by non-technical users. Some of the potential use cases, as highlighted in a Computerworld article by Ben Kepes, are:
- Focused investigation: Identify and resolve IT and security incidents by automatically detecting anomalies and patterns in data (a minimal sketch of this idea follows the list).
- Intelligent alerting: Reduce alert fatigue by identifying normal patterns for specific sets of circumstances.
- Predictive actions: Anticipate circumstances that might otherwise disrupt operations or revenue, and react with steps such as proactive maintenance.
- Business optimization: Forecast demand, manage inventory and react to changing conditions through analysis of historical data and models.
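As promised above, here is a minimal, generic sketch of the anomaly-detection idea: flag readings that deviate sharply from the rest of a series. It is a plain z-score illustration with invented values, not Splunk’s actual algorithm:

```python
import statistics

def find_anomalies(values, threshold=2.5):
    """Return (index, value) pairs more than `threshold` standard
    deviations from the mean of the series."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [(i, v) for i, v in enumerate(values)
            if stdev > 0 and abs(v - mean) / stdev > threshold]

# Hypothetical response times (ms) pulled from a server log; index 7
# is the kind of outlier an operator would want flagged automatically.
response_times = [102, 98, 105, 99, 101, 97, 103, 950, 100, 104]
print(find_anomalies(response_times))  # -> [(7, 950)]
```

Production systems learn what “normal” looks like per metric and per time of day rather than using a fixed threshold, but the underlying idea is the same.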
The same article provides some examples of how companies are using Splunk’s ML platform:
- Telus uses machine learning to monitor noise rise from more than 20,000 cell towers to increase service and device availability and improve mean time to repair (MTTR).
- Zillow uses custom outlier detection to find server pools that cause massive deviations in HTTP 500 errors due to code and configuration changes.
- Kinney Group used the Splunk ML Toolkit for Schmidt Peterson Motorsports at the Indy 500. In conjunction with support from Splunk, Kinney Group monitored track conditions and car performance during the event and qualifying, conducting real-time operational data analysis on all three SPM race cars.
Now that we’ve covered some background and three beneficial cases (transportation, manufacturing and IT services), it’s time to look at the potential negatives. We’ll start with the far-out threat, then turn to the nearer-term, more relevant one.
Worrying About Overpopulation on Mars
Our fascination with someday creating machines with human-level intelligence has been around for a long time in both science and science fiction.
“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”
– I.J. Good, 1965

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
– Handbook of Robotics, 56th Edition, 2058 A.D.
The first passage was written by I.J. Good, a real-life mathematician and WWII code breaker, about the possibility, or inevitability, of an intelligence explosion. The second is from a 1942 short story by Isaac Asimov that introduced his famous Three Laws of Robotics. Fear of an existential threat from artificial intelligence predates AI itself in early science fiction, but even highly regarded scientists like Good felt AI could bring about unintended consequences. Technology has always had unintended consequences, so dismissing fears about the dangers of AI without understanding them would be a mistake.
Today some very highly regarded individuals, such as Elon Musk, Bill Gates and Stephen Hawking, have expressed concern that AI could become an existential threat to humans if we don’t begin designing measures to ensure our safety. The philosopher Nick Bostrom and the physicist Max Tegmark have written on the threat and formed think tanks to research it. None of these individuals are against AI, but they believe our human intelligence will in time create machines with human-level intelligence, and some time after that, machines with superintelligence far greater than any human’s. What control will we have once we reach that level? That is the question on these researchers’ minds.
Of course, none of these famous names are actually AI practitioners. Many of the leading names in AI feel we are so early in the game, with machines not yet having achieved even human-level stupidity, that it’s much too early to worry about dangers to our existence. One common refrain compares the concern to worrying about overpopulation on Mars when we are so far from sending the first human to the planet. A survey of these scientists predicts a 50% chance that human-level intelligence will be achieved around 2050, and a 90% chance by the close of the 21st century. About 25% believe we will never get there.
Worrying About Losing Our Jobs
The original, 17th-century meaning of the term “computer” was “one who computes”: a human performing mathematical calculations. During World War II and the Manhattan Project, many individuals with mathematical backgrounds, primarily women, worked as computers, advancing the development of the atomic bomb and assisting the code-breaking efforts that helped win the war. Later, in the ’50s and ’60s, as seen in the recent movie Hidden Figures, the mathematical calculations needed for aircraft and space travel were performed by human computers. The main theme of that movie is how hard it was for individuals to fulfill their potential in the face of the social and institutional prejudice toward race and gender that was more pervasive at the time. But it also showed the inevitable replacement of the human computers at NASA once the IBM 7090 became faster and more accurate than its human counterparts. This recurring theme of being replaced and losing significance is now even more relevant given the potential advances of AI.
A report on the impact of automation on work by the McKinsey Global Institute (MGI), released in January 2017, found that 51% of all work done globally can be automated with current technology. The headlines about the release focused on the finding that only a small percentage of occupations could be automated entirely and that the process would take decades. Somewhat of a relief, based on those headlines, but the report’s details were concerning. The types of work activities that MGI found could be automated involve collecting data, processing data and predictable physical activities; for the U.S. economy, those activities represent $2.7 trillion in wages.
So on the surface, the concern over job loss seems much more relevant near term than the concern about the end of humanity, and the report did not even factor in advancing technologies such as AI. The MGI report presents automation as a positive trend that increases productivity. The primary means for an economy to grow are labor growth and productivity growth, and there are at least two ways to look at our potential future based on this equation, shown below.
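To make the equation explicit: GDP is the number of workers times output per worker, so its growth rate decomposes, approximately, as

```latex
\text{GDP growth} \;\approx\; \text{labor-force growth} \;+\; \text{productivity growth}
```

Automation and AI push the productivity term up while potentially pushing the labor term down, which is exactly the tension the two views below argue over.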
From the half-empty side: AI will increase productivity, helping GDP grow, but it does so by replacing human workers who may not be able to find other jobs that haven’t also been taken over by automation and AI. In that case, labor force growth goes down. Capitalism depends on supply and demand; automation and AI primarily impact supply, while almost all demand is driven by us humans. In our current system, we create demand through purchasing power that we gain by trading our labor for it. This is where things break down: what’s the point of producing more products and services in a world where fewer people have purchasing power? I’m not sure anyone has yet figured out how to deal with this conundrum, or whether it breaks down early in the process, since the incentive for automation and AI is linked to aggregate demand in the economy.
From the half-full side: “demography is destiny,” as the common phrase goes. Birth rates across the developed and emerging world are trending below, and in some cases far below, the replacement rate of 2.1 children per couple. The only exception has been the United States, and only because of immigration; the non-immigrant birth rate is below the replacement rate. These trends will not change anytime soon. The result is an aging population and a shrinking workforce, both dire consequences for an economy in which one component of growth is labor growth.
This is where the optimists see AI coming to the rescue: replacing the shrinking workforce with productivity growth while providing technology and services that help an aging population stay independent and healthy. Like the half-empty side, this one is not black and white either. There is a growing young population in developing countries that will need jobs; there is still a re-balancing of skills, as even a smaller workforce will not uniformly have the non-automatable skills required; and, as we are already seeing, the collision of differences in skills, race, culture, religion, education and beliefs that results from these imbalances can be messy.
Final Words
A trained dermatologist can visually tell the difference between a harmless blemish and cancer. Scientists at Stanford fed a machine learning algorithm from Google over 127,000 images of skin lesions from over 2,000 different diseases, all labeled with the correct diagnosis. They then fed the AI 1,942 biopsy-proven images of lesions, which were also reviewed by groups of board-certified dermatologists. The AI did at least as well as the doctors, and better than most. The goal is to incorporate this capability into smartphones so individuals can use it for early detection. The vast majority of people do not have easy access to dermatologists, so the hope is that AI could be a life-saving tool for us humans.
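The general approach behind studies like this is to fine-tune a pretrained image network on labeled medical images. Below is a minimal sketch of that pattern using PyTorch and torchvision; the network choice (ResNet18) and the "skin_images/" directory are illustrative stand-ins, not the Stanford study’s actual architecture or data:

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical dataset: one subfolder per diagnosis label.
dataset = datasets.ImageFolder("skin_images/", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Start from a network pretrained on everyday photos, then replace its
# final layer so it predicts our diagnosis labels instead.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one pass over the labeled examples
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

The pretrained network already knows generic visual features (edges, textures, shapes), so relatively few labeled medical images go a long way; that reuse is what makes the smartphone vision plausible.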
One of the characters in Hidden Figures, Dorothy Vaughan (based on the real Dorothy Vaughan), taught herself FORTRAN and eventually became a supervisor of a NASA programming group, recruiting some of the former human computers into it. These kinds of skills transitions will be fewer and more difficult in the AI era, but the hope is that technology disruption always precedes new opportunities, and that those new opportunities will drive new job growth. The fear is that AI’s capability to learn those new opportunities just as humans do means we can’t rely on this positive pattern anymore.
AI can be split into two parts: narrow and general. Current AI lives in the narrow realm, where machine learning and deep learning are applied to very specific problems such as detecting cancer, understanding speech or driving a vehicle. It’s used on the job to transform manufacturing, sales, marketing and logistics, generating more revenue while becoming highly efficient and productive, and it’s the key to making sense of the data that surrounds us. General AI is the dream, where a machine can perform any intellectual task a human can. We are making very good progress in the narrow realm, but in general AI we are barely beginning, if that. AI practitioners believe we are still in the machine-stupidity stage, with decades of progress needed before we reach any further stage. Of course, things can change very fast; it’s definitely a mistake to project the future of an exponential phenomenon like AI using a linear view of its past.
AI’s impact is both good and bad, just as Lloyd and Don viewed the computer differently. Many of our problems can be solved through a narrow approach that improves our lives. On the other hand, a good bit of our work involves solving narrow problems, which is what led the McKinsey Global Institute’s report to suggest that much of our work, collecting data, processing data and predictable physical activities, can be automated. There are very knowledgeable people on both the optimistic and pessimistic sides with equally strong arguments. The next decade may give us a better answer as to who is right.
As far as the existential dangers go, the solution came from my nine-year-old daughter. We watched a documentary on the future prospects of robotics and AI and its dangers to both our lives and our significance. While I sat quiet and pale at the end of it, my daughter calmly looked up at me and said, “Don’t worry, Daddy, we can just push the Off Button.”
About Allari
Allari helps I.T. leaders shine by leveraging proven IT-as-a-Service success models to create customized plans for I.T. Operations and Cybersecurity functions. Customers use Allari’s services to fill specific roles, cover work areas and support knowledge-capture initiatives, as well as to reinforce core competencies such as Security Operations Center, Application Services, Help Desk Services and Software Product Sales, allowing them to be more productive and cost-efficient. Allari takes pride in helping these leaders take control of their destinies by giving them the space they need to build stronger relationships with their businesses. As a result, these leaders get the recognition they deserve, which ultimately helps make I.T. fun again!
The company provides services via offices in the United States, Ecuador, Brazil and India serving customers in over 55 countries. Visit www.allari.com.