Robot Overlords: AI At Facebook, Amazon, Disney And Digital Transformation At GE, DBS, BNY Mellon – Forbes

A robot overlord adjusts Rob’s Open Source Android (ROSA) at the London Science Museum (AP Photo/Alastair Grant)

Searching recently for evidence of artificial intelligence taking over our jobs, lives, and everything else, Tom Davenport came up empty. “Nary a robot overlord to be found,” he declared at the 14th MIT Sloan CIO Symposium. A day later, at the 3rd RE•WORK Deep Learning Summit, I found many humans who are successfully lording over robots, telling their artificial intelligence creations how to perform a number of narrow cognitive tasks.

Davenport, who has published eighteen books on topics ranging from managing organizational data and processes to leading with AI and analytics, is working on his next one, tentatively titled The Cognitive Company. Throughout his distinguished career in a field littered with “hype cycles,” Davenport has opted to respond with healthy skepticism to the typical breathless and enthusiastic pronouncements regarding the latest new new thing. This time around, his let’s-look-at-reality approach is aimed at tempering the hype around the negative implications of artificial intelligence.

Surveying 160 AI-related projects performed by consulting firm Deloitte (where he is a Senior Advisor), Davenport couldn’t find a single job lost because of the adoption of AI (but also no sign of increased economic productivity). What he did find in the real world of companies adopting the latest advances in AI were three types of “cognitive projects”:

  • Robotics and cognitive automation (44% of the projects he surveyed)—automating digital tasks (e.g., replacing lost credit or ATM cards without human intervention).
  • Cognitive insights (48%)—delivering granular insights, detecting patterns (e.g., identifying credit/claims fraud in real-time at banks and insurance companies).
  • Cognitive engagement (8%)—interacting with customers and employees (e.g., virtual digital assistants for customer service).

The fastest-growing AI applications today are the most prosaic ones, Davenport observed. The two he mentioned, Robotic Process Automation and using machine learning for data integration, probably make it easier than other AI applications to quantify and demonstrate the benefits to the business.

Be prosaic rather than overly ambitious, and go slow in becoming a cognitive company, Davenport recommended: take small steps but keep a larger goal in mind, including moving to a platform and developing an ecosystem; pick the right technology for the problem; plan for augmentation (assisting humans in their work), not automation; and build on your existing analytics capabilities and staff, the people best suited for an AI skills upgrade.

Davenport hedged his bets by promising that AI will be “transformative in the long run.” His colleagues at the MIT Initiative on the Digital Economy, Erik Brynjolfsson and Andrew McAfee, argue that it is transformative now. In their best-selling The Second Machine Age, they “underestimated the scale of destruction” of jobs, Brynjolfsson said at the MIT CIO Symposium. Their new book, Machine, Platform, Crowd: Harnessing Our Digital Future, is about the second phase of the second machine age: moving from codifying knowledge to allowing machines to learn on their own. This shift has “massive implications for how we run our companies and live our lives” and we are “underestimating what’s ahead of us,” said McAfee, adding that with “machine learning and interconnected humanity put together, we are in new territory.”

New? Color me skeptical, in a much deeper shade than Davenport’s. The term “machine learning” was coined in 1959 (and the better-sounding “artificial intelligence” in 1955) to denote machines that learn on their own. And talk of interconnected humanity preceded computers by a few hundred years. As for the destruction of jobs, in 1944 physicist Richard Feynman organized a contest between human “computers” (the women who for the previous 60 years had performed complex scientific and military calculations by hand) and IBM’s punched-card machines. Just like chess and Go champions, the human computers lost that contest. With the advent of digital computers (or “giant brains,” as they were called at the time) a few years later, the job of human computer was eventually destroyed, and some of the women who had performed it became the first computer programmers, a completely new occupation and a much bigger job creator.

If you don’t subscribe to Mark Twain’s dictum that history sometimes rhymes, and instead hold to the widespread notion that everything changed with the introduction of the smartphone (or the Internet, or whatever other recent technological development) and the invention of “deep learning” (the new “artificial intelligence”), how about the following present and future numbers from McKinsey:
