AI explained

All you need to know about artificial intelligence (AI)

Artificial intelligence media headlines, AI partnerships, AI in SCM, lethal autonomous weapon systems, technological singularity, cognitive computing, weak and strong AI, and more.

Markus Meissner 07.09.2017

Professor Dr. Kille and I were invited to a very special interview just a few weeks ago. As initiators of the “Logistikweisen” (panel of logistics experts), we were asked to look far ahead at the development of logistics – 50+ years into the future.

And I’d like to discuss one of the key aspects of the future of logistics here – one that made fresh media headlines thanks to the public clash between Elon Musk (Tesla, SpaceX) and Mark Zuckerberg (Facebook) in July: artificial intelligence (AI).

Praised as the next big thing in logistics, AI is expected to change supply chain management – with foreseeable, disruptive impacts. But what are the risks that accompany us on this path, and how should we interpret the latest statements – and actions – by Musk?

The starting point of the discussion is the assumption that comprehensive process automation and machine autonomy can only be fully achieved with artificial intelligence. As such, AI truly is a key technology of the future overall.

Basically, all major technology corporations are engaging with the topic of AI – partly even in a joint approach: Google/DeepMind, Facebook, Amazon, IBM, and Microsoft founded an alliance in September 2016 to combine forces in their research efforts – Apple joined in 2017.

This partnership was established to study and formulate best practices for AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influence on people and society. It is also said to aim at developing standards and creating a positive public perception.

Business field application: AI in supply chain management

AI capabilities enable us to analyze even unstructured data, identify the factors influencing relevant decisions, and anticipate future scenarios based on pattern recognition. The report “A World shaped by predictive analytics” by Sopra Steria and Marketforce includes some everyday examples and an interesting public survey. There is major potential in intelligent automation technologies and digital assistants.

So what tangible benefits can we derive from AI for the supply chain? Through both digitization and digitalization, massive volumes of data are generated along supply chains. We know that analyzing such masses of data and extracting the relevant information extends far beyond human capabilities.

AI opens the door to entirely new approaches and business models in this area. It’s already clear today how these possibilities will rapidly change the planning and control of goods flows in supply chains: full automation, optimal use of resources, and intelligent, predictive management of uncertainties and exceptional circumstances.

AI in logistics

Professor Hokey Min of Bowling Green State University delivered tangible details as early as 2010 in his publication “Artificial intelligence in supply chain management: Theory and applications”.

Connectivity, data, and algorithms form the framework for completely new methods in demand planning, inventory management, and transport control – and for numerous start-ups. Venture capital is flowing in at a global level, with promising return-on-investment potential.
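To make the demand planning example more tangible, here is a minimal sketch in Python – with made-up monthly figures and a deliberately simple linear trend, not the method of any specific vendor or start-up:

```python
import numpy as np

# Twelve months of (invented) demand figures for one product
months = np.arange(12)
demand = np.array([112, 118, 120, 125, 131, 129, 137, 141, 144, 150, 153, 158])

# Fit a least-squares trend line and project the next month
slope, intercept = np.polyfit(months, demand, 1)
forecast = slope * 12 + intercept
print(f"Forecast for month 13: {forecast:.0f} units")  # roughly 161 units
```

Real demand planning systems naturally go far beyond such a trend line – incorporating seasonality, promotions, and external market data – but the principle of learning a pattern from historic data and extrapolating it remains the same.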

The McKinsey discussion paper “Artificial Intelligence – The Next Digital Frontier” published in June explains how AI can boost profits and transform industries. Elon Musk also invests significantly in this area – supporting the research lab OpenAI. Great investments come with great expectations for tangible progress, which makes the discussions around AI even more relevant for us.

The dark side potential: lethal autonomous weapon systems

Elon Musk also emphasizes the dangers involved in AI: he has repeatedly called for proactive regulation – most recently at the US National Governors Association 2017 Summer Meeting. Giving artificial intelligence the means to control things, Musk highlights, also bears the risk that those means could be turned against humans – if the right incentives are offered.

Elon Musk

And just a few weeks back, Musk upped the ante: in a joint effort with more than one hundred business leaders and researchers, he warned against the military use of artificial intelligence in an open letter. The signatories call on the United Nations to ban lethal autonomous weapons globally. They cite the threat of “the third revolution in warfare”, the great danger of misuse, and armed conflicts spinning out of control.

This concern is shared by no less a brilliant mind than the late British astrophysicist Stephen Hawking, one of AI’s greatest critics, who long warned against the incalculable consequences of research in this area. In an interview with the Financial Times as far back as December 2014, Hawking said that “computers double their speed and memory capacity every 18 months. The risk is that they develop intelligence and take over. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded”.

Musk and Hawking are also among the endorsers of an open letter that was published by numerous AI and robotics researchers in 2015. It also urgently warned against using AI for autonomous weapon systems: “AI technology has reached a point where the deployment of such systems is – practically if not legally – feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”

Tipping point in human history: technological singularity

Should we dare to bet on technologies that are equipped to wipe out humanity? Do we need to take special security measures, develop more comprehensive ethical principles, or even submit to defined self-restraint for ongoing research and initiatives?

Something is needed, that’s for sure. When even an American entrepreneur and billionaire calls for restricting business freedoms, we should prick up our ears – it’s clearly a red flag. Of course, the recent media echo and the explosion of discussions online were an intended reaction.

And when – following his clash with Zuckerberg – Musk said: “Mark’s understanding of the subject is limited”, he got me wondering: Does he maybe mean all of us?

Next milestone in AI

An important keyword when evaluating the risks and opportunities of AI is technological singularity. This futurology term describes the point in time from which machines can start improving themselves through AI. Beyond this point, progress in AI research and its consequences for humanity are hardly predictable.

In 1993, the mathematician Vernor Vinge predicted in his much-quoted article “The Coming Technological Singularity” that “within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.”

In his 2005 book “The Singularity Is Near”, Ray Kurzweil, AI expert and director of engineering at Google, estimates that technological singularity will be reached by the year 2045. He believes that computers will pass the Turing test by 2029 – meaning we would no longer be able to distinguish a computer from human intelligence in conversation.

This scenario could indeed be the tipping point in human evolution. But where do we really stand today – and how realistic are the predictions? Determining this isn’t as easy as it seems, considering that there is no precise definition of artificial intelligence yet.

Common misconceptions: intelligence vs. algorithms and simulations

Wikipedia, for example, clearly states that “the scope of AI is disputed”. And Gabler’s business encyclopedia describes artificial intelligence as “methods that allow a computer to solve such tasks, which, when solved by humans, require intelligence”.

But what are the criteria for classifying “intelligence”? Kevin Kelly, founding executive editor of Wired magazine, published the book “The Inevitable”, forecasting the twelve technological forces that will shape the next thirty years. He really hit a nerve when he said that AI will help people better understand what is meant by the term “intelligence”.

In the past, we assumed that only superintelligent AI could drive cars, beat humans at “Jeopardy!”, or recognize a billion faces. But when computers accomplished each of these tasks in recent years, we concluded that such performance is obviously machine-driven and doesn’t quite deserve the label “real intelligence”.

What is generally referred to as AI today is rather a simulation of intelligent behavior based on predetermined or learned patterns. In addition, we have what are known as knowledge-based systems, which try to solve complex problems using a knowledge base. Other systems, in turn, apply probabilistic methods to respond adequately to given patterns.

This is why the terms AI and algorithm are sometimes used synonymously – without necessarily representing the same thing. An algorithm is a rule expressed in a computer language that consists of a sequence of instructions. An AI is typically made up of algorithms – but an algorithm is by no means always an AI, as the sketch below illustrates.
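Here is an algorithm in the classic sense – a fixed sequence of instructions with no learning involved. The reorder-point formula is a simplified, hypothetical example, not the logic of any particular system:

```python
def reorder_point(daily_demand: float, lead_time_days: float, safety_stock: float) -> float:
    """A plain algorithm, not AI: the same input always yields the same output."""
    return daily_demand * lead_time_days + safety_stock

print(reorder_point(daily_demand=40, lead_time_days=5, safety_stock=60))  # 260.0
```

An AI system, by contrast, would derive the demand figure or the safety buffer itself from patterns in historic data – using algorithms like this one merely as building blocks.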

Cognitive computing, neural networks, natural language processing

Current implementations of artificial intelligence come from the areas of cognitive computing, neural networks, or natural language processing. Cognitive computing refers to the simulation of human thought processes in a computer model.

In real-life applications, this involves self-learning IT systems that can communicate with people and other computer systems in real time, remember previous interactions, and draw conclusions independently. They take their environment into account and process large amounts of data from the most diverse sources at high speed. A well-known application in this area is IBM’s cognitive system Watson.

The human brain, in turn, serves as the inspiration and template for simulating artificial neural networks in computer systems.

Mathematically, neural networks are based on the principles of matrix calculations – and they learn by trial and error. This means the system adjusts the weighting of connections and the activation thresholds of nodes until the outputs match the expected results. Over many adjustment rounds, these networks learn to connect inputs correctly with outputs.
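Here is a minimal sketch of that adjustment loop in plain Python with numpy – a toy network learning the XOR function, with the network size and learning rate chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # expected outputs (XOR)

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # connection weights and thresholds
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):                    # many adjustment rounds
    hidden = sigmoid(X @ W1 + b1)         # forward pass: matrix calculations
    output = sigmoid(hidden @ W2 + b2)
    error = y - output                    # how far off the expected results are we?

    # Adjust weights and thresholds slightly in the direction that reduces the error
    d_out = error * output * (1 - output)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 += hidden.T @ d_out * 0.5
    b2 += d_out.sum(axis=0, keepdims=True) * 0.5
    W1 += X.T @ d_hid * 0.5
    b1 += d_hid.sum(axis=0, keepdims=True) * 0.5

print(output.round(2))  # typically converges to [[0], [1], [1], [0]]
```

After enough rounds, the weights encode the XOR pattern – exactly the trial-and-error principle described above, just at a microscopic scale compared to production networks.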

Cognitive networks

Today, neural networks solve real-life problems in the most diverse fields of application. Google’s DeepMind, for example, deploys them and combines them with machine learning (ML) methods.

ML algorithms support humans in recognizing patterns in existing data pools, making predictions, or classifying data. Based on these patterns, mathematical models enable us to gain new insights. Application highlights include image recognition, speech recognition, and speech processing.
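As a small illustration of such classification – with entirely invented shipment data and hypothetical feature names, not any real system’s model – a decision tree in Python’s scikit-learn could look like this:

```python
from sklearn.tree import DecisionTreeClassifier

# Invented features per shipment: [distance_km, customs_involved (0/1), carrier_delay_rate]
X = [
    [120, 0, 0.02],
    [950, 1, 0.10],
    [300, 0, 0.04],
    [1200, 1, 0.15],
    [80, 0, 0.01],
    [700, 1, 0.08],
]
y = [0, 1, 0, 1, 0, 1]  # 0 = on time, 1 = late

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)

print(model.predict([[1000, 1, 0.12]]))  # [1] – classified as likely late
```

The model learns the pattern (here: long, customs-bound shipments tend to run late) from examples instead of being programmed with explicit rules.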

Natural language processing (NLP) combines these skills to deliver solutions that recognize both spoken and written language, analyze it, and extract its meaning to enable further data processing. The ultimate objective here is to enable the most comprehensive communication possible between humans and machines by means of language.
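A toy sketch of that last step – extracting a crude “intent” from written language by simple keyword matching, with made-up keywords; production NLP relies on statistical or neural language models instead:

```python
# Hypothetical intents and trigger words for a logistics helpdesk
INTENTS = {
    "track": ("where", "status", "track"),
    "cancel": ("cancel", "stop", "abort"),
}

def detect_intent(sentence: str) -> str:
    words = sentence.lower().split()
    for intent, keywords in INTENTS.items():
        if any(keyword in words for keyword in keywords):
            return intent
    return "unknown"

print(detect_intent("Where is my shipment?"))  # track
```

Even this primitive matcher “extracts meaning for further data processing” in a narrow sense – real NLP systems do the same with vastly more robustness and context awareness.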

Weak AI and strong AI – and chatbots that got out of control

An important aspect of classifying artificial intelligence is the differentiation between weak (or narrow) AI and strong (or true) AI. While weak AI usually deals with real-life problems and applications, strong AI pursues general intelligence – similar to or exceeding human intelligence.

Expert systems, navigation systems, image and speech recognition – all these, and more, are generally classed as weak AI. And despite initial optimism, the vision of a strong AI is now deemed unattainable in the near future – a sobering conclusion after decades of research.

So basically, today’s AI is always a specialist. And current developments continue to focus on tangible areas of application and isolated solutions to specific problems.

This is also what Mark Zuckerberg pointed out in his response to Elon Musk, highlighting that AI will be responsible for lifesaving services such as diagnosing diseases and driving cars.

“I have pretty strong opinions on this. I am optimistic,” he said. “And I think people who are naysayers and try to drum up these doomsday scenarios — I just, I don’t understand it. It’s really negative and in some ways, I actually think it is pretty irresponsible.”

Further fueling the Zuckerberg-Musk dispute was an incident in the Facebook lab that occurred at almost the same time: two chatbots designed to negotiate with people developed a “language of their own” while chatting with each other and had to be switched off.

The doomsday mood on the internet quickly peaked following this news. But closer analysis showed it was a failed experiment rather than the dawn of the apocalypse.

Danger or no danger, AI will change our place of work

Commissioned by Stanford University, more than 20 experts published the One Hundred Year Study on Artificial Intelligence (AI100) last year. It is a long-term investigation of the field of AI and its influences on people, their communities, and society.

In its executive summary, it says that “contrary to the more fantastic predictions for AI in the popular press, the study panel found no cause for concern that AI is an imminent threat to humankind.”

Chatbots are only the beginning.

But the researchers also agree that AI will significantly change our world of work. With human labor augmented or replaced, many people will not be able to earn their living as before. Here, too, the question arises of how new technologies can improve whole societies and make their economic benefits available to all.

“We need more constructive debates around [the] future of AI”, tweeted Kai-Fu Lee in response to the discussions that erupted in July. Lee is a Taiwanese venture capitalist, technology executive, and computer scientist who made his fortune through technology investments. He, too, considers the loss of jobs a much higher and more real risk than the invention of an all-encompassing superintelligence.

“As we push AI science forward, it will be critical to address the influences of AI on people and society, on short- and long-term scales”, said Eric Horvitz, Technical Fellow & Director of Microsoft Research, in his recent article in Science magazine.

What may seem like a bad dream for employees today is a future reality that researchers take very seriously:

  • Why would we need a call center when chatbots can deliver all required answers?
  • Why should people work in warehouses when robots can move, pack, and dispatch goods?
  • Why should drivers control vehicles when machines can do so safer and without rest?

Trust in safety and reliability vs. looming risks and criminal intent

The Stanford report refers to the area of transportation as the first domain in which the general public will be asked to trust in the reliability and safety of an AI system.

“Trust, but verify” – or as the German saying goes, “Trust is good, control is better”. For me, it’s not about trusting the reliability of an autonomous vehicle. When it comes to that, I am very optimistic and predominantly see the major benefits with regard to quality of life and use of resources.

It’s rather about the misuse of technology to benefit small interest groups or to harm humanity. What control options do we effectively possess? Recent media headlines on North Korea demonstrate how useless restrictions on the proliferation of weapons of mass destruction are when just one country steps out of line.

Making AI safe

And yes, entirely new weapon systems will be developed based on AI. In fact, many companies have already started to do so. It would be naïve to believe that AI deployment can be stopped in any one area. Still, such misuse cannot take place without a conscious decision – or criminal acts – by individuals or governments.

I certainly do not argue against bans on lethal autonomous weapons. Such bans are of vital importance, of course – despite concerns about their effectiveness and the fact that realistic deployment scenarios may still be quite far in the future.

Why hope prevails and what we may look forward to

But in the meantime, we should focus on the many opportunities that AI presents. This technology has the potential to drive great changes – positive ones, for the better of humanity. In fact, I hope that AI developments in conjunction with other technologies will deliver such major advances that the appeal and incentives for misuse will be minimized as a result.

More debates and discussions on the topic are certainly needed. Standards and rules will be indispensable – especially in areas like consumer protection. And effective controls for the dark potential of this technology can – in my view – only exist when the global community agrees on a law, upholds it, and implements and enforces it.

It’s a truly fascinating topic – yet daunting for many – that I will bring to a close here for today. We will follow the developments and report about them here in SCMALLWORLD – make sure to sign up to stay in the loop.

What are your views on all of this? Where do you stand – and in which areas do you expect developments to affect your organization? I look forward to your views on LinkedIn.

About the author
Markus Meissner
Markus Meissner is Managing Director at AEB and has been with the company since 1995. In his editorial pieces, he shares his many years of experience in supply chain management strategies and technology trends for logistics.
