Algorithms already perform a plethora of complex tasks for us. But will they one day also make real decisions for us?
Armed forces are already investing billions in artificial intelligence (AI) that can identify targets and fly aircraft independently. The Australian government has software check its outgoing welfare payments and reclaim them if necessary. And entrepreneurs around the world are asking themselves which tasks within their organizations artificial intelligence could take over in the future.
And the answer is that many of them can be. On the one hand, algorithms are capable of processing incredible amounts of data. On the other hand, they can constantly improve themselves by tirelessly accumulating experience, analyzing examples and refining their evaluation patterns accordingly.
However, they still cannot make real decisions. A defining feature of decisions, according to our definition, lies in their contingency: the possibility of choosing among several options.
Human decision-makers, for example, can freely bring in factors that are not actually intended to be part of the decision-making process. An insurance clerk could spontaneously make the settlement of a claim also depend on how long the policyholder has been a paying customer – a factor that is completely irrelevant to the assessment of the case itself, but can be quite significant for the company.
Artificial intelligence, however, does not make autonomous decisions; it merely follows the more or less broad paths that programmers have laid out for it. But even the most prudent software designer can never foresee, consider and program in the multitude of environmental factors that affect organizations, and that may have to be dealt with by deviating from the rules. From our point of view, this is one of the crucial limitations of artificial intelligence.
Another point: what an AI system learns naturally depends on the training data it is fed and the data it searches for automatically. But the choice of learning material for a neural network is just as subjective as a person's choice of a particular pair of eyeglasses: both determine that certain things are perceived in a certain way, while others are overlooked, even if they might be relevant. This basic premise does not change if an AI system is programmed to look for additional learning material by itself.
But what would happen if artificial intelligence were one day to develop itself so intelligently that it veered off course from its originally intended algorithm? What if it became so clever that it outgrew itself (and us) as a super-brain and altered the original intentions of its programmers on its own initiative?
First, from a current perspective, there is no sign that this rather clichéd vision could one day indeed become reality. Everything an AI system does ultimately goes back to a program written by humans. However, it has been clear from the very beginning how problematic this so-called deep learning can be. An AI brain that sifts through and evaluates vast amounts of data is a black box whose learning paths we can trace only with great effort. And this is cause for concern, as a research group at the Berlin-based Heinrich Hertz Institute (HHI) has shown. The researchers analyzed neural networks for image recognition and discovered some amazing things: software designed to recognize photos of horses did not rely on the image content, but on copyright information pointing to horse forums – not a completely stupid 'decision', but not exactly a smart one either. Another artificial intelligence that was supposed to identify trains was mainly oriented towards tracks and platform edges. The network did not consider the locomotives and wagons themselves particularly important.
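The horse-photo shortcut can be caricatured in a few lines of code. The following sketch is entirely invented for illustration – the features, numbers and the simple perceptron are not the HHI setup – but it shows the mechanism: when a spurious feature (a watermark) perfectly predicts the label in the training data, a learner can latch onto the shortcut instead of the content, and then fails on a perfectly clear horse photo that lacks the watermark.

```python
# Invented toy data: each 'photo' has two features --
# (how horse-like the content is, whether a horse-forum watermark is present).
# Label: 1 = horse, 0 = not a horse. In training, the watermark
# perfectly predicts the label; the content signal does not.
train = [
    ((0.9, 1.0), 1),  # clear horse photo, carries the forum watermark
    ((0.2, 1.0), 1),  # blurry horse photo, carries the watermark
    ((0.9, 0.0), 0),  # horse-shaped cow, no watermark
    ((0.1, 0.0), 0),  # empty field, no watermark
]

def predict(w, b, x):
    """Linear classifier: 'horse' if the weighted score is positive."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Plain perceptron training. It converges here because the watermark
# alone separates the two classes perfectly -- the shortcut.
w, b = [0.0, 0.0], 0.0
for _ in range(100):
    for x, y in train:
        err = y - predict(w, b, x)
        w[0] += err * x[0]
        w[1] += err * x[1]
        b += err

print("learned weights (content, watermark):", w)

# The shortcut backfires: a clear horse photo *without* the watermark
# has exactly the features of the watermark-free cow above, which the
# trained model must reject -- so the horse goes unrecognized.
print("horse without watermark ->", predict(w, b, (0.9, 0.0)))  # 0
```

Because the content signal overlaps between the classes while the watermark separates them cleanly, any classifier of this simple form that fits the training data is forced onto the watermark – which is precisely why such 'decisions' look reasonable on the training material and fail in the field.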
The bottom line is that artificial intelligence is amazing, but sometimes also amazingly limited. And if it is allowed to run unchecked, machine missteps can quickly be followed by human ones. In image recognition software, such errors may merely spark our curiosity; in the military, in medicine or in machine production, they could be devastating.
Artificial intelligence opens up new resources of influence – for players who have not been in the game before
The upheaval triggered by digitization and computer-aided intelligence is enormous. However, it does not come about because artificial intelligence gains decision-making power beyond what humans have programmed into it. Rather, it arises because digitization provides new resources of power and influence, and because these resources now lie at the disposal of players who have never had such power or have never even been involved in the game before. Meanwhile, others are losing out, in some cases dramatically. Amazon, for example, is massively decimating the retail segment and forcing manufacturers to use the Amazon distribution channel, shifting the balance of power in the markets. Occupational groups shrink if their work can be completely or even partially replaced by algorithms – for example, when algorithms instead of clerks now process the settlement of insurance claims.
Even in error-free normal operation, the growing influence of algorithms inevitably alters the balance of power in organizations and society. This happens indirectly, as people align their actions and decisions with the presumed preferences of the algorithms (a trivial example: web designers use search engine optimization so that the Google algorithm finds their content more easily). And as a direct consequence, all those who can program, train and analyze the algorithms (or have them analyzed) gain power.
Put simply, all those who have artificial intelligence (and access to it) in the future will tend to have more power.
JUDITH MUSTER and THOMAS SCHNELLE
are Partners at Metaplan Germany.
Here you can find the posts already published on the topic ‘when data hits the organization’:
Sebastian Barnutz/Franz-Josef Tillmann:
‘More data = more rationality’ – we disagree!
Finn-Rasmus Bull/Judith Muster:
People overestimate the role of data in decision making situations
Johanna Meschede/Zeljko Branovic:
Informal loopholes don’t disappear when data comes into play