The term “Fourth Industrial Revolution” was coined by Klaus Schwab, founder of the World Economic Forum, in 2016, and was quickly adopted around the world. Its defining feature was identified as the deployment of “cyber-physical” systems in both life and work. Building on the computer era of the Third Industrial Revolution, the Fourth Industrial Revolution and its cyber-physical systems bring disruption into a significantly more intimate and personal domain. It is important to recognise that this does not imply a science-fiction cybernetic merging of technology and humanity. The implication is rather that physical things are increasingly powered by an underlying cyber or digital element.
Cyber-reality is built on algorithms, which are essentially formulas and lines of code that tell the machine what data to expect, in what format, and what to do with it. These algorithms can be surprisingly simple, or so complex that they exist within a practical “black box”, functioning at a level beyond human understanding or review.
The black-box nature of these algorithms was a factor when, in 2017, Facebook created a network of Artificial Intelligence (AI) chatbots – and shut it down after only four weeks. Two of the AIs, casually named Bob and Alice, were observed communicating with each other. That in itself was not unusual, because the network was set up for this purpose, but Bob and Alice started to spontaneously create their own language. Worryingly, the engineers behind the programme could not understand what Bob and Alice were saying, nor did they understand the underlying algorithms that caused this behaviour. The whole programme sat within a “black box”, developing without the specific intervention of human beings.
History and context
Highly complex black-box algorithms are the confusing end of a relatively simple process we have been using for millennia. Throughout history, algorithms have been used to understand and simplify life, and to drive technological innovation and development.
Imagine you were a Neanderthal in the days before the discovery of the wheel, living in a world of algorithms. Your value was based on algorithms that captured your usefulness in getting as much food as possible home from a hunt or forage before it spoiled.
1 dead Mammoth x distance from cave = number of people to get food home
Overnight, the discovery of the wheel changed that, and it was a new algorithm that was used to determine the impact of the change.
(1 dead Mammoth x distance from cave) / number of wheels = smaller number of people to get food home
Essentially, this would be an algorithm used to review the human value proposition in a changed world.
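The two formulas above can be sketched as a small piece of Python. This is purely illustrative: the function name, the `carry_factor` constant, and the ceiling-based head-count are my own assumptions, since the original text only gives the proportional relationship.

```python
import math

def people_needed(mammoths, distance_km, wheels=0, carry_factor=1.0):
    """Toy 'human value' algorithm from the text: effort scales with
    load x distance, and wheels divide the number of people required.
    carry_factor is an assumed constant converting effort to head-count."""
    effort = mammoths * distance_km * carry_factor
    if wheels > 0:
        effort /= wheels
    # You cannot send a fraction of a person, so round up.
    return math.ceil(effort)

# Before the wheel: 1 mammoth, 12 km from the cave.
print(people_needed(1, 12))            # 12 people
# After the wheel: the same haul with 4 wheels.
print(people_needed(1, 12, wheels=4))  # 3 people
```

The point of the sketch is the second call: the same inputs, passed through a slightly different formula, yield a very different answer about how many humans are needed.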
This dynamic was repeated with the domestication of animals, and then with the harnessing of the power of steam, electricity, and the computer. In each era, humans had a value proposition that was broken down into an algorithm and ultimately replaced by technology.
As a consequence, at each point of transition we as human beings were forced to ask: “If this becomes pervasive, why will I be needed?” It is a question of needing to redefine the value we add to society when what we do no longer needs to be done by us, or could be done better by an algorithm-powered machine than we ever could do it.
The theory behind the algorithms
Simplistically, algorithms rely on a few basic elements.
- A clean functional set of data or information that functions as an input
- A repeatable action that can be represented by a mathematical formula or lines of computer code
- An output that is delivered more efficiently, more correctly, or at greater scale than humans can deliver it
- Eventually, the linking together of many of these processes to create a significant enough shift that it moves society into another socio-economic-industrial era
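The first three elements above can be shown in a deliberately small sketch. The delivery data and the cost formula are invented for illustration; the point is only the shape of the process: clean input data, a repeatable action expressed as code, and an output produced at whatever scale the input grows to.

```python
# 1. A clean, functional set of input data (invented example values).
deliveries = [
    {"parcels": 10, "km": 5},
    {"parcels": 3,  "km": 12},
    {"parcels": 7,  "km": 2},
]

# 2. A repeatable action represented as a small formula in code.
def route_cost(job):
    return job["parcels"] * job["km"]

# 3. An output delivered faster and more consistently than doing it
#    by hand, no matter how long the input list becomes.
costs = [route_cost(job) for job in deliveries]
print(costs)        # [50, 36, 14]
print(sum(costs))   # 100
```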
Often, the evolution of these algorithms requires some peripheral human input in order to build sufficiently clean data to power the shift. We have all participated in this process without even being aware of it. When we access a new service on the internet, we are often asked to prove that we are human by filling in a word or number sequence in a Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) field. Sometimes we are given a series of images and asked to identify traffic lights, stop signs, or pedestrian crossings. This process cleans up the data being used by algorithms in various Google or Fourth Industrial Revolution technologies.
The numbers and letters we are required to fill out are used to clean up the data captured by Google Street View vehicles – cars driving around with cameras on their roofs, capturing the images that give us those amazing views. These cars caught street names, building names, signs, and numbers that meant nothing to the AI stitching it all together and linking it to existing maps. Similarly, the photos in which we identify traffic lights and crosswalks are used to help Waymo, Google’s self-driving car technology, recognise them.
The AI needs the data to make sense, and it needs massive volumes of it to be useful. Once it has sufficient information to create a benchmark, it compares new images against it to determine what it is seeing and how to respond. Human intervention was needed at the outset to teach the AI algorithm what it needed in order to execute the repeatable action associated with approaching a stop sign, crosswalk, or traffic light.
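The “compare new images against a benchmark” step can be sketched in miniature. Real sign recognition uses deep neural networks over raw pixels; in this hedged toy version each sign type is reduced to an invented three-number feature vector, and classification is just a nearest-benchmark comparison.

```python
# Toy illustration of "compare new data against learned benchmarks".
# The feature vectors are invented stand-ins for what a real vision
# model would learn from millions of labelled images.
benchmarks = {
    "stop sign":     (0.9, 0.1, 0.8),
    "traffic light": (0.3, 0.2, 0.1),
    "crosswalk":     (0.1, 0.1, 0.2),
}

def classify(features):
    """Return the benchmark label whose features are closest
    (smallest squared distance) to the new input."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(benchmarks, key=lambda label: dist(benchmarks[label], features))

print(classify((0.85, 0.15, 0.7)))  # stop sign
```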
However, self-driving cars and trucks already work, and we use Street View and other mapping tools all the time – so why do we still get images and words in CAPTCHA? Simply, we are being used to make sure the AI gets it right. The images, words, and numbers we are fed now come from a pool where the AI is uncertain or its determination is ambiguous. We humans can still add value in the shades of grey within the data. Every time we hit “submit” on the CAPTCHA tool we make the AI better, and in time the images will become more difficult to identify. We will then pass a point where the AI identifies them better than we do, and the images we are fed in CAPTCHA will be there for the AI to ensure that it functions at a level that exceeds human ability (it will be looking for where we get it wrong, not right). This is the ultimate end point for all cyber-physical algorithms… learn from us, then exceed us, in order to support us and make our world a better place to live and work.
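Routing only the ambiguous cases to humans, as described above, can be sketched as a simple confidence filter (in machine-learning terms this resembles what is often called active learning). The image names, labels, confidence scores, and the 0.90 threshold below are all invented for illustration.

```python
# Sketch of sending only the AI's uncertain cases to humans.
# All values here are invented; real systems produce these scores
# from a trained model.
predictions = [
    {"image": "img_001", "label": "traffic light", "confidence": 0.99},
    {"image": "img_002", "label": "stop sign",     "confidence": 0.62},
    {"image": "img_003", "label": "crosswalk",     "confidence": 0.97},
    {"image": "img_004", "label": "stop sign",     "confidence": 0.55},
]

THRESHOLD = 0.90  # assumed cut-off: below this, a human double-checks

needs_human = [p["image"] for p in predictions if p["confidence"] < THRESHOLD]
print(needs_human)  # ['img_002', 'img_004']
```

Everything above the threshold is handled automatically; the grey cases below it are exactly the ones we see in CAPTCHA challenges.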
The limits of AI and the continuing value of humans
Algorithms add value by automating tasks that are functionally “black and white”. In this world, humans still have value in helping the machine navigate the grey.
The Achilles heel of most current AI is that it can only function effectively within the very narrow band for which it was specifically developed. Outside these rather narrow parameters, the AI simply does not work.
What makes the Fourth Industrial Revolution interesting is that it is driven by the reality that within the narrow bounds for which a specific AI is developed, it is increasingly true that AI outperforms humans. The AI cannot pivot and switch into something else like a human can, but from a task-driven and outputs-oriented perspective, AI is superior to humans.
Why do Waymo and Tesla only do self-driving vehicles with four wheels? Self-driving motorcycles are being developed by BMW and others, but the technology developed for self-driving cars cannot be picked up and used for self-riding motorcycles; it is a completely different product and development process – the algorithms only work for the narrow context they were written for. A human being, however, can drive a car, climb out of it, and ride a motorcycle (or skateboard, or bicycle) – or even go for a swim or a run.
Many of us have seen the incredible feats achieved by Boston Dynamics’ autonomous robots Spot, Atlas, Cheetah, and others. Yet even these amazing autonomous robotic technologies have none of the most basic human cross-functional flexibility. Within their area of development they out-perform humans, but move them even slightly adjacent to that area and they function worse than a toddler.
AI teaching AI and the disintermediation of human value
In these shades of grey we find the labels that define our current human value proposition. We are told that we are still needed for things like creativity, flexibility, and intuition. As AI algorithms develop, these are exactly the areas that will begin to erode.
Creativity and innovation are currently being influenced by a field of Artificial Intelligence called Generative Adversarial Networks (GANs). In a GAN, two AIs are pitted against each other: a generator produces candidate outputs and a discriminator judges them against real examples. They are put into a feedback loop, with the generator trying to fool the discriminator into thinking that its output has not been created by an AI.
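The adversarial feedback loop can be shown at toy scale. Real GANs train deep networks on images; this hedged one-dimensional sketch uses a linear “generator” and a logistic “discriminator” with hand-derived gradient updates, purely to make the loop concrete. The target distribution and all learning-rate choices are invented for illustration.

```python
import math, random

random.seed(0)
REAL_MEAN = 4.0  # the "real data" is Gaussian noise around 4 (invented target)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Generator g(z) = w*z + b starts far from the data; discriminator
# d(x) = sigmoid(a*x + c) tries to score real samples high, fakes low.
w, b = 1.0, 0.0
a, c = 0.0, 0.0
lr = 0.05

for step in range(3000):
    x = random.gauss(REAL_MEAN, 1.0)   # a real sample
    z = random.gauss(0.0, 1.0)         # generator noise
    gz = w * z + b                     # a fake sample

    # Discriminator step: gradient descent on -log d(x) - log(1 - d(gz)).
    d_real, d_fake = sigmoid(a * x + c), sigmoid(a * gz + c)
    a -= lr * (-(1 - d_real) * x + d_fake * gz)
    c -= lr * (-(1 - d_real) + d_fake)

    # Generator step: try to fool the discriminator (descend -log d(gz)).
    d_fake = sigmoid(a * gz + c)
    grad = -(1 - d_fake) * a
    w -= lr * grad * z
    b -= lr * grad

# After the loop, the generator's output distribution has drifted
# towards the real data it was trying to imitate.
fake_mean = sum(w * random.gauss(0, 1) + b for _ in range(1000)) / 1000
print(round(fake_mean, 1))
```

Each pass through the loop is one round of the contest: the discriminator gets slightly better at spotting fakes, which forces the generator to produce slightly more convincing ones.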
The end product of this process is a level of creativity similar to that of a person, but delivered in a fraction of the time. Companies like Procter & Gamble are using GANs to fast-track some of their product development. P&G used GANs for some design changes to diapers they were developing. The GANs developed and tested a range of images until a selection was available for final consideration by the product development team. The process took a fraction of the time a creative agency or team would have taken.

The internet scourge of deepfake videos and images that are indistinguishable from “real” also comes from this technology. AIs in competitive loops with each other manipulate images in the public domain to make politicians and celebrities appear to say things that they never actually said. In the Democratic primaries for the 2020 US Presidential elections there was real concern about the use of various deepfake technologies. Demo videos were made showing Barack Obama, Pete Buttigieg, Elizabeth Warren, and others in situations that were not real. No evidence of this happening in the elections came to light in 2020 – mainly because the deepfakes were still recognisable. Even the best ones could not cross what robotics researchers call the “uncanny valley” – where everything looks almost right, but our gut tells us it isn’t. But, in time, the feedback loop of this development process will overcome these issues and we will not be able to distinguish real from fake.
Even love and passion are not exempt. Tinder, Bumble, OkCupid and other online dating apps have cyber-charged the dating game. In 1962, David Gale and Lloyd Shapley published an algorithm that solved the “Stable Marriage Problem”. Put an equal number of single men and women into a room – in 1962 the problem was framed around a heteronormative understanding of marriage – and using their formula you could come up with the best combinations for happy and stable marriages. Their algorithm was the precursor to what online dating apps do today.
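The Gale–Shapley “deferred acceptance” procedure is simple enough to sketch in full. The names and preference lists below are invented for illustration; the algorithm itself is the 1962 original.

```python
def stable_match(proposers, responders):
    """Gale-Shapley deferred acceptance (1962).
    proposers/responders map each name to a preference-ordered list of
    names on the other side. Returns {responder: proposer} pairings."""
    free = list(proposers)                   # proposers not yet engaged
    next_choice = {p: 0 for p in proposers}  # next preference to propose to
    engaged = {}                             # responder -> current proposer
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in responders.items()}
    while free:
        p = free.pop(0)
        r = proposers[p][next_choice[p]]
        next_choice[p] += 1
        if r not in engaged:
            engaged[r] = p                   # r accepts provisionally
        elif rank[r][p] < rank[r][engaged[r]]:
            free.append(engaged[r])          # r trades up; old partner freed
            engaged[r] = p
        else:
            free.append(p)                   # r rejects p; p proposes again
    return engaged

# Invented preferences for illustration.
men = {"Al": ["Bea", "Cat"], "Dan": ["Bea", "Cat"]}
women = {"Bea": ["Dan", "Al"], "Cat": ["Al", "Dan"]}
print(stable_match(men, women))  # {'Bea': 'Dan', 'Cat': 'Al'}
```

The result is “stable” in a precise sense: no man and woman would both prefer each other over the partners the algorithm gave them.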
Humans tend to fall in love in ways that are similar and repeatable. Tinder has used a ranking system called the “Elo score” (named after the chess rating system) to present individuals with profiles of people who are most likely to match. Tinder’s Elo works on the assumption that you will respond positively to profiles that were selected by others who are most like you. These companies have turned this into formulae and algorithms that increase our chances of finding love and connection when we use their platforms. Falling in love has been reduced to AI processing your age, location, and brief bio, and seeing where it overlaps with those of people like you (of any gender).
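Tinder has never published its exact formula, so the sketch below shows the standard chess Elo update the system was named after, with a swipe treated as a “game” purely as an assumption for illustration.

```python
def elo_update(rating, opponent, scored, k=32):
    """Standard chess Elo update (Tinder's real formula is unpublished;
    this is the rating system its 'Elo score' was named after).
    scored: 1.0 for a 'win' (e.g. a right-swipe received), 0.0 for a loss.
    k controls how fast ratings move after each result."""
    expected = 1.0 / (1.0 + 10 ** ((opponent - rating) / 400))
    return rating + k * (scored - expected)

# A 1200-rated profile gets a 'win' against a 1400-rated one: the
# upset against a stronger opponent earns a larger jump.
print(round(elo_update(1200, 1400, 1.0)))  # 1224
```

The key property is visible in the formula: beating (or being chosen by) someone rated far above you moves your score much more than beating someone rated below you.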
Is there an algorithm for unlocking human value today and tomorrow?
At the heart of our human value proposition is asking this question of ourselves and our lives: “What cannot be turned into an algorithm?”; or “What in my world would be difficult or too expensive to turn into an algorithm?” The answer to this question is the start of your journey of redefining your value.
Firstly, we must realise that what we are experiencing today is nothing new. It has been the essence of human social evolution since we first figured out how to use our opposable thumbs. Once you settle this anxiety you are able to spot the opportunities to unlock your human value proposition.
Secondly, start to scan the AI- and algorithm-driven disruptions of your world. But now look past the disruption and try to identify the shades of “grey” that exist between them. It is in these scary “Here be dragons” shadows on the edges of our comfort maps that your human value lies.
Then ask how you can turn this insight into something others will pay for and see as adding value to their humanity.
For Business Leaders
A lot of what we took as “standard” business practice is being turned into algorithms.
The first responsibility you have is to ensure that your organisation is not left behind. What do you need to shift and change to make sure that you are keeping up with the most pervasive technology shifts in your industry?
Keeping up only means you don’t lose out; it doesn’t unlock competitive advantage.
So, your second responsibility is to look at the potential that can be unlocked when several of these systems, programmes, and applications begin to merge and work together. If you can be the architect of this merging, you are able to create value for your human clients that they would not otherwise see.
As a doctor, your medical knowledge can be turned into an algorithm, however your bedside manner cannot.
As a lawyer, your knowledge of the law can be turned into an algorithm, however your ability to connect with and move a judge or jury cannot.
As a business leader, your ability to run numbers and even manage staff can be turned into an algorithm, however, your ability to spot an opportunity for a client and build deep commercial relationships cannot.
In simple terms then, all of the above are meaningful demonstrations that, in a world that is increasingly computerised, human value still counts for a lot.