We humans and our societies in the 21st century increasingly see technological developments emerge that we find engaging, convenient, or useful, but which we are often essentially unable or unwilling to understand. This lack of understanding of technology is not a new phenomenon. In fact, the German philosopher Martin Heidegger noted as early as 1927 that the advent of the radio both broadened and disrupted his everyday reality. Heidegger also wrote that the consequences of the emergence of this new technology were completely impossible for him to fathom.
For us, too, as people of the 21st century, networked everyday devices such as cars, television sets, washing machines, MRI scanners, wind turbines and even lampposts are manifestations of new combinations of hardware, algorithms, software and data whose impact on our world is barely graspable. These new combinations, also known as cyber-physical systems, are able to autonomously interconnect themselves in networks, communicate in these networks, and interact with other, similar combinations. The data and information that these systems autonomously produce and communicate are rapidly changing our everyday world, as well as the existing economic system, from the inside. The new combinations of hardware and software bring a form of what economist Joseph Schumpeter called ‘creative destruction’. Cyber-physical systems are gradually replacing existing devices and simultaneously driving a process of creative destruction of our world and our economy, without us being able to properly monitor or understand this process.
Internet of Things
Slowly but surely, it is becoming common practice to use voice commands to operate devices such as a smartphone, a TV, or a Tesla. Without thinking twice about it, we use and pay for content from providers such as Netflix and HBO that is produced in the United States and shown on our networked smart TV. We watch the content wherever we want, whenever we want, and on any device we want, while telling our friends that we don’t really watch TV any more. We talk to Siri, Google Assistant, or Alexa, getting our device to order things for us or take care of mundane tasks such as switching the lights on and off. It has long ceased to seem alien to us to get suggestions on our smartphone about the energy generated by our solar panels. All these new capabilities are created by the communication and interaction between devices, enabled by the algorithms, software, and data available to these devices. It led the US National Institute of Standards and Technology (NIST) to state in March 2019 that “the phrases ‘cyber-physical systems’, or ‘CPS’, and ‘Internet of Things’, or ‘IoT’, have distinct origins but overlapping definitions, with both referring to trends in integrating digital capabilities, including network connectivity and computational capability, with physical devices and systems”. The increasing connections, communication, and interaction between these new combinations are converting our day-to-day reality, unchallenged, into an ever more interconnected and complex whole of data and information. One hundred years ago, grasping how an individual, stand-alone device such as a radio works was highly complicated for humans. Today, learning to understand how interconnected cyber-physical systems work is virtually impossible for us humans.
Our analysis of the individual device should no longer revolve around the device itself, but rather around its connections to other devices, as it is these connections that enable new functionality. Existing methods, ways of thinking, and forms of organisation, regulation, or governance no longer seem adequate in light of the rapid increase in the number and use of interconnected cyber-physical systems and their growing autonomy and intelligence.
Slowly but surely, we are entering a phase where new possibilities arise for collaboration and decision-making by these interconnected cyber-physical systems. In October 2018, NIST stated in a research report on blockchain technology that “the core ideas behind blockchain technology emerged in the late 1980s and early 1990s. In 1989, Leslie Lamport developed the Paxos protocol, and in 1990 submitted the paper ‘The Part-Time Parliament’ to ACM Transactions on Computer Systems. The paper describes a consensus model for reaching agreement on a result in a network of computers where the computers or network itself may be unreliable”. In a previous blog entry (February 2018), I referred to a collaboration project of Samsung and IBM in this area. This pilot project, called Autonomous Decentralized Peer-to-Peer Telemetry (ADEPT), was focused on the possibilities for collaboration between a specific cyber-physical system, in this case a washing machine, and multiple other devices in a specific and permissioned environment. Back in 2018, I wrote the following about this project: “The ADEPT project has led to a pilot of a blockchain of devices, where devices work together autonomously and make decisions about tasks or orders, etcetera. The approach of linking these devices using blockchain technology also further increases these devices’ level of autonomy.” Parts of the algorithms and software used in the project were later used by IBM as a basis in their development of Hyperledger blockchain technology. The pilot run by Samsung and IBM shows that the possibilities offered by blockchain technology can also be harnessed for reliable communication, consensus and decision-making, as well as for autonomously performed information transactions by and between autonomous cyber-physical systems. On the latter possibility, the Industrial Internet of Things Consortium stated the following: “Entities need to share information; they also need to keep it private. 
Distributed ledger technologies, such as blockchain, can be used as authentication providers. This means that more data can be shared because the provider has more confidence that the shared data will be restricted to the preselected groups. This could be used to provide attestation of edge elements and software, and track the provenance and completeness of the critical edge-hosted data” (2019:7). In Europe, there is also ongoing research into the possibilities for reliable information exchange between devices. In 2019, the European Blockchain Laboratory concluded the following: “Blockchain could be connected to new production trends or the ‘fourth industrial revolution’, which include other emerging technologies, from IoT to artificial intelligence and robotics, and new materials or additive manufacturing” (2019:29). Whether we like it or not, complexity will inevitably increase as more and more interconnected cyber-physical systems become able to autonomously and jointly make decisions on our behalf through an incalculable number of connections and based on algorithms, software, and data.
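The consensus idea Lamport formalised in Paxos can be illustrated with a deliberately minimal sketch in Python. This is not Paxos itself (which adds proposal numbers and a two-phase exchange to stay safe when several proposers compete); it only shows the quorum principle underlying it: a value counts as decided only once a majority of possibly unreliable acceptors has acknowledged it. The function name and example values are invented for illustration.

```python
def propose(value, acceptor_replies, quorum):
    """Single-round majority vote: the value is 'chosen' only if a
    quorum of acceptors acknowledges it. Unreachable or failed
    acceptors simply count as missing votes."""
    votes = sum(1 for replied in acceptor_replies if replied)
    return value if votes >= quorum else None

# Five acceptors, two of them unreachable: a majority of 3 still decides.
print(propose("start wash cycle", [True, True, False, True, False], 3))
# Only two acceptors reachable: no quorum forms, so nothing is chosen.
print(propose("start wash cycle", [True, False, False, True, False], 3))
```

Because any two majorities drawn from the same set of acceptors must overlap, two conflicting values can never both reach a quorum — the property that makes majority-based consensus dependable even on an unreliable network.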
In 1950, Alan Turing asked himself the following question: ‘Can Machines Think?’ Five years later, a group of American scientists wrote a proposal for a two-month study of what they called ‘Artificial Intelligence’, intended to answer this question. In their proposal, they stated “that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it” (1955:1). In the 65 years that followed, artificial intelligence developed through a great many highs and lows. In essence, the question is still how machines, as combinations of hardware (computers), algorithms, and programs (software), can learn from the data made available to such a combination. The foundations of this learning are still under discussion. Today, the key question is whether the owner of a new combination (such as Google, IBM, Amazon, Facebook, or Apple) can increase the computing power of the technology (for example with Tensor Processing Units) and keep combining the capabilities created by that increase with new and improved algorithms and software, so as to make the technology even better at ‘learning’ from analyses of even greater volumes of data. There is increasing discussion worldwide on whether this form of algorithm-based learning could ever match humans’ ability to learn. In this discussion, Russell stated the following in 2019: “The problem is right there in the basic definition of AI. We say that machines are intelligent to the extent that their actions can be expected to achieve their objectives, but we have no reliable way to make sure that their objectives are the same as our objectives” (2019:11). Machine learning in any form is a dimension of learning that differs from what we humans define as learning.
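What ‘learning from data’ means in the machine sense can be made concrete with a toy sketch: a single numeric parameter is nudged, step by step, in whatever direction reduces the error on the available data. The data and step size here are invented for illustration; real systems do the same thing with billions of parameters.

```python
# Toy illustration of machine 'learning': fit y ≈ w * x by repeatedly
# nudging w against the gradient of the squared error on each example.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # underlying rule: y = 2x

w = 0.0  # the machine's initial 'knowledge'
for _ in range(200):
    for x, y in data:
        error = w * x - y
        w -= 0.01 * 2 * error * x  # gradient step on error**2

print(round(w, 2))  # converges toward 2.0, the rule hidden in the data
```

The machine ends up with the rule that generated the data without ever being told it — which is all that ‘learning’ means here, and why it is a different dimension of learning than the human kind.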
The learning done by machines that are interconnected in networks and make decisions within these networks involves learning from the value associated with those decisions in order to subsequently make ‘better’ decisions. The step to machine learning by interconnected cyber-physical systems is therefore not as big as is often thought. The Paxos algorithm, for example, requires that nodes in a network learn the value that results from a joint decision-making process between those nodes. In an ever more complex world of interconnected cyber-physical systems, these systems are not only able to autonomously and independently make decisions based on algorithms, software, and data; they can also learn from the resulting value and use it to autonomously adjust and improve the decision-making process itself. These capabilities will lead to these new combinations drastically changing our lives and work over the coming years, with a far-reaching impact on us humans.
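The idea that networked devices can learn from the value of their joint decisions can be sketched in the same toy style: hypothetical devices repeatedly agree by majority vote on an action, observe the value the decision produced, and update their own estimates accordingly. The action names and reward values are invented for illustration, not taken from any real IoT protocol.

```python
ACTIONS = ["run off-peak", "run at peak"]
REWARD = {"run off-peak": 1.0, "run at peak": 0.2}  # off-peak is cheaper

# Three networked devices, each with its own value estimate per action.
devices = [{action: 0.5 for action in ACTIONS} for _ in range(3)]

for _ in range(100):
    # Each device votes for the action it currently values most.
    votes = [max(d, key=d.get) for d in devices]
    decision = max(set(votes), key=votes.count)  # majority decision
    observed_value = REWARD[decision]
    # Every device learns from the value the joint decision produced.
    for d in devices:
        d[decision] += 0.1 * (observed_value - d[decision])

print(max(devices[0], key=devices[0].get))
```

After a number of rounds, every device's estimate has shifted towards the action that consistently produced the higher value, so the joint decision-making process itself improves — decision, observed value, adjustment, better decision.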
This last statement takes us right back to the beginning. The current process of innovation creates new technological combinations and makes our world increasingly complex and harder to grasp for many. Clinging to ideas and knowledge in the traditional way, or turning a blind eye to how new technology creates possibilities that seem engaging, convenient, or useful, will not do; we need new knowledge to help us make sense of this development. As Heidegger said, we humans need to ‘relearn to think’ about the question of what the essence of this new technology is. This way of thinking will enable us to find new ways to understand technology and the ensuing possibilities and consequences for humans and society. And above all, ‘learning to think’ can help us understand what this technology means for us humans.
- Heidegger, M. (1927) Being and Time. Dutch Edition (1986) Zijn en Tijd. Nijmegen, Uitgeverij SUN (1998). ISBN 9063037945.
- Schumpeter, J. A. (1943/2003) Capitalism, Socialism & Democracy. New York, Routledge. ISBN 0203202058.
- Greer, C., Burns, M., Wollman, D. and Griffor, E. (2019) Cyber-Physical Systems and Internet of Things, NIST Special Publication 1900-202. https://doi.org/10.6028/NIST.SP.1900-202
- Yaga, D., Mell, P., Roby, N. and Scarfone, K. (2018) Blockchain Technology Overview. NISTIR 8202. https://doi.org/10.6028/NIST.IR.8202
- Lier, B. van (2018) Blog entry - Blockchain between edge and fog computing. February 2018.
- Industrial Internet Consortium (2019) The Edge Computing Advantage. An Industrial Internet Consortium White Paper, Version 1.0. 2019/10/24.
- European Commission. Joint Research Centre. (2019) Blockchain Now and Tomorrow. Assessing Multidimensional Impacts of Distributed Ledger Technologies. ISBN 9789276089773.
- Turing, A. M. (1950) Computing Machinery and Intelligence. Mind 59 (236): 433-460.
- McCarthy, J., Minsky, M. L., Rochester, N., Shannon, C. E. (1955) A proposal for the Dartmouth Summer Research Project on Artificial Intelligence.
- Russell, S. (2019) Human Compatible. Artificial Intelligence and the Problem of Control. New York, VIKING. ISBN 9780525558613.
- Heidegger, M. (1954/2004) What is Called Thinking? Harper Perennial, New York. ISBN 006090528.