
Blockchain and the Autonomy of Systems

Written by Ben van Lier - 25 April 2017

The Oxford English Dictionary defines “autonomy” as “the right or condition of self-government or having its own laws”. The autonomy of an individual system can be determined based on the extent to which it is capable of self-government. When an individual system is connected to other systems in a temporary or permanent whole with joint decision-making capability, the autonomy of such a system of systems can be considered to be determined by rules and laws that apply specifically to this whole and its constituent parts, and which are captured in algorithms. The shift of decision making from humans to interconnected machines raises numerous questions, for example, as Van Lier and Hardjono [1] note, about the “necessary trust between participants in such networks”.


One example of a system of systems with its own laws and rules is the global bitcoin ecosystem. This ecosystem is partly based on Nakamoto’s [2] idea that a new global payment system is needed and must be based on cryptography instead of on trust between people and organisations. By basing information transactions within the payment system on cryptography, you can, according to Nakamoto, enable “any two willing parties to transact directly with each other without the need for a trusted third party”. In the bitcoin ecosystem, the use of cryptography and other algorithm-based rules replaces the trust between people and organisations with trust in the functioning of the system of systems and its information transactions.
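The core idea of replacing trust in a third party with trust in cryptography can be illustrated with a minimal sketch of a hash-chained ledger, in which each block commits to its predecessor via a cryptographic hash. This is an illustrative simplification using only Python's standard library; the actual bitcoin system additionally relies on digital signatures and proof-of-work, which are omitted here.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash the canonical JSON encoding of the block's contents.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    # Each new block records the hash of the previous block,
    # so tampering with any earlier block invalidates the chain.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def chain_is_valid(chain: list) -> bool:
    # Verify that every block's stored prev_hash matches the
    # recomputed hash of the block before it.
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
append_block(chain, [{"from": "A", "to": "B", "amount": 5}])
append_block(chain, [{"from": "B", "to": "C", "amount": 2}])
print(chain_is_valid(chain))  # True

# Tampering with a past transaction breaks the chain of hashes.
chain[0]["transactions"][0]["amount"] = 500
print(chain_is_valid(chain))  # False
```

The point of the sketch is that the validity of past transactions no longer depends on trusting any party; it can be checked mechanically by any participant.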

Lamport’s PAXOS algorithm [3] is, as he himself explained, a procedure where decision making is based on consensus and getting a “majority of legislators” to approve proposed laws and rules needed for the functioning of a system of systems. Problems that may arise in the decision-making process about such laws and rules are, according to Lamport, comparable to those that can arise in fault-tolerant decision making in distributed (computer) systems. In Lamport’s view, decision-making procedures executed at any nation’s parliament are similar to decision-making processes between different individual and interconnected (computer) systems that have to run a task jointly as a system of systems. Both Nakamoto and Lamport have claimed that trust between people and between people and organisations can be replaced by trust in the functioning of algorithms that enable interconnected systems to operate autonomously and reliably and communicate with each other, and hence allow these systems to make consensus-based decisions on information transactions that have to be executed. When it comes to decision making by interconnected systems, it is, like with a parliament, important to know on which assumptions or choices (included in the algorithms used) the decision-making process is based.
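Lamport's parliamentary metaphor can be sketched in a few lines: a proposal is adopted only once a strict majority of "legislators" (nodes) has accepted it. This is only the quorum idea at the heart of PAXOS, not the full two-phase protocol with proposal numbers and acceptors; the function name and vote encoding below are illustrative assumptions.

```python
def majority_accepts(votes: list) -> bool:
    # A proposal is chosen once a strict majority of nodes accepts it.
    # Because any two majorities of the same set of nodes must overlap,
    # two conflicting proposals can never both be chosen.
    accepted = sum(1 for v in votes if v)
    return accepted > len(votes) / 2

# Five nodes vote on a proposed value; three accept.
print(majority_accepts([True, True, True, False, False]))  # True

# With only two of five accepting, no consensus is reached.
print(majority_accepts([True, True, False, False, False]))  # False
```

The overlap property mentioned in the comment is what makes majority-based consensus fault-tolerant: even if some nodes fail, any quorum that forms later necessarily shares at least one member with any earlier quorum.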


Feenberg [4] defined such a form of autonomy as operational autonomy, i.e.: “the power to make strategic choices among alternative rationalizations without regard for externalities, customary practice, workers’ preferences, or the impact of decisions on their households”. To Feenberg, what is particularly important to consider for this technology-based form of decision making is what ideology or assumptions underlie the rules, code, or algorithms needed for such decision making. In the cases of bitcoin and PAXOS, for example, the underlying basis is the ambition to create a better global payment system based on interconnected systems and cryptography (bitcoin) and the ambition to create an environment where interconnected autonomous systems can autonomously make consensus-based decisions (PAXOS).

According to Barber and Martin [5], trust in the autonomy of interconnected systems is determined by “the degree to which the decision-making process, used to determine how that goal should be pursued, is free from intervention by any other agent”. In their view, autonomy within a system of systems is a given, and individual systems in any manifestation are therefore self-governing. Autonomy of interconnected systems is, according to them, concentrated around active use of shared capabilities for decision making in realising a specific objective, without other systems being able to influence this. A report published by the US Defense Science Board [6] stated that the autonomy of systems should be considered a result of the delegation of decision making to an autonomous entity to enable this entity to independently execute a task within predefined boundaries. According to the authors of this report, to be autonomous “a system must have the capability to independently compose and select among different courses of action to accomplish goals based on its knowledge and understanding of the world, itself, and the situation”.
Wallach [7] sees the increasing autonomy and independent decision-making capabilities of interconnected systems as a threat to the fundamental given that humans are responsible and accountable for possible damage caused by this form of technology. According to Allen and Wallach [8], the current generation of software that interconnected systems use for decision-making procedures is not yet sufficiently developed in an ethical sense, meaning that these systems are insufficiently able to include and process an explicit representation of moral thought in their decision making. This latter point means, according to Gunkel [9], that the development of autonomous and interconnected systems must also look at the need for ethics and moral actions by these systems, which led him to state that “deploying various forms of machine intelligence and autonomous decision making in the real world without some kind of ethical restraint or moral assurances is both risky and potentially dangerous for human beings”.

Decision making

Today’s blockchain technology hype is focused largely on the possibilities and opportunities that this new form of technology seems to offer. In our thinking, we are readily willing to swap human-created trusted third parties for interconnected autonomous technological systems. These interconnected systems can, so we think, jointly make decisions, enter into contracts, and perform information transactions based on their own laws and rules as captured in algorithms for these systems. As humans, we trust the growing autonomy with which systems of systems are able to make decisions based on man-made algorithms and software, and subsequently perform a range of information transactions based on these decisions. At the same time, we show little interest in the assumptions and choices made by humans in creating the algorithms and software that enable the autonomy and decision making of these systems of systems. Our trust in the functioning of these systems of systems is, therefore, not based on our knowledge of the laws and rules underlying their decision making. Do we, however, not owe it to ourselves to also ask, with respect to this readily accepted shift of responsibility from humans to technology, just like Hannah Arendt [10] did: “what is the nature of the sovereignty of such an entity?” Should we not focus more on the assumptions and choices that went into the algorithms and software that enable the decision making by these systems of systems and shape the information transactions they perform? Does this hype not ultimately raise the question whether this lack of interest in the essence of this kind of technology could also lead to outcomes that are less positive or different from our current expectations?

  • [1] Lier, B. van and Hardjono, T. (2011) A Systems Theoretical Approach to Interoperability of Information. Systemic Practice and Action Research, 24, pp. 479-497
  • [2] Nakamoto, S. (2008) Bitcoin: A peer-to-peer electronic cash system
  • [3] Lamport, L. (1998) The Part-Time Parliament. ACM Transactions on Computer Systems, vol 16 no 2, pp. 133-169, May 1998
  • [4] Feenberg, A. (2002) Transforming Technology. A Critical Theory Revisited. Oxford University Press ISBN 0195146158
  • [5] Barber, K. S. and Martin, C. E. (1999) Agent Autonomy: Specification, Measurement, and Dynamic Adjustment. In: Proceedings of the Autonomy Control Workshop at Autonomous Agents. (Agents ’99), pp. 8-15, May 1, 1999 Seattle.
  • [6] Defense Science Board (2016) Report of the Defense Science Board Summer Study on Autonomy. Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, Washington D.C., June 2016
  • [7] Wallach, W. (2015) A Dangerous Master. How to Keep Technology from Slipping Beyond Our Control. Basic Books. ISBN 9780465058624
  • [8] Allen, C. and Wallach, W. (2014) Moral Machines: Contradiction in Terms, or Abdication of Human Responsibility? In: Robot Ethics. The Ethical and Social Implications of Robotics. Edited by Patrick Lin, Keith Abney, and George A. Bekey. pp. 55-68. MIT Press. ISBN 9780262526005
  • [9] Gunkel, D. J. (2012) The Machine Question. Critical Perspectives on AI, Robots, and Ethics. The MIT Press Cambridge, Massachusetts. ISBN 9780262017435
  • [10] Arendt, H. (Edition 2006) Eichmann in Jerusalem. A Report on the Banality of Evil. Penguin Classics. ISBN 9780143039884

Ben van Lier works at Centric as Director Strategy & Innovation and, in that function, is involved in research and analysis of developments in the areas of overlap between organisation and technology within the various market segments.

