The Intelligence of Organisations

How will our understanding of organisations change as they become more integrated with machines?

To adapt, an organisation must learn.  Organisations learn in response to changing environments through feedback loops that provide knowledge of their surroundings and allow the system to adjust.  Adaptation and learning through awareness of one’s environment is a primary signal of intelligence (Piaget, 1963), suggesting that the behaviour of organisations is itself intelligent.

The relational processes and system structures that facilitate this kind of organisational intelligence are studied in the field of Organisational Learning (OL), which examines how knowledge is created and used to drive adaptive behaviour in response to changing contexts (Basten & Haamann, 2018).  The conceptual domain of OL is broad; one of its most cited areas, however, is the feedback-loop learning proposed by Argyris and Schön (1978, 1996) and Bateson (1972), which describes how organisational systems both learn, and learn to learn (Tosey, Visser, & Saunders, 2012).
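To make the feedback-loop idea concrete, here is a minimal toy sketch, not drawn from the cited works: single-loop learning corrects actions against a fixed goal, while double-loop learning questions the goal itself. All function names and numbers are hypothetical, chosen only to make the distinction tangible.

```python
# Toy illustration of single- vs double-loop learning.
# Everything here is a hypothetical sketch, not the authors' model.

def single_loop(action, outcome, target, step=0.5):
    """Single-loop: correct the action so the outcome moves toward a fixed target."""
    return action + step * (target - outcome)

def double_loop(target, outcomes, tolerance=2.0):
    """Double-loop: question the governing variable (the target) itself
    when repeated corrections still leave outcomes far from it."""
    avg_gap = sum(target - o for o in outcomes) / len(outcomes)
    if abs(avg_gap) > tolerance:
        # Revise the goal rather than just the behaviour.
        return target - avg_gap / 2
    return target

# A system that only single-loops keeps chasing the same target;
# a system that also double-loops "learns to learn" by re-setting it.
target = 10.0
outcomes = [4.0, 5.0, 4.5]            # persistent shortfall despite corrections
new_target = double_loop(target, outcomes)
print(new_target)                     # target relaxed toward what is achievable
```

The design point of the sketch is only that the two loops operate at different levels: one adjusts behaviour, the other adjusts the standard against which behaviour is judged.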

The ability to ‘learn to learn’ has driven artificial intelligence (AI) development since Turing (1950) proposed that machines be designed to emulate the learning of a child.  AI development has taken multiple routes towards this goal, including attempts to mimic our neuronal brain architecture, often with astounding success.  There are, however, still many hills to climb, such as ensuring that AI applications are not aimed simply at solving a specific problem but at enhancing organisation-wide success.  Another challenge is the pressing social need to develop ethically responsible frameworks for implementing AI.  The fields of OL and AI are often treated as disconnected; here we elaborate a vision for connecting them by building organisational learning networks that entwine human learning with machine learning.  We propose that by comparing OL and AI architectures, workers in the field may discover better ways to integrate AI agents into our organisations: methods that are both ethically responsible and that most effectively enhance an organisation’s outcomes.

There are strong symmetries between the architectures of AI and OL; both rose from the work of the cyberneticians of the mid-twentieth century (Mirvis, 1996). Cybernetics focuses on the relationships and mechanisms of a system rather than its components (Bucher, 2018).  It provided foundational work for numerous fields, from engineering applications such as AI (Novikov, 2016) to management applications such as OL (Tosey et al., 2012).  Although the fields of AI and OL took divergent routes, they retained a commonality of method.  By looking at the relationships between agents (e.g., individuals, groups of people, or artificial neurons) and the structures that connect them (e.g., communication networks or artificial neural networks), both models attempt to enhance how a system learns, and learns to learn.

Artificial neural networks learn system-wide through individual nodes working in relationship with one another (Basheer & Hajmeer, 2000); organisations function very similarly.  Organisations adapt at the unit and sub-group level by receiving and outputting information to achieve the best outcomes, whether as perceived by the actively learning agent or as determined by environmental performance measures.  They deal with noisy inputs and form layered, distributed networks that produce system-wide outcomes, often needing to maintain agility in changeable markets.  Recently, AI research has been “interested in the more abstract properties of neural networks, such as their ability to perform distributed computation, to tolerate noisy inputs, and to learn” (Russell & Norvig, 2016, p. 739); our proposal meets this research interest.  Using knowledge of the relational processes correlated with effective OL offers a different perspective on the problem from that of the mechanics of a single brain.
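The analogy can be illustrated with a toy sketch, again not drawn from the cited works: a single adjustable node (the simplest ‘agent’) updates its connection weights from noisy environmental feedback and gradually recovers the environment’s underlying rule. All data and values here are synthetic, chosen only for illustration.

```python
# Minimal sketch of learning from noisy feedback, using only the standard
# library. One linear node adjusts its weights after each interaction with
# a noisy environment -- a stripped-down analogue of the feedback loops
# described above. All data is synthetic.
import random

random.seed(0)

def environment(x1, x2):
    """The hidden rule the environment follows: y = 2*x1 - 1*x2, plus noise."""
    return 2.0 * x1 - 1.0 * x2 + random.gauss(0, 0.1)

w = [0.0, 0.0]          # the node's adjustable connections
lr = 0.05               # learning rate

for _ in range(2000):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    y = environment(x[0], x[1])
    pred = w[0] * x[0] + w[1] * x[1]
    err = y - pred                                   # feedback signal
    w = [w[i] + lr * err * x[i] for i in range(2)]   # local adjustment

print(w)  # weights settle near the hidden rule despite the noise
```

No single update ‘knows’ the rule; the knowledge accumulates in the connections through repeated feedback, which is the property the organisational analogy leans on.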

There are two possible advantages to exploring the similarities between OL and AI structures.  From a systems-based perspective, we can observe OL structures, note which facilitate more successful outcomes, and make more informed decisions about where AI-based solutions would best serve the fitness of the whole organisation.  A holistic approach to understanding the organisational impact of adding AI, essentially as another agent rather than just a tool thrown in to solve a specific problem, could ensure greater overall organisational outcomes.  Most organisations learn and adapt even without adhering to prescribed systems-based approaches (Basten & Haamann, 2018), but by applying theoretical concepts developed through OL, organisations often learn more efficiently, improve decision-making, and achieve better outcomes (e.g. Cheng, Niu, & Niu, 2014; Real, Roldán, & Leal, 2014; Yang, 2007).  Utilising OL tools can also inform decisions about how and where to best implement AI solutions.

Another advantage of implementing AI in conjunction with OL is our ability to communicate with individuals or groups within an organisation to better understand the ethical impacts of introducing an AI agent.  AI can be a powerful tool for improving organisations when wielded correctly, but the ‘black-box’ nature of AI processes has too often come at deep ethical cost (e.g. O’Neil, 2016; Pasquale, 2015).  It is challenging for engineers to consider all of the ethical impacts of applying AI to a human system when they are writing code to solve a specific problem.  By delving into an organisation and speaking directly to the human agents who will be working with the AI agents, we may be able to build more ethically responsible application frameworks.

Each of us has our own value system and understanding of what is ethically ‘correct’; when we operate as a collaborative unit, we can explore the commonalities in our ethical standpoints to arrive at a value set that better suits the collective whole.  Collective learning is central to ethically responsible organisational processes (Dixon, 2017) and is a style of learning well suited to OL techniques.

Organisations today need to adapt faster than we can design, test and implement organisational structures.  In most cases it is not appropriate to replace human learning in an organisation with machine learning, but supporting human decisions and learning with AI can result in enhanced organisational outcomes (Bohanec, Robnik-Šikonja, & Borštnar, 2017).  Using AI tools to facilitate and enhance human organisational adaptation is clearly the future we face; we need to do this ethically and with consideration for system-wide success.

These thoughts pose the question:

Can we use our understanding of the structures of artificial neural networks to reveal how the relationships between people and groups in an organisation mirror the structures of a societal mind?

Let me put that more simply: can we follow Perseus’ lead and use AI as a mirror through which to view Medusa, where Medusa represents our value sets, inclusive of the good, the bad, and the ugly?

In this way, I propose that using AI as a mirror of our organisations, one that shows us our biases, values, and ethics, is the 21st-century manifestation of enoptromancy. In the middle of 2020, I think we could all use a little magic!

Perseus beheads Medusa. Luigi Ademollo (1764–1849). Illustration from Ovid’s Metamorphoses, Book IV. Florence, 1832.

This thought piece is based on an abstract I submitted to a Cambridge conference on Kinds of Intelligence and was co-authored with Ross A Coleman and Lina Markauskaite.
