First questions: sociotechnical approaches to Ethical AI.

The following post is a reflection response to an article published by Michael Klenk on 15th June 2020 titled “Are We Asking The Right Questions About The Ethics Of Technology?”

I recently read an interesting opinion piece titled “Are we asking the right questions about the ethics of technology” (Klenk, 2020) that argued for a different first approach to ethical AI. I agree with the author on two of his key points. Firstly, simply adding ethics on top of technology is too simplistic. Secondly, we should consider a problem more carefully before throwing the most advanced tech we have at our disposal at it. Perhaps what has become infamously emblematic of the inadequacy of the throw-tech-at-it approach is the COMPAS recidivism platform (ProPublica, 2016), which evidenced poor human decision-making in applying the wrong technology to a social problem. I agree with the author that more careful framing of the problem and a more considered approach to applying technology is likely to result in more ethically desirable outcomes.

Where I diverge in opinion is at the point where the author asks “Why technology?” for any given problem. I believe that question would be better put as “Why this technology, in this way?” It may be that Klenk and I are in furious agreement but speaking from different angles. At the encouragement of Maria Luciana Axente, I am writing this short response to expand on my different take on the topic.

To my mind, technology and humans cannot be so easily separated. Klenk does note that technology is as old as the stone tools used by early hominids, but his writing suggests he sees humans and technology as capable of existing independently in the modern social context. I don’t think that division is so easily achieved when we are considering complex social problems, and I don’t think many social problems can be treated at any appreciable scale without considering the interdependence of people and technology.

Even the example of a lone person on a desert island quickly shows how difficult it is for humans to completely divest themselves of technology when developing survival solutions. Take, for example, Tom Hanks in Cast Away: a solo human washed ashore on an empty island, existing for some days without any technology. That independence from technology quickly dissipates as he uses sticks and coconut matting to make a fire. From that moment he is part of a system built on the relationship between himself and the tools he used to create fire. Even on a desert island with only a volleyball for a friend, a person is drawn to become part of a human-technology system to survive.

The deeply intertwined relationship between humans and their technology has made us – and technology – what we are today. The human-technology relationship is fundamental to our current state of humanity and all the complex social systems we have created, including agriculture, money, cities, and now AI. Harari (2015) provides numerous examples of our social development with technology in his best-seller Sapiens. Emery, Bamforth, and Trist of the Tavistock Institute explored the nature of sociotechnical systems in the 1940s and 1950s, with a specific focus on the mechanisation of coal mining, which negatively impacted the lives of the miners and their families. The work of the Tavistock group highlighted that systems involving both the social and the technical contain both linear and non-linear relationships. Uncertainty and unpredictability are often evident in these systems, an observation that aligns with the unintended negative ethical consequences that sometimes arise when AI is applied to social systems. Sociotechnical approaches to human problems can better inform ethically responsible applications of new digital technologies by helping us understand the system we seek to intervene in.

When considering first approaches to solving social problems, we can’t afford to think of any social group as being disconnected from our hyperconnected web of 21st-century technology. Whether you are tucked away in an ivory tower of silicon wearing pricey tech-boi sneakers or living in an area of poverty and disadvantage, your social environment is a by-product of the inequitable social distribution of technology. That technology may be as unglamorous yet life-saving as clean toilets and sewage systems, or as flashy as the latest Tesla, but your access to technology and your relationship to it set the stage for the human problems you experience. The truth is we cannot separate ourselves from technology.

Klenk suggests that some problems should be better solved without technology and provides the example: “Life expectancy and quality of life, for example, can be increased by distributing education, health-care, and wealth more equally.” I find it difficult to see how any of these things could be effectively achieved in our current society without the use of technology.

I do agree, though, that adequate understanding of the problem may be lacking before a technology and a method of applying it are selected. Where Klenk and I diverge is that I believe an initial understanding of the problem should include consideration of how technology brought us to the problem in question and how it is currently connected to the social experience. We can’t ignore technology if we want to scale our solutions beyond perhaps a group meditation, a belly rub with your pet, or a walk in the woods. Distributing any benefits, including education, wealth, and health care, to even a small fraction of our huge populations requires technology. In short, I believe that a sociotechnical approach to addressing complex social issues is the framework within which we should ask “which technology, applied how?”.

It is not that I disagree with the author’s assertion that we should frame problems more carefully; that is eminently sound advice. I am advocating that we go one step deeper and understand how inextricably linked we already are with technology when first seeking to understand the problem. To correctly frame a problem, it is necessary to know how it is situated, what relationships connect to it, and what its dynamic patterns are. We must look broadly at how the problem was created and what system relationships perpetuate, and are perpetuated by, the issue. On the point of problem framing, I feel that Klenk and I agree; where we differ is that I can imagine only rare exceptions that would not include technology in that system framework.

Perhaps the author did not mention the need to consider social and technological interdependence because he considered it an a priori assumption. Maybe we allow ourselves to become so accustomed to our relationship with technology that we become like McLuhan’s fish, unaware of the water it swims in. Failing to see our pre-existing relationships with technology is failing to see the forest for the trees; without drawing those relationships, we cannot see the whole picture surrounding the individual. Even if we can hold the concept of “forest” when closely examining a “tree”, we should define the forest’s system interactions to best understand the experience of each tree. If anything, our ethical stumbles when applying digital technologies to social systems should have taught us the necessity of examining our pre-existing relationships with technology. Failing to take this very first step, to explicitly call out critical aspects of the sociotechnical system surrounding the problem, builds in the very first bias.

Klenk provides an example of COVID tracing apps, starting with:

“First came a problem: The COVID-19 pandemic created painful choices between ‘saving lives’ (i.e. protecting people from the virus) and ‘saving livelihoods’ (i.e. protecting people from the economic, social, and psychological effects of combating the virus).”

Already this statement skips over the technological drivers of the pandemic; for example, how technology provided COVID-19 with numerous high-speed global travel options at a scale never previously enjoyed by any pandemic-inducing virus. It also skips over which sociotechnical systems made some people more likely to come into early contact with the virus: those who can work online during a pandemic, for example, are at a significant advantage over those who cannot. The sociotechnical systems we are embedded in, and impacted by, create a set of circumstances for unequal experiences of a pandemic. Initial sociotechnical questions may be as simple as: who can afford a smartphone? Or perhaps: who chooses not to use a smartphone to avoid government surveillance? Some women may feel less inclined to use tracking apps of any kind if they fear the prying eyes of violent domestic partners. We can ask who can more easily psychologically adapt to Zoom calls, and who lacks adequate internet or hardware access for Zoom. Taking the time to map, to explicitly call out, differing sociotechnical experiences, and consequently differing pandemic experiences, should be our first step.

The author then states: “Then came technology: Digital contact tracing.” Followed by: “Then technology was constrained by ethical considerations.” We can see, though, that we had already made unconscious ethical choices when we failed to look at the pre-existing sociotechnical systems of the people we ask to use contact-tracing apps. Whilst I agree that applying contact-tracing apps to the problem may not be the best use of, or even the correct form of, technology, to assume that technology wasn’t already influencing the problem puts us on dangerous ground. As Klenk notes, a better understanding of the inequalities of the problem may help us see that contact-tracing apps can exacerbate those inequalities. I am suggesting that we need to understand the inequalities of sociotechnical systems in order to address them.

I did enjoy Klenk’s piece and thought it made many good arguments. My disagreement lies in the fact that technology and its impact are already present in virtually every social problem that humans face today. We must consider that pre-existing condition as a first step in ethically framing a problem. It is true that the latest whizz-bang technology may not be the best solution, but we cannot reasonably imagine widespread solutions in the 21st century that do not employ technology somewhere in the system design. By starting with a sociotechnical systems approach as a first step, we can better frame the problem and understand which tools to use to fix our difficult social problems.