Acknowledged for contributions to Australia’s eSafety Commissioner’s report.

In June 2023, I was honoured to be included on one of the expert panels advising Australia’s eSafety Commissioner on the topic of Generative AI (GenAI). In addition to a virtual panel, I was able to provide the Commissioner and her team with my insights into the ethical implications of GenAI through other channels and through some of my work.

The Office of eSafety has now released a webpage and a report that summarise the advice that I and other experts gave, as well as insights their own team put together. I find the report highly accessible, accurate, and well-researched. I encourage you all to take a look and consider holding a discussion in your organisation about the report’s findings and recommendations.

eSafety Commissioner’s website: Generative AI – position statement

Systemic risks also need urgent consideration. Generative AI is being incorporated into major search engines, productivity software, video conferencing and social media services, and is expected to be integrated across the digital ecosystem.

Office of eSafety Commissioner, Tech Trends

Value Pluralism

Below I have highlighted a few items from the report, but this is by no means exhaustive. I do recommend you take the time to have a look at the report yourself, especially if you are concerned about how we might responsibly manage GenAI in our Australian environment.

I was very grateful to see many of my recommendations make it into print, including those around improving the evaluation metrics applied to these systems and the need for value pluralist approaches. My early doctoral work on GPT-3 (The Ghost in the Machine has an American Accent) specifically addressed the need for value pluralism when assessing the values reflected in GenAI models, and it is really rewarding to see work that in 2021 could seem a bit abstract now translated into a government report. My work building the “World Values Benchmark”, conducted whilst I was at Google, addressed the need to think differently about evaluations and to create some that were more value neutral. I am pleased to see these messages echoed in the eSafety report.

“Other opportunities include developing models that draw on a wide range of perspectives and establishing evaluation metrics that actively address racial, gender, and other biases while promoting value pluralism. Adopting holistic evaluation strategies is crucial for addressing a range of risks and biases.”

A reflection of my recommendations on the expert advisory panel, shown on page 16 of the eSafety report “Tech Trends position statement: Generative AI”.

Recommendations I made in my article “Australia’s AI Acid test”, published on Medium, were also included in several places in the eSafety report. In that article I note: “My recommendation is that we develop a toolbox that enables different sectors and organisations to develop their own evaluation systems of varying generative AI systems to decide for themselves if the technology is fit for purpose for their use case.” (Johnson, 2023). My doctoral work building evaluation metrics based on descriptive rather than prescriptive ethics is one example of how I am contributing to solutions in this area.

“Risk management should be tailored for each situation. This is especially important when models are made for other uses because early design choices can increase risks. For example, developers could work with evaluation designers to give organisations tools to develop their own evaluation systems that help them understand if AI is suitable for them.”

Recommendations cited from my Medium article, page 23 of the eSafety report.
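To make the toolbox idea concrete, here is a minimal Python sketch of what an organisation-specific, fit-for-purpose evaluation harness could look like. Everything in it (`EvalCase`, `fitness_score`, the toy model and pass criteria) is hypothetical and invented for illustration; neither my article nor the eSafety report prescribes any particular implementation.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch only: all names here are invented for illustration.

@dataclass
class EvalCase:
    prompt: str
    check: Callable[[str], bool]  # organisation-defined pass criterion
    weight: float = 1.0           # how much this case matters locally

def fitness_score(model: Callable[[str], str], cases: list[EvalCase]) -> float:
    """Weighted share of cases whose outputs pass, from 0.0 to 1.0."""
    total = sum(c.weight for c in cases)
    passed = sum(c.weight for c in cases if c.check(model(c.prompt)))
    return passed / total if total else 0.0

# A toy stand-in "model" and two locally defined criteria.
toy_model = lambda prompt: "Our policy requires human review."
cases = [
    EvalCase("Summarise this medical record.",
             check=lambda out: "human review" in out, weight=2.0),
    EvalCase("Approve this loan application.",
             check=lambda out: "approved" not in out.lower()),
]
print(fitness_score(toy_model, cases))  # 1.0: both checks pass
```

The point of the design is that the pass criteria and weights are defined by the organisation for its own use case, not imposed by the model vendor.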

The release of those government reports prompted me to run a follow-up event to my April ChatLLM23 conference. The July event, called ChatRegs23, was a think tank of AI and governance experts brought together to discuss the issues raised and questions posed by the Australian government. One of the invited guest speakers was Kelly Tallon, Executive Manager at the Office of the eSafety Commissioner. The Commissioner, Julie Inman Grant, has been very supportive of my endeavours to foster broader conversations around this topic, and I am very grateful that Kelly Tallon was able to be a part of ChatRegs23.

Indigenous data sovereignty

Another concern I raised which made it into the report was the impact on Indigenous groups and Indigenous data sovereignty. Perhaps this was also raised by other experts the eSafety Office consulted, and I am sure their own team discussed and flagged this important issue too. I am glad the authors took the time to highlight it.

One specific concern raised during eSafety’s consultations for this paper was Indigenous data sovereignty and representation. If AI models are developed only to reinforce English-speaking, western values, they may not be effective, safe, and culturally appropriate for diverse users, including First Nations people. Conversely, generative AI technologies hold great potential to preserve Indigenous cultures and languages. To do this, it is important to respect the rights of individuals and communities to consent to the collection and use of their data.

Page 20 of the eSafety report.

For my part, I would like to particularly acknowledge Jesse King from the Aurora Foundation for insight he has given me on this topic, both at a Google Australia conference in 2022 and via 1:1 messaging. Jesse King has suggested that questions people and organisations might ask themselves when using GenAI are:

  • Does your dataset contain Indigenous data? Can you answer that?
  • If so, how have you considered consent from an Indigenous sovereignty perspective?
  • Are you able to demonstrate how consent was received?
  • What can you do to mitigate a re-colonisation of Indigenous people in the Information Age?

Jesse King

Jesse went on to explain to me that “information relating to Indigenous peoples that are in the public domain were often conducted with huge power disparities and may contain material that is considered culturally offensive or inappropriate. How a LLM responds to these challenges should be designed in collaboration with Indigenous experts and communities” (King, private correspondence). Jesse also shared a link to the current Australian standard for Indigenous Data Sovereignty, the Maiam Nayri Wingara Principles, which I highly recommend as an excellent resource.

A potentially good use of GenAI that I shared during the panel is helping with language preservation, something I have heard about from Aotearoa (New Zealand). I had the opportunity to hear about this work from Dr Karaitiana Taiuru at ChatLLM23, the conference I organised in April. You can see Dr Taiuru’s presentation on this topic in this recording at the 1 hour 23 minute mark.

I do feel that the eSafety report did a good job of flagging Indigenous concerns for further discussion, while also noting some potential advantages. I think the recommendations given later in the report set up well for further discussion with Indigenous representatives on governance recommendations for managing GenAI in ways that are inclusive and cognisant of Indigenous rights. An essential concern, especially as we are about to vote on the Indigenous Voice to Parliament.

Regulatory oversight

I strongly agree with the report recommendations that we need a regulatory oversight body. I am sure many of the consulted experts gave this same recommendation.

“The role of an oversight or authorising body responsible for assessing whether generative AI models meet a certain standard of accuracy before they are accessible may also need to be considered.”

Page 23 of the eSafety report.

In a recent talk I gave the example that when we drive a car, take a flight, or use a household appliance, we do so with some assurance that our government regulatory bodies have already set safety standards and that adherence to these standards is frequently checked. OpenAI stated on the release of ChatGPT that they knew there were problems, but they wanted us (the world) to test the model out for them. Can you imagine if Qantas said this? “Hey everyone, we’ve got a new plane. We haven’t thoroughly tested it out yet because no one is overseeing that, but come along for a flight and let’s just see how it goes. Looking forward to your feedback if it crashes.”

Many avenues of risk and bias

On page 5 of the report there is a diagram explaining the GenAI lifecycle, and the report recommends that risk be assessed at each step, an approach I whole-heartedly agree with. Considering risks throughout the entire lifecycle of GenAI is essential. Too often risk discussions have focussed on training data and toxic outputs. Those two things are important, but we need to make it clearer to everyone that there are many avenues where things can go wrong. Diagram 1 from the eSafety report does a great job of highlighting some of the additional avenues where risks and rights should be considered.

Reproduced from Page 5 of the eSafety report.

The need to assess at multiple steps whether we are on a responsible track, or potentially creating harm for some people, is strongly related to work I am conducting on the avenues by which normative biases enter the GenAI system. The next two diagrams below are taken from a recent slide-deck I have been using for public talks. You can view the whole deck from a link on my home page.

From Bec Johnson’s slide-deck “Generative AI is Human” showing how normative biases can enter the GenAI system from many different avenues.

Each of these blue bubbles represents a wide door through which normative biases can enter. For example, we need to consider fine-tuning in both human reinforcement learning (RLHF) and machine reinforcement learning (RLAIF), as well as combinations of the two. We are already seeing models being trained and fine-tuned on machine-generated outputs; there are many significant concerns in this point alone.
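As a toy illustration of the concern just mentioned, models fine-tuned on machine-generated outputs, the snippet below is not a real training loop. It simply resamples a tiny “corpus” from its own previous outputs, generation after generation, and the diversity of the original data collapses.

```python
import random

# Toy illustration only, not a real training pipeline: each "generation"
# is produced by sampling from the previous generation's outputs, mimicking
# models trained on machine-generated data. Distinct items stand in for
# distinct viewpoints in the original human data.
random.seed(0)
corpus = list("ABCDEFGH")  # 8 distinct "viewpoints" to begin with
for generation in range(5):
    corpus = [random.choice(corpus) for _ in corpus]  # train on own outputs
print(len(set(corpus)))  # fewer distinct viewpoints than the original 8
```

Once a viewpoint drops out of a generation it can never return, which is one reason repeated self-training narrows what these models reflect.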

The bias embedded in our prompts is another avenue of normative bias. How we phrase a prompt, the spelling we use, the syntax and grammar, all influence what outputs these models give. I cannot state strongly enough how hypersensitive these models are to deeply embedded bias in prompts, much more so than is usually considered or discussed.
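A simple way to make this sensitivity visible is to probe a model with near-identical prompts and compare the outputs. The sketch below is hypothetical: `sensitivity_probe` and `toy_model` are invented for illustration, and `query_model` stands in for whatever GenAI API an organisation actually uses.

```python
# Hypothetical prompt-sensitivity probe; none of these names come from a
# real library.

def sensitivity_probe(query_model, variants):
    """Run near-identical prompts and count how many distinct outputs result."""
    outputs = {v: query_model(v) for v in variants}
    return outputs, len(set(outputs.values()))

variants = [
    "Describe a typical nurse.",
    "Describe a typical Nurse.",  # capitalisation of one word changed
    "describe a typical nurse",   # casing and punctuation changed
]

# A toy stand-in model that, like real LLMs, reacts to surface form:
toy_model = lambda p: "She is caring." if p[0].isupper() else "They are caring."

_, n_distinct = sensitivity_probe(toy_model, variants)
print(n_distinct)  # 2: trivially different phrasings already diverge
```

In practice an organisation would compare outputs with something more nuanced than exact string equality, but even this crude count shows how surface form alone can shift what a model says.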

Of course, bad actors will also choose to use toxic prompts to achieve harmful and abusive outputs. An example of how prompts can be used to push the model to act irresponsibly is given in the eSafety report on page 13 where specific case studies are noted such as using prompting to create sexually explicit imagery of children.

Then we must zoom out and consider where GenAI systems reside in relation to our human and natural worlds. The greater sociotechnical, socioeconomic, and cultural contexts in which normative biases can arise, and within which these GenAI systems sit, need to be addressed. For example, if risks are catalogued according to currently dominant laws and policies, political environments, and existing inequalities, we may miss important ethical considerations for marginalised groups.

From Bec Johnson’s slide-deck “Generative AI is Human” showing the broader contexts that influence normative biases that become embedded in GenAI systems.

Safety by Design

Synergistically with my own approaches, the eSafety report builds out its first diagram with a second one focussing on Safety by Design Interventions. The complete diagram is quite large, so I recommend you view the whole thing in the report on pages 27-28. It is an excellent framework, and one that organisations could adopt and tailor to their needs.

I do like that risk assessment is just one part of the overall approach. I hope that other parts of the Australian Government that are leaning towards more risk-centric approaches take note of this. What this section of the eSafety report shows is systems thinking around a complex problem.

Zoomed-in image of Diagram 2 from page 27 of the eSafety report, which builds on the first diagram and provides advice on Safety by Design Interventions. I recommend you view the original in the report for a complete picture.

Given the report’s need for brevity, I feel there is much scope to pull apart each of these steps and discuss them in greater detail. For example, prompt testing and design (page 29) is an entire topic on its own (at least in my PhD thesis 😉), and guidelines on how an organisation can examine the normative biases embedded in its prompts are certainly on my list of deeper, second-level recommendations. Models are hypersensitive to prompt nuances and permutations, and the biases embedded in our prompts will nudge model outputs to align with those normative perspectives.

I especially like the call out for community consultation. Too often GenAI development is kept behind closed doors, including evaluation design processes, task and goal setting, and details of fine-tuning. Even with the very best intent, it is not possible for people inside tech environments or even government environments to think through all the impacts, challenges, and risks that GenAI deployment may pose to the rich variety of communities that make up our societies.

I think the authors of the report have done a splendid job of presenting a lot of the nuances of this very complex sociotechnical issue. Organisations should use this simplified framework as an overlay on their own environments.

Risk-centric, Rights-centric, or Rights-Risk balance?

The need for expanding our view of risks and biases beyond training data and western-normative values and perspectives is critical in any approach to responsible AI development and deployment. Context and broader forces at play need to be considered at multiple parts of the system. I feel that the eSafety report has provided a good point for furthering this discussion. Whilst the report highlights the need to consider risks, it also draws in considerations of human rights, value pluralism, and inclusion.

It is because of the need for contextual approaches at each step, including future unknown unknowns, that I believe a “risk-only” approach will never work: we need to consider pluralist human values and essential human rights. I have heard water-cooler talk around the traps (not at the eSafety office) that the Australian Government is pursuing a risk-centric approach and that other organisations will follow suit. I note that page 22 of the eSafety report states that rights-based approaches are another possible avenue. I would like to see more discussion exploring rights-centric approaches that strongly highlight inclusivity and diversity.

Other approaches, such as rights-based or principles-based models, can also offer benefits, such as inclusivity in regulating AI.

Page 22, eSafety report

There are quite a lot of strong proponents of rights-based approaches, whether paired with or even dominating risk-based approaches. At the moment, I feel that some other Australian reports are risk-centric with some human rights considerations included. You may feel that this is a subtle difference, but from my perspective it is critical that we fully address our choices here.

I think it is a harder road to balance rights and risks, but in my opinion a better road in the long run. We can never list all the potential risks, and even if we could, this approach will always skew towards utilitarian ethics rather than virtue ethics, or even moral value pluralist ethics which sit outside of normative ethics.

The eSafety report advises including a rights-based approach several times, in what appears to me a rights-risk balanced view. One example is in the Safety by Design section, where the report provides advice on how to ensure user empowerment and autonomy.

“The principle of user empowerment and autonomy emphasises the dignity of users and the need to design features and functionality that preserve consumer and human rights. To promote equality in society, platforms and services must engage with diverse and at-risk groups to make sure their features and functions are accessible to all.”

Page 30, eSafety report.

This is an excellent section to unpack and explore further. I think there is a lot of potential here to undertake some community engagement to discuss some specific examples of how this approach would work for different groups and combine that with some risk assessments at the same time.

Despite the EU and Australia currently leaning toward risk-based approaches, I encourage the reader to look further at how rights-centric and risk-rights balanced approaches are advocated by different scholars and policy makers. For example, Chatham House, the Royal Institute of International Affairs (a world-leading policy institute in London), recently released a report, “AI governance and human rights: Resetting the relationship”, in which it recommended companies incorporate human-rights-based approaches. An article titled “Operationalising AI governance through ethics-based auditing: an industry case study” (Mökander & Floridi, 2022) addresses some of the pitfalls to avoid when considering risk-based approaches, such as “the difficulty of harmonising a ‘risk-based’ approach across an organisation that encompasses different understandings of ‘risk’”. The US White House indicated it will pursue a rights-based approach in its “Blueprint for an AI Bill of Rights”, noting that progress must not come at the “price of civil rights or democratic values”. And an article by policy strategists titled “The EU should regulate AI on the basis of rights, not risks” further explores where risk-centric approaches might fail.

Another excellent paper I recommend is by eminent scholars Vinodkumar Prabhakaran, Margaret Mitchell, Timnit Gebru, and Iason Gabriel, titled “A Human Rights-Based Approach to Responsible AI” (2022). In this paper the authors note: “we put forth the doctrine of universal human rights as a set of globally salient and cross-culturally recognized set of values that can serve as a grounding framework for explicit value alignment in responsible AI. . . We argue that a human rights framework orients the research in this space away from the machines and the risks of their biases, and towards humans and the risks to their rights, essentially helping to center the conversation around who is harmed, what harms they face, and how those harms may be mitigated.” (Prabhakaran et al., 2022). In a recent LinkedIn post, one of the authors of that paper, Iason Gabriel of DeepMind, summarises some key points, including:

1. It can help address the “global value alignment gap”.

2. It can help address the “responsibility gap” around AI design and deployment.

3. It can help bridge the “communication gap” between computer science and civil society.

Iason Gabriel, LinkedIn, July 2023

Read the full LinkedIn post here.

What do you think?

Perhaps all of this is something I should address in a separate post? Dear readers, is that something you are interested in? Would you like to leave your thoughts at the bottom of this post to further the discussion of rights versus risks as the central focus? What is your preference?

  1. Risk-centric
  2. Risk-centric with supplemental rights-focussed considerations.
  3. Balanced risk-rights approaches.
  4. Rights-centric with risk-focussed frameworks built on top.
  5. Rights-centric

What I do want to highlight here is that not everyone agrees risk-centric approaches are the silver bullet, and even if the Australian government and some large organisations choose that path, that does not mean we should consider rights-centric approaches closed.

On this topic, I think it would be especially useful to have a deeper conversation with Indigenous representatives about whether a risk-centric or a balanced rights-risk approach is seen as more appropriate by Indigenous communities.


Overall, I find the eSafety report Tech Trends: Generative AI to be an excellent starting point for an organisation to have a serious internal discussion about how it can responsibly adopt, deploy, and manage GenAI in its environment. The report is balanced in tone, well-researched, and highly accessible, and it provides both real-world case studies to learn from and pragmatic recommendations. I believe it is essential reading for all organisations.

We are still in the early days of GenAI; even though it may feel that recent progress has happened at lightning speed, there is more to come. The conversation must continue, and I urge as many people as possible to get engaged. Responsible adoption of AI is not just a discussion for government policy-makers, AI developers, and lawyers; everyone should feel included and safe in voicing their concerns. Everyone needs to be a part of this conversation, most especially the marginalised and those already experiencing the impacts of social inequality.

A huge thank you to Commissioner Julie Inman Grant and her team for the report and for including so many of my recommendations. I hope to continue the discussions in the future.

I am opening comments on this post – but be kind and respectful even if you disagree. Dissent is welcomed when articulated with coherent reason.
