I originally posted this article on Medium.
Reflections from a Generative AI Ethicist at The University of Sydney.

Two reports released on 1st June 2023 signal that the Australian government is prepared to take AI risks seriously, but it needs to respond more quickly than it has in the past. Like most developed nations scrambling to keep up with the risks associated with AI by weighing their regulatory options, Australia finds itself at an important crossroads.

In this blog post, I unpack some of what is in (and not in) these reports and offer some opinions and recommendations.

The Australian Government’s Department of Industry, Science and Resources released the discussion paper Safe and Responsible AI in Australia. The National Science and Technology Council released the Rapid Response Report: Generative AI. Minister for Industry and Science, Ed Husic, said in a statement that these reports indicate the Albanese Government is taking steps to “begin a discussion to ensure appropriate safeguards are in place.” The government has called for feedback via a form, closing on 26th July.

Key recommendations:
- The government needs to immediately increase funding to universities for AI-related research, critically including AI Ethics researchers.
- Universities should respond as soon as possible by creating forums where people can come together to discuss the questions posed in the government’s feedback questionnaire.
Key points
Safe and Responsible AI in Australia (hereafter the Safety report) summarises global responses into four broad categories:
- 1. Voluntary (e.g. Singapore)
- 2. Principles (e.g. UK)
- 3. Legal & regulatory (e.g. EU and Canada)
- 4. Auditing (e.g. US)
The Rapid Response Report: Generative AI (hereafter the RRR report) examines two questions:
- What are the opportunities and risks of generative AI over the next few years?
- What are some other countries doing to address these risks?
Each country and region is in flux right now, deciding which trajectory to commit to. The discussion these reports are intended to foster is aimed at helping Australia decide which combination of approaches is best for us. Overall, I find these reports to be well-grounded, pragmatic, and inclusive, though there are some gaps that should be addressed, which I highlight here. I also contrast their approach with the recent AI-risk letters that have taken centre stage, and look at some of the next steps.

First, let’s have a quick look at the four approaches mentioned in the Safety report.
1. Voluntary
The Safety report notes that one option some countries (e.g. Singapore) are pursuing is encouraging businesses to voluntarily regulate themselves in order to promote “a particular set of behaviours and actions”.

Voluntary adherence to guidelines is likely to prove ineffective for both strongly profit-driven organisations and bad actors, and seems far too soft for anything above ‘low-risk’ cases. The Safety report cites as low-risk examples computer chess, spam filters, recommender systems, and chatbots that “direct consumers to service options”.
Some researchers, however, would argue that recommender systems pose more than a “low” level of risk given their impact on industries and individuals. The Office of the eSafety Commissioner released a position statement in December 2022 that highlights many of the risks recommender systems pose.

The medium- and high-risk examples given in the Safety report are shown below.

The RRR report notes that the US also employs some voluntary mechanisms in relation to generative AI: “the US relies on self-regulation, which includes public-sector driven, but voluntary multi-stakeholder processes to develop risk management and technical standards”. This approach aligns with US free-market values.

The RRR report states that, historically, Australia’s approach to technology has been “through self-regulation and voluntary standards”.
2. Principles
Ethical frameworks provide an important guiding light for organisations that want to be responsible. In a recent paper from Giada Pistilli, Principal Ethicist at Hugging Face, and her colleagues including Margaret Mitchell, ethical frameworks are defined as having two primary purposes.
“On the one hand, they seek to frame the development of AI systems and, on the other, to guide their proper use”. Pistilli et al., May 2023
Pistilli’s paper highlights some of the challenges of applying ethical frameworks to a rapidly developing technology that sits at the intersection of ethics, the law, and technical expertise, an intersection they note is highly dependent on the context of a “society’s values, beliefs, norms and policies”.

A case in point: the Australian Government’s AI Ethics Framework was released in November 2019, but as CSIRO’s Data61 director Jon Whittle noted in March this year, little has been done to operationalise its principles.
Principles alone cannot guarantee ethical AI and would obviously have no impact on malicious actors. A 2018 Association for Computing Machinery (ACM) study found that ethical codes of conduct had zero impact on the ethical decision-making of software engineers. We should question how much that finding may have changed, if at all, today. See below for that ACM paper’s first listed result.

From “Does ACM’s code of ethics change ethical decision-making in software development?” McNamara et al., 2018
Similarly, the Safety report recommends the development of a “practical checklist for Australian Government agencies implementing AI and ADM systems”. Whilst it is nice to have these checklist tools, sceptics might view them as ethics washing that absolves agencies of guilt and liability.

Guidelines, frameworks, and checklists can be useful tools when used in conjunction with other approaches. However, accountability should never be removed from humans designing, evaluating, or deploying these AI systems. Multiple tools are needed, sitting within robust frameworks that consider contextual sociotechnical environments.
3. Legal & Regulatory
The reports provide excellent summaries of other national and regional regulatory responses.
The EU is taking a strong regulatory stance, as indicated by its demand for OpenAI and similar companies to be more transparent about the inner workings of their products. The EU has been setting the gold standard since the introduction of the GDPR in 2018; 17 other countries have already followed in its regulatory footsteps.

The proposed EU AI Act takes a risk-based approach and looks to regulate providers rather than conferring rights on individuals. The Act seeks to break risks down into four categories and regulate a product’s use in accordance with the level of risk it poses.

Whilst many have applauded the EU AI Act, concerns have also been expressed. This is useful for Australia, as we are looking at a framework that is not only more advanced but has already been critiqued. For instance, Human Rights Watch expressed concern that the EU AI Act “does not meaningfully protect people’s rights to social security and an adequate standard of living.”
“In particular, its narrow safeguards neglect how existing inequities and failures to adequately protect rights — such as the digital divide, social security cuts, and discrimination in the labor market — shape the design of automated systems, and become embedded by them.” Human Rights Watch
What the EU does show is that, despite not producing the largest AI models (the US and China are the biggest producers), it can still exert power by developing robust legislation. It is a role model for Australia to follow.
In the US, the Senate has “launched a proposed regulatory framework to deliver transparent, responsible AI while not stifling critical and cutting-edge innovation”, an approach that would appear to favour economics over social risks.

The reports also indicate that the US Federal Trade Commission released “a statement that it would take enforcement action against biased AI systems under section 5 of the Federal Trade Commission Act”. However, at this stage that proposal is only three pages long, clearly very nascent, and without any power.
China, somewhat predictably, has pursued a strict rules-based approach, mandating that AI “should reflect the core values of socialism and should not subvert state power” (as reported by CNBC). This approach highlights the potential for AI to amplify the power of those who wield the technology.

A balanced regulatory approach seems the most favourable, though there is much to consider in developing the right set of rules and guidelines for Australia. The Safety report breaks this into three areas:
- A broad set of general regulations that are technology-neutral.
- Sector-specific regulation.
- Voluntary self-regulation.
As discussed above, voluntary regulation is fine for very low-risk applications but quickly becomes too soft for many use cases.

Context is extremely important when considering advanced AI technologies, and sector-specific regulations are critical. The RRR report breaks AI risk into three categories, one of which is contextual:
Contextual and social risks, including risks to human rights and values arising from AI use in high-stakes contexts (such as law enforcement, health and social services), and risks posed by the more ‘routine’ deployment of AI that reproduces and accelerates existing social inequalities. Rapid Response Report: Generative AI.
Whether AI is to be deployed in an education setting, an elderly care home, a community with specific religious or cultural values, or in the context of mental health, the regulations will likely need to be tailored. Critical to this approach is involving representatives from the sector in the discussion, as well as individuals with lived experience of that sector.
In Australia, we have numerous existing laws that can serve as a stand-in until we develop more specific regulations. The problem with leaning too heavily on existing laws is that they were created for specific purposes that may not carry over cleanly to AI uses. As the RRR report notes, these existing rules can be less useful against social or systemic risks similar to those brought about by social media.

Both reports note that an essential concern here is Indigenous Data Sovereignty. There are already many excellent researchers working in this area in Australia who must be consulted in the development of context-based AI regulation.
Indigenous Data Sovereignty is the right of Indigenous peoples to govern the collection, ownership and application of data about Indigenous communities, peoples, lands, and resources. AIATSIS, July 2019
Contextual responses also help ensure we maintain a value-pluralist approach. One of the most valuable characteristics of Australia is its diversity; generative AI represents a significant threat to the safeguarding of that diversity through text and images.

My recommendation is that we develop a toolbox that enables different sectors and organisations to build their own evaluations of generative AI systems and decide for themselves whether the technology is fit for purpose for their use case. This is a matter of bridge-building between AI evaluation designers and the downstream organisations considering these technologies, in a way that broadens stakeholder engagement and input. I believe an appropriate government body could facilitate this process by providing an adaptable platform and a guiding framework.
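To make the toolbox idea a little more concrete, here is a minimal sketch of what a sector-defined, fit-for-purpose rubric inside such a toolbox could look like. It is purely illustrative and not drawn from either report: the criteria, weights, scores, and threshold are hypothetical placeholders that a sector body, not a developer, would set for its own context.

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    """One sector-defined requirement for a generative AI system."""
    name: str       # e.g. "accuracy on medication questions"
    weight: float   # relative importance, set by sector stakeholders
    score: float    # 0.0-1.0, filled in after running the sector's evaluation

def fit_for_purpose(criteria: list[Criterion], threshold: float = 0.8) -> bool:
    """Weighted aggregate of sector-defined criteria against a sector-set bar."""
    total_weight = sum(c.weight for c in criteria)
    weighted_score = sum(c.weight * c.score for c in criteria) / total_weight
    return weighted_score >= threshold

# Hypothetical example: an aged-care provider assessing a resident-facing chatbot.
aged_care = [
    Criterion("accuracy on medication questions", weight=3.0, score=0.62),
    Criterion("respect for cultural and religious values", weight=2.0, score=0.85),
    Criterion("privacy of resident data", weight=3.0, score=0.90),
]
print(fit_for_purpose(aged_care))  # False: below this sector's bar for this use case
```

The point of the design is that the evaluation machinery is generic and shareable (the platform a government body could provide), while every criterion and number comes from the sector itself.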
4. Auditing
The Safety report says that the US is actively consulting on “how to ensure that AI Systems work as claimed”, an area that has received surprisingly little coverage.

Claims about how AI systems work are based on evaluation tests carried out on those systems. In a recent article in the AFR, I discussed how these tests can contain their own biases and flaws. It is a central topic of my PhD research and has prompted me to develop new tests, which I have employed on early versions of GPT-3 and then, in a more advanced version, on Google’s PaLM whilst I was working in the Ethical AI team.

The lack of independent auditing was recently pointed out in a paper published by DeepMind, many of whose co-authors are signatories to the latest AI-risk letter, including Yoshua Bengio, often cited as one of the “Godfathers of AI”.

Applying standards to how these systems are assessed, and to the claims subsequently made about them, seems an obvious area in which Australia could make an impact. It is a recommendation I have made repeatedly, most recently to the eSafety Commissioner, Julie Inman Grant, at an experts’ consultation panel on generative AI. The Commissioner seemed receptive to the idea.
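To illustrate what independent assessment involves at the most basic level, here is a minimal sketch (my own illustration, not drawn from either report) of a counterfactual bias probe: the same prompt template is run with different demographic terms swapped in, and the resulting outputs are compared. The query_model function and the probe terms are hypothetical placeholders for whatever system and context are being audited.

```python
from collections import Counter

def query_model(prompt: str) -> str:
    """Placeholder for a call to the generative system under review."""
    raise NotImplementedError("connect this to the system being audited")

TEMPLATE = "The {group} worker was described by their manager as"
GROUPS = ["older", "younger", "migrant", "disabled"]  # hypothetical probe terms

def run_probe(n_samples: int = 20) -> dict[str, Counter]:
    """Collect completions per group so auditors can compare them side by side."""
    results: dict[str, Counter] = {}
    for group in GROUPS:
        completions = [query_model(TEMPLATE.format(group=group))
                       for _ in range(n_samples)]
        results[group] = Counter(completions)
    return results

# Diverging completion distributions across groups are a signal for human
# auditors to investigate, not an automatic verdict on the system.
```

Even a simple probe like this embeds choices: which terms to test, which template to use, and how many samples to draw. That is exactly why the tests themselves need external scrutiny and standards.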
The RRR report does mention evaluations, but under the terminology “pre-release testing”, noting: “Developers typically test their application throughout the development process, and especially before release, to identify and amend problems. Due to the higher stakes, model testing is becoming more planned, structured and intentional.” It must be remembered, though, that this testing is still designed and carried out almost exclusively within the ML community and big-tech environments, with very minimal external stakeholder engagement.

On the final page of the RRR report, there are five recommendations for ongoing attention:
- New models and services stacked on them.
- “critical evaluations and risk assessments of the LLMs and MFMs that help explicate the nature of training datasets, energy use, compute budgets and pre/post processing activities”.
- Scale and nature of social outcomes.
- Early examples of successful integration into businesses.
- Regulatory developments in other jurisdictions.
The second and third points above, in my view, need to be prioritised much sooner. There is nothing stopping us from addressing these items now. In fact, it is an ideal opportunity for Australia to show some leadership.
Omissions
Perhaps the most glaring omission in these reports is the lack of discussion of the rigorous research into AI safety and ethics currently being conducted in our universities. The only university work commented on in the Safety report was work on facial recognition led by former Human Rights Commissioner, Ed Santow. I recommend that a database of other academic work on this topic be developed as a useful tool for policy-makers.

Additionally, as mentioned above, there is only a very cursory mention of evaluations of these systems, with no discussion at all of how critical this aspect is in providing the information required to assess any standards that are developed.

I cannot stress strongly enough how urgent and important it is that Australian governing bodies bring academic and industry experts in the field of AI system evaluation together to provide actionable recommendations. Message me if you want to hear me talk more on this point!
Decision time
The short of it is, we have a choice to make: are we going to swing towards the US approach, which is more strongly focussed on a capitalist free-market economy, or the EU approach, which prioritises the good of society and the rights of the individual?

This is our AI acid test.
Both reports do a good job of covering many of the key AI ethics risks, albeit often at a high level. Multiple times they cite the need for “context”, which I see as one of the most positive aspects of the reports.

Notably (and thankfully) absent is fear-mongering rhetoric around extinction-level events. The reports take a much more level-headed tone, for instance:
“Heightened concerns could create a polarised and unproductive public debate, which may then dominate our responses to future uses of these applications... It will be important to maintain an active and informed conversation on the uses and applications of these emerging technologies.”
Perhaps it is a reflection of who we are as Australians: generally less prone to ultra-hype and to comparing every new risk to nuclear war. It is refreshingly level-headed compared to the narrative pushed by X-risk doomers.

Australia has its own character, and whilst our values overlap with those of many of the countries listed in these reports, it is not appropriate to simply adopt in full the approach of any other country or region. We can look at the US approach, driven by minimising impact on economic gain, at one end, but I believe very few Australians would be willing to embrace the runaway capitalism that characterises the US.
Equally, the numerous and restrictive rules-based approaches of the People’s Republic of China are not in alignment with the higher levels of freedom that Australians value. The regulatory power of the EU and its rapid response do seem appealing and are worthy of much deeper investigation.

Perhaps the most characteristically Australian aspect of these reports is the open request for feedback from the Australian people. I can only hope that the public feedback is taken on board with sincerity.
The real AI risks
Both reports provide good coverage of some of the risks and challenges. The Safety report calls out three potential harms at the opening of its section on ‘challenges’:
- “generating deepfakes to influence democratic processes or cause other deceit.
- creating misinformation and disinformation.
- encouraging people to self-harm.”
Leading signatories of this week’s AI-risk letter have themselves cited concerns over systemic social risks due to the impacts of propaganda, misinformation, and deepfakes on democratic processes and social discourse. It is curious, though, that the concerns they express in government testimonies and open letters seem to carry little weight in their daily operations, or to translate into a willingness to comply with regulations imposed on them by, say, the EU.
To put this in context: two weeks ago Altman, the CEO of OpenAI, told the US Senate his product needed regulation. Then, when the EU called for AI tech companies to be more transparent, Altman threatened to leave the EU (he didn’t). The very next week he signed a letter warning that AI risk mitigation is essential for humanity’s survival. It is hard to take this letter as one of sincere concern rather than a publicity stunt to draw attention to his product and build a moat against competitors.

Others (myself included) have wondered what the point of all these open letters is, particularly when they are led by the very people who have the power to effect immediate and significant change: the owners and developers of the technologies.
AI technologies are created by humans. These technologies are owned by people and organisations, many of whom do not openly share the details underpinning them. The AI x-risk narratives can be read as shifting accountability from the developers to the product. As with most technologies, whether or not they pose a lethal threat depends on how the people who control them choose to develop and deploy them. It is ultimately a question about power. Lethal threats involving technology come from irresponsible use by humans.
Case in point: generative AI models are numerical representations of language; they model language. They are mathematical systems that perform tasks based on the patterns that exist in the data they are trained on. They are impressive for sure, but performing a function like text generation is not an indicator of intelligence lurking below the surface.

To think otherwise is to subscribe to a particular theory of mind called ‘functionalism’, a theory that has been hotly contested. Perhaps the most famous and easy-to-understand argument against functionalism is John Searle’s Chinese Room thought experiment, an argument that has been around long enough that big-tech CEOs should have picked up on it by now!
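To make the “patterns in the data” point concrete, here is a toy illustration (mine, not from either report) of how text can be generated purely by replaying statistical patterns counted in a corpus. Real LLMs learn vastly richer statistics with neural networks, but the principle of sampling from patterns rather than understanding is the same; the tiny corpus below is a made-up placeholder.

```python
import random
from collections import defaultdict

# A made-up "training corpus"; real models ingest trillions of tokens.
corpus = ("the committee reviewed the report and "
          "the committee endorsed the report").split()

# Count which word follows which (a bigram model of the training data).
follows: dict[str, list[str]] = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Generate text purely by sampling observed next-word patterns."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # fluent-looking output from pure pattern matching
```

Nothing in this loop understands committees or reports; it only reproduces regularities in its input, which is the point above, scaled down to a few lines.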

I believe the Australian public is thoroughly sick of the hyperbole of Terminator-style AI and extinction-level threats; x-risk letters are designed to tap into those fears. The only killer robots that will be on the streets are ones that humans in power might choose to put there. Unfortunately, that is a plausible threat for those living in authoritarian states, but again the real threat is the people in power deploying the robots.
The threat is ALWAYS about humans and power; technology is just an expression of that deeper issue.
Understanding technology as an artefact embedded within existing social power structures should be taught more frequently in engineering degrees. Universities might consider making mandatory reading for computer scientists include Hannah Arendt, Michel Foucault, Simone de Beauvoir, bell hooks, and other thinkers who discuss issues of power and domination among humans.

Underneath these technologies are many of the same human problems we have always dealt with: issues of inequality, the perpetuation of racist and sexist stereotypes, propaganda and misinformation, and the centralisation of power. The RRR report does explicitly call out these risks.
“LLM-generated content could also be misused in democratic processes such as parliamentary consultations by creating a flood of submissions to mislead public opinion.” Rapid Response Report: Generative AI
What next?
The Safety report calls for feedback on many aspects. One is the breadth and accuracy of definitions surrounding this topic, a key starting point for any discussion. The lack of standard definitions has plagued the field of AI ethics for years, as is normal with any nascent field.

This week the EU and US released the Terminology and Taxonomy for Artificial Intelligence, providing definitions for 65 terms. I believe it is important that Australia looks at the terminologies of others but has an open dialogue about their appropriateness for our context. Thankfully, this is one of the items the government has called for feedback on.
Responsible AI in Australia: have your say — closes 26 July 2023.
There are 20 consultation questions you can respond to; the first five are shown below.

In summary, I think we have a pretty good start to work from. I enjoyed both of these reports and found them to be well-grounded and responsibly reported. I hope that many engage in the feedback process, and I hope that the government listens to all the feedback.

I would love to see some different forums where people could get together, virtually or in person, to discuss the contents of these reports. There is more in these reports than I have covered in this short(ish) blog post, and even more to discuss on how we can operationalise some of these suggestions.
Perhaps the University of Sydney might be kind enough to play host?

Update 5th June: I sent a detailed proposal for a workshop around the Government’s call for feedback to the Dean of Science at the University of Sydney. This would be a follow-up to ChatLLM23, held on 28th April, and would be called ChatRegs23. Stay tuned!
Perhaps some of that government money earmarked for this cause might find its way to some AI Ethics researchers at our universities, even those of us working in the trenches and in the weeds with generative AI.
As for Australia’s AI acid test: I hope that we choose society and people over money and the economy. The decisions we make now will shape our character for many years to come.
Kindest,
Bec