EXPERT REACTION: Australia signs international AI declaration – what next?

Today, the Australian Science Media Centre released some short remarks from Australian AI experts on last week's Bletchley Declaration, including a short remark from yours truly (though I'm not quite a Dr yet, as the webpage indicates – a handful of months to go for that!).

The AusSMC asked us to comment on the Bletchley Declaration and the Australian Government's signing of that declaration in under 200 words, which, to be honest, is really tough on such a complex topic. I do think Australia was right to sign the Bletchley Declaration, and I think it was a good step in bringing *some* experts and leaders together. Here is my media quote:

AI governance in two tracks: the impacts of Now and the risks of Then.

AI helped lift John Lennon’s voice from the backing piano of a 1970s recording to create a new Beatles song, “Now and Then”; the same week brought us two important AI governance documents. In the US, the Biden administration released an “Executive Order on the safe, secure, and trustworthy development of AI”. In contrast, the UK’s “Bletchley Declaration”, under Sunak’s direction, took a different route. Whom Australia decides to follow will shape our future AI governance.

Whilst both documents seek to ensure a better AI-enabled world, their approaches couldn’t be more different. The US’s approach is comprehensive and nuanced, reflecting diverse AI experts’ perspectives: it exhibits a strong focus on human-rights principles, recognising AI as a deeply human-centric issue with wide-reaching socio-political implications. Meanwhile, the UK’s stance is steeped in existential-risk rhetoric, seemingly echoing the concerns of a particular faction frequently labelled the AI-Safety community, which tends to concentrate on the long-term implications of potential artificial general intelligence.

These documents echo the AI research community’s polarities: the immediate effects (“the Now”) and the potential future risks (“the Then”).

Australia’s choice in AI governance mirrors the task of isolating Lennon’s voice from its piano backdrop—distinguishing immediate human-centric concerns from the distant hum of existential risks. We stand at a juncture: to tune into the ‘Now’ with the US’s inclusive approach or to anticipate the ‘Then’ through the UK’s speculative lens.

Bec’s Scorecard:

  • The US Exec Order – 4 out of 5 stars
  • The Bletchley Declaration – 2 out of 5 stars
  • Now and Then – 4.5 out of 5 stars

In my opinion, the UK Safety Summit was a pale shadow of the US Exec Order in both sophistication and nuance. Unfortunately, I have seen too many opinions in Australia already jump on the risk-first approach indicated by the Bletchley doc. Risks are important, but they are often narrow, fail to capture larger socio-political driving forces, and risk leaving huge gaps for unknown unknowns.

The US Exec Order shows they have listened to a much more diverse range of AI experts and captured the systemic and human-centric nuances of the issue. Those who know me know that I rarely advocate the US as the leading example to follow; however, in this case, I do.

The UK Safety Summit came off as a lot of political posturing, throwing around bling-bling words like ‘foundation’ and ‘frontier’ and leaning on the historical ethos of Bletchley Park to prop up what is relatively hollow advice. Why on Earth was the Prime Minister of the UK interviewing a tech billionaire?! The Sunak interview of Musk was so awkward and hard to watch that I feel deep Fremdschämen (German for second-hand shame and embarrassment at the actions of another) for Sunak and his misplaced starry eyes.

“Sunak played the role of eager chatshow host, keen to elicit drawled lessons on love, life and technology from the billionaire superstar sitting next to him. “I’m very excited to have you here,” said Sunak, taking his jacket off and leaning forward in his chair. “Thanks for having me,” said Musk, relaxing back into his.”

Kiran Stacey, The Guardian

The Bletchley Declaration is steeped in x-risker rhetoric and fails to capture many of the real issues of Responsible AI. The Exec Order, whilst a bit shy of teeth, provides a lot more leading-edge clarity on things like standards in model evaluations (a main flag I am always waving). So, whilst there is no harm per se in Australia signing Rishi’s doc, I hope that our government and policy decision-makers won’t be deluded into drinking this Kool-Aid.

Organisations and governments should map out the risks of AI deployment and use. But they must not allow this one risk-mitigation tool to stand in as a silver bullet for what is a much more complex problem, deeply entwined in our social fabric. A rights-centric approach with additional risk considerations is much more likely to place at the forefront not only ‘humans’ in general, but those who have already been systemically oppressed and harmed by existing societal structures.

I strongly urge Australian political and organisational leaders to look a lot more closely at the US Exec Order for guidance.

If you check out the AusSMC page, make sure you scroll down to Toby Walsh’s comments, which I fully agree with.