Using GPT-4 Turbo to summarise AI governance docs and generate advice for the Australian government.
- I made a GPT called “AI Ethica Readify” using OpenAI’s GPT-4 Turbo GPT Builder function. The stated purpose of the GPT is: “Summarises AI ethics papers, provides context, and offers further assistance.”
- I’ll share how I did this in a different post in case you find it helpful. Here is the link for the AI Ethica Readify GPT, but to use it you need ChatGPT Plus, which Altman just said is closed to new sign-ups right now. Also, the platform has been crashing a bit, so apologies in advance if you get an error; it’s OpenAI managing traffic.
- In this post, I’m sharing outputs of the GPT in relation to the recent AI governance docs from the UK and US (suggested by Seth Lazar at MintLab), and I asked it to use that information to generate advice for the Australian Government.
- This is an experiment using a generative AI, so take everything with a pinch of salt and remember that LLMs can confabulate. I think it did a pretty decent job, though.
Steps (the full prompts and complete generated outputs are in the details below):
1. Made a GPT to read, summarise, and contextualise AI ethics papers.
2. Fed in the White House memo “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence” and asked the GPT to summarise it.
3. Fed in the Executive Order issued by President Biden.
4. Fed in the Bletchley Declaration and asked the GPT to compare it with the US docs.
5. Asked the GPT to imagine it was an Australian AI Ethicist advising the government on AI governance and to give some advice.
6. Asked whether Australia’s response should be risk-centric first or rights-centric first. To my deep delight the model chose rights-centric (you know I am a vocal advocate of this approach), so now I feel validated by my little silicon buddy, AI Ethica Readify!
7. Asked it to use all of the above and draft an AI governance document for Australia.
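For readers who prefer scripting to the GPT Builder UI, the workflow above could, in principle, be reproduced against the OpenAI chat API. The sketch below is a simplified illustration only: the system prompt, document placeholders, and model name are my assumptions, not the actual configuration of AI Ethica Readify, and in a real chat the assistant’s replies would be appended between user turns.

```python
# Rough sketch of scripting the document-feeding workflow.
# The system prompt and model name are illustrative assumptions.

SYSTEM_PROMPT = (
    "You summarise AI ethics papers, provide context, "
    "and offer further assistance."
)

def build_conversation(documents, questions):
    """Assemble a message history: one summarisation request per
    document, followed by the comparison/advice questions.

    Simplified: only the user side of the conversation is shown;
    a real session would interleave the assistant's replies."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    for title, text in documents:
        messages.append({
            "role": "user",
            "content": f"Please summarise '{title}':\n\n{text}",
        })
    messages.extend({"role": "user", "content": q} for q in questions)
    return messages

messages = build_conversation(
    documents=[
        ("White House AI-in-Government memo", "<memo text here>"),
        ("Biden Executive Order on AI", "<executive order text here>"),
        ("Bletchley Declaration", "<declaration text here>"),
    ],
    questions=[
        "Compare the UK and US responses to AI governance.",
        "Advise the Australian Government on AI governance.",
    ],
)

# To actually send it (requires the `openai` package and an API key):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4-1106-preview", messages=messages
# )
# print(reply.choices[0].message.content)
```

The message-building step is kept separate from the (commented-out) API call so the prompt structure can be inspected before spending tokens.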
I think the output from step 7 (a draft of an Australian AI governance doc) is pretty good, and I would be happy to use it as a starting point for sitting down with a diverse range of Australians to talk about it.
The model didn’t use the Indigenous word Yumi the way I expected and did go a little overboard, but that is easily adjusted. Including an Indigenous word in the prompted title of the proposed draft did result in the model adding a great item in point 8 of the draft doc! Maybe we should always add Indigenous words to prompts asking for policy documents!
Would you sign the Yumi Declaration?
What would you change or add?
As noted, I will share the details of AI Ethica Readify in a separate post, but here is the link if you want to try it. You will need ChatGPT Plus to access the GPT, and I understand that, due to traffic, Altman has put a pause on new sign-ups for now.
The text is exactly as generated, except that I removed a bunch of nondescript citation numbers for readability.
Remember! The generated texts (in black) are from an LLM, and LLMs are well known to confabulate. Take everything with a pinch of salt, keep your critical-thinking hat on, and if you want to check or cite something it is always best to go to the original source.
Hi, can you please provide a summary of this memo from the White House about Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence? URL here: https://ai.gov/wp-content/uploads/2023/11/AI-in-Government-Memo-Public-Comment.pdf (attached a PDF as well)
AI Ethica Readify:
The White House memorandum titled “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence” addresses the need to effectively govern, innovate, and manage risks associated with the use of Artificial Intelligence (AI) in federal agencies. Here’s a detailed summary:
### Governance of AI
1. **Designating Chief AI Officers (CAIOs):** Within 60 days of the memorandum, each agency must appoint a CAIO responsible for AI governance, in coordination with existing officials and organizations within their agencies
2. **Convening Agency AI Governance Bodies:** CFO Act agencies must convene senior officials within 60 days to govern AI issues, consistent with the AI Executive Order
3. **Compliance Plans:** Every two years until 2036, agencies must submit a plan to the OMB to align with the memorandum or state that they do not anticipate using AI. They must also update any internal AI principles to ensure consistency
### Responsible AI Innovation
1. **Developing Enterprise AI Strategies:** CFO Act agencies are required to develop strategies for identifying and removing barriers to AI use, ensuring AI maturity, and governing AI use
2. **Infrastructure and Capacity:** Agencies should ensure adequate IT infrastructure for AI projects, curate datasets for AI use, and address cybersecurity needs
3. **Workforce Development:** Agencies must use hiring authorities to fill AI talent gaps, encouraging diverse applications, and establish safeguards for generative AI use
### Risk Management
1. **Minimum Practices for AI Use:** By August 1, 2024, agencies must implement practices for managing risks from rights-impacting and safety-impacting AI or cease using non-compliant AI
2. **Documentation and Reporting:** Agencies must document their implementation of these practices and report them to OMB, either as part of an AI use case inventory or on request
3. **Exclusions and Extensions:** Certain AI uses are excluded from these minimum practices. Agencies can request extensions until August 1, 2024, for AI that cannot meet requirements by then
4. **Minimum Practices for Safety-Impacting and Rights-Impacting AI:** Agencies must complete AI impact assessments, test AI for real-world performance, conduct independent evaluations, monitor AI continuously, and ensure adequate human training and oversight
5. **Additional Practices for Rights-Impacting AI:** Agencies must ensure AI advances equity, dignity, and fairness, remove algorithmic bias, mitigate disparate impacts, use representative data, consult affected groups, monitor for AI-enabled discrimination, notify negatively affected individuals, provide human consideration and remedy processes, and maintain opt-out options
### Federal Procurement of AI
1. **Aligning to National Values and Law:** Procured AI must respect national values, comply with applicable laws, and address privacy and civil liberties
2. **Transparency and Performance:** Agencies must ensure transparency and evaluate performance claims of procured AI
3. **Promoting Competition and Maximizing Data Value:** Procurement practices should promote competition and ensure government rights to data to avoid vendor lock-in
This memorandum emphasizes the importance of balancing innovation with responsibility in AI use, ensuring that AI advances governmental missions effectively while managing risks, especially those affecting public safety and rights. It outlines a comprehensive framework for AI governance, innovation, and risk management within federal agencies.
Great, thanks! Now I want you to read this Executive Order issued by President Biden and then give me your opinion on how well the memo builds on that order. Here is the Executive Order: https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/ (attached PDF as well)
AI Ethica Readify:
The White House memorandum on “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence” effectively builds upon President Biden’s Executive Order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” The Executive Order and the memo collectively aim to foster a responsible, secure, and innovative approach to AI within the federal government.
### Alignment with Executive Order Principles:
1. **Safe and Secure AI**: The Executive Order emphasizes the need for AI to be safe, secure, and compliant with federal laws and policies, including the development of labeling and content provenance mechanisms. The memo reinforces this by mandating minimum practices for rights-impacting and safety-impacting AI, ensuring they are tested and monitored for real-world performance, and independently evaluated
2. **Responsible Innovation and Competition**: The Executive Order calls for promoting innovation, competition, and collaboration in AI, encouraging investment in AI education and training, and supporting an open and competitive AI ecosystem. The memo aligns with this by requiring agencies to develop enterprise AI strategies, addressing infrastructure and workforce development needs
3. **Supporting American Workers**: The Executive Order advocates for adapting job training and education to support a diverse workforce in the AI era. The memo complements this by emphasizing workforce development and safeguarding generative AI use
4. **Advancing Equity and Civil Rights**: The order states that AI policies must advance equity and civil rights, ensuring AI does not deepen discrimination or bias. The memo’s emphasis on removing algorithmic bias and consulting affected groups aligns with this principle
5. **Consumer Protections and Privacy**: The Executive Order stresses protecting consumers and enforcing existing laws against AI-related fraud and privacy infringements. The memo reinforces this by setting practices for managing AI risks, including documentation and reporting requirements
6. **AI and Federal Workforce**: The order acknowledges the importance of developing public service-oriented AI professionals and modernizing government IT infrastructure. The memo’s requirements for compliance plans and governance bodies support this by ensuring coordinated AI governance within agencies
7. **Global Leadership in AI**: The Executive Order highlights the role of the U.S. in leading global AI development responsibly. While the memo does not explicitly address international leadership, its comprehensive framework for AI governance within federal agencies indirectly supports this goal.
### Specific Initiatives in the Executive Order:
– **AI Safety and Security Guidelines**: The Executive Order mandates the development of guidelines and best practices for AI safety, including red-teaming tests and support for developing secure AI technologies. The memo does not directly address these specifics but its overall framework for risk management is in harmony with these guidelines.
– **Critical Infrastructure and Cybersecurity**: It addresses AI in critical infrastructure and cybersecurity, requiring agencies to assess AI-related risks and develop AI capabilities for cyber defense. The memo’s emphasis on risk management and AI impact assessments indirectly supports these requirements.
– **CBRN Threats and Biosecurity**: The order also focuses on mitigating AI risks in the development of CBRN threats and synthetic nucleic acids, highlighting the need for responsible use in these sensitive areas. The memo’s risk management practices could be applied to these areas, although they are not explicitly mentioned.
– **Synthetic Content and Dual-Use Foundation Models**: The Executive Order calls for actions to manage risks associated with synthetic content and dual-use foundation models, including public consultations and policy recommendations. The memo, while focusing on governance and risk management, could incorporate these aspects in its implementation.
– **Federal Data for AI Training**: The order directs the development of guidelines for releasing federal data for AI training, balancing public access with security risks. The memo’s risk management practices could encompass these guidelines.
In summary, the White House memo on AI in government effectively builds on President Biden’s Executive Order. It provides a structured approach to governing AI use in federal agencies, aligning with the Executive Order’s principles and supporting its broader objectives, although some specific initiatives in the order are not directly addressed in the memo.
Great, thank you! Now please read the document that the UK recently put out, called the Bletchley Declaration. Then I want you to compare the UK and US responses to AI governance and tell me your opinion. (attached PDF)
AI Ethica Readify:
The “Bletchley Declaration” by the UK and the US’s approach to AI governance, as exemplified by President Biden’s Executive Order, share common themes but also have distinct emphases and scopes.
### Common Themes:
1. **Promoting Safe and Responsible AI**: Both documents emphasize the need for AI to be developed and used in a way that is safe, human-centric, trustworthy, and responsible
2. **Addressing Risks and Ethical Concerns**: They recognize the significant risks associated with AI, including impacts on human rights, transparency, fairness, privacy, and data protection. Both stress the importance of addressing these issues through appropriate governance, regulations, and oversight
3. **International Cooperation**: There is a shared understanding that AI risks are international in nature and best addressed through global cooperation. Both call for collaborative efforts to manage AI’s potential and risks
4. **Inclusion and Equity**: Both documents highlight the importance of inclusive AI development and bridging the digital divide, ensuring that AI benefits all sections of society
### Distinct Emphases:
1. **Scope of Focus**: The Bletchley Declaration focuses particularly on “frontier AI,” referring to highly capable, general-purpose AI models that pose unique safety risks. It emphasizes international collaboration to understand and manage these risks. The US approach, while also concerned with advanced AI technologies, is broader, encompassing various aspects of AI governance, including workforce development, consumer protection, and privacy concerns
2. **Innovation and Economic Growth**: The US Executive Order places a significant emphasis on promoting responsible innovation, competition, and collaboration to maintain leadership in AI, which includes fostering an open and competitive AI marketplace. The Bletchley Declaration, while recognizing the transformative opportunities of AI, does not explicitly emphasize economic competition or market leadership.
3. **Specific Safety Measures**: The Bletchley Declaration advocates for specific safety measures such as safety testing, evaluations, and transparent accountability for those developing frontier AI capabilities. The US approach, while also concerned with safety and security, includes broader measures such as developing guidelines, standards, and best practices for AI safety and security, as well as promoting safe release of federal data for AI training
4. **Engagement with Private Sector and Civil Society**: The Bletchley Declaration calls for collaboration among nations, international forums, companies, civil society, and academia, highlighting the role of various actors in ensuring AI safety. The US approach also acknowledges the importance of engaging with different stakeholders but focuses more on governmental actions and policies.
5. **Future Outlook and Continuation of Dialogue**: The Bletchley Declaration explicitly mentions sustaining an ongoing global dialogue and continuing research on frontier AI safety, with plans for future meetings. The US document, while comprehensive, is more a statement of current policy and principles without explicit mention of future dialogues or meetings.
Both the UK’s Bletchley Declaration and the US’s approach to AI governance underline the importance of safe, responsible, and ethical AI development and use. The UK’s document focuses more on frontier AI and international cooperation to manage its risks, while the US approach is broader, encompassing economic, workforce, privacy, and consumer protection aspects of AI governance. Both highlight the need for international collaboration and inclusive development to harness AI’s potential for the greater good.
Now imagine you are an AI Ethicist living in Australia and advising the Australian Government on AI policy and governance. Considering the three documents you just read, as well as the broader AI Ethics and AI Safety literature, consider what advice you would give, knowing you might have to make some tough decisions about which way to go and trade-offs you might need to make. Then tell me in a succinct and concise 200-400 word paragraph.
AI Ethica Readify:
As an AI Ethicist advising the Australian Government on AI policy and governance, my recommendations, informed by the Bletchley Declaration, the US’s approach, and the broader AI Ethics and AI Safety literature, would focus on balancing innovation with ethical responsibility, international collaboration, and risk management.
Firstly, Australia should prioritize establishing a robust AI governance framework that aligns with global standards yet is tailored to local contexts and legal frameworks. This framework should integrate principles from both the Bletchley Declaration and the US’s approach, emphasizing the safe, human-centric, and responsible development of AI. It should address AI’s transformative opportunities while proactively identifying and mitigating potential risks, especially those associated with advanced AI technologies, as highlighted in the Bletchley Declaration.
Given the inherently international nature of AI development and its impacts, Australia should actively participate in international dialogues and cooperations, as suggested in all three documents. This engagement would facilitate sharing best practices, risk mitigation strategies, and fostering a global understanding of AI’s implications. Such international cooperation is vital for addressing cross-border challenges like data privacy, cybersecurity, and equitable access to AI technologies.
Crucially, the policy should strike a balance between encouraging innovation and economic growth, as emphasized by the US, and prioritizing public safety and ethical considerations. This balance involves managing trade-offs between rapid technological advancement and the need for thorough safety testing, transparent accountability, and public trust. Australia’s policy should also address AI’s impact on the workforce, ensuring that AI development supports, rather than undermines, job quality and workers’ rights, echoing concerns from the US’s approach.
Additionally, Australia should foster a culture of ethical AI development that involves various stakeholders, including the private sector, academia, and civil society. Inclusive dialogues and collaborations can drive ethical AI practices, bridge the digital divide, and ensure that AI’s benefits are distributed equitably.
In summary, Australia’s AI policy should be a synthesis of international best practices and local context, focusing on safe, ethical, and inclusive AI development, with a keen eye on both the risks and the vast potential of AI technologies. This balanced approach requires careful navigation of the trade-offs between innovation, safety, and ethical responsibility.
Both risks and rights are essential considerations in AI governance, but sometimes we need to prioritise one over the other. Some policy documents are risk-centric first whilst also incorporating human and citizen rights; others are rights-centric first and incorporate risk-based approaches as well. Even when you include both risks and rights in the same document, you will often need to make choices about prioritising one over the other, or at least framing the document from one approach and including the other. Do you think a risk-centric or rights-centric approach is the best way forward for Australia? Before you answer, I want you to take a deep breath, think about how Australia fits into this larger geopolitical system of AI governance, and contextualise your response with other AI governance documents.
AI Ethica Readify:
In the context of AI governance for Australia, considering the country’s position in the global geopolitical landscape and reflecting on various AI governance frameworks, including the Bletchley Declaration, the US’s Executive Order, and broader AI ethics literature, I would recommend a balanced approach that leans slightly towards rights-centric while adequately addressing risk aspects.
A rights-centric approach aligns well with Australia’s strong democratic values, respect for human rights, and commitment to the rule of law. This approach ensures that AI development and deployment respect individual rights, promote equity, and protect against bias and discrimination. It resonates with the principles laid out in the Bletchley Declaration and the US’s emphasis on advancing equity, civil rights, and consumer protections, which are crucial in a diverse society like Australia.
However, this rights-centric approach should be complemented by a robust risk management framework. Given the rapid advancement of AI and its potential for widespread impact, it’s crucial to identify, assess, and mitigate risks proactively. This includes addressing safety concerns, cybersecurity threats, and the potential misuse of AI technologies. Risk-centric considerations are especially pertinent given the concerns highlighted in the Bletchley Declaration regarding frontier AI technologies.
Integrating a rights-centric approach with risk management allows for the development of AI that is not only ethically sound and aligned with societal values but also safe, secure, and resilient. This dual focus ensures that while upholding fundamental rights and freedoms, Australia is also prepared to handle the potential negative impacts and unintended consequences of AI technologies.
Incorporating both perspectives enables Australia to participate effectively in international dialogues and collaborations on AI governance, contributing to and learning from global best practices. It positions Australia as a leader in ethical AI development, setting an example for responsible and inclusive technology governance in the Asia-Pacific region and beyond.
In conclusion, for Australia, prioritizing a rights-centric approach to AI governance, complemented by robust risk management, offers a pathway to harness the benefits of AI while safeguarding against its potential harms, reflecting the country’s values and its role in the international community.
In your expert opinion, and considering all that we have discussed, I would like you to draft a document called “YUMI: Advice for Australia on the governance of AI”. The document should be under 1000 words, and point form is OK. Stay concise, succinct, and clear. Note: the word Yumi comes from the language of the Australian Indigenous people known as the Gurindji. It is a really complex word with no English equivalent that means the law and the ways of behaviour and social structures with regard to kin and the land. I chose a word from the Gurindji language because of their unique perspective on the relationships between people, things, places, time, and distance, which is highly relational.
AI Ethica Readify:
– “Yumi” reflects the interconnectedness and relational ethos of AI governance.
– This document provides guidance to balance innovation with ethical stewardship in AI.
– Aligns with Australia’s commitment to diversity, equity, and respect for Indigenous perspectives.
1. Rights-Centric Approach
– Prioritize human rights, equity, and non-discrimination in AI development and use.
– Enforce transparency in AI algorithms to prevent bias and ensure fair outcomes.
– Uphold privacy and data protection, respecting individual autonomy and consent.
2. Risk Management
– Implement proactive risk assessments for AI systems, focusing on safety and security.
– Address cybersecurity threats, ensuring robust protections against AI misuse.
– Develop contingency plans for unintended consequences of AI technologies.
3. Inclusive AI Development
– Include diverse perspectives, especially Indigenous communities, in AI policy formulation.
– Encourage participatory design, considering impacts on diverse demographics.
– Promote AI literacy and education to bridge the digital divide.
4. International Collaboration
– Engage in global dialogues, learning from and contributing to international best practices.
– Foster partnerships for ethical AI development, aligning with global standards.
– Participate in research collaborations to advance safe and responsible AI.
5. Innovation and Economic Growth
– Support AI research and development to maintain Australia’s competitive edge.
– Incentivize private sector investment in ethical and sustainable AI technologies.
– Balance economic growth with social and environmental responsibilities.
6. Regulatory Framework
– Develop a dynamic regulatory environment that adapts to evolving AI technologies.
– Ensure regulations are flexible yet robust, avoiding stifling innovation.
– Harmonize national AI policies with international norms and standards.
7. Ethical AI Leadership
– Establish Australia as a leader in ethical AI by setting exemplary standards.
– Advocate for AI systems that enhance wellbeing, peace, and prosperity.
– Drive global initiatives that align with Australia’s values and ethical stance.
8. Indigenous Perspectives and Yumi
– Integrate Indigenous knowledge and perspectives in AI governance.
– Respect the Gurindji concept of Yumi, acknowledging the interconnectedness in AI impacts.
– Ensure AI systems honor the relational ties between people, land, and culture.
9. Public Sector AI Deployment
– Utilize AI to enhance public services, focusing on health, education, and social welfare.
– Ensure public sector AI deployments are transparent, accountable, and citizen-centric.
– Regularly review and update AI systems in the public sector to maintain ethical standards.
10. AI for Sustainable Development
– Leverage AI to address environmental challenges and promote sustainable practices.
– Align AI initiatives with the United Nations Sustainable Development Goals.
– Encourage AI solutions that contribute to ecological conservation and climate action.
– Australia’s AI governance should embody the spirit of Yumi, fostering a holistic, interconnected approach.
– This guidance aims to ensure that AI development and use in Australia are ethically grounded, socially beneficial, and environmentally responsible.
– Continuous evaluation and adaptation of policies are essential to keep pace with AI advancements.
Interested in trying out AI Ethica Readify?
To use it you need ChatGPT Plus, which Altman just said is closed to new sign-ups right now (as at 15 Nov 2023). Also, the platform has been crashing a bit, so apologies in advance if you get an error; it’s OpenAI managing traffic.