This sneak peek of some of my doctoral work explains the conceptual ideas behind one of the chapters of my PhD thesis. The work looks at how bias can be embedded in evaluations of generative AI systems. I gave a prototype of this talk at a “Diversity and Inclusion in AI” symposium at CSIRO’s Data61 this week (organised by Prof Didar Zowghi, Muneera Bano and their team). I received some good feedback on the day, so I decided to share the slides here.
What makes this different from my academic writing is that I have tried to make some difficult concepts more accessible by using Dogs & Cats and Cat People & Dog People! My reasoning for using cats and dogs was to lessen the cognitive load on the viewer, so they could focus on the underlying concepts. The hope is that people can then take these tools and apply them to human applications.
The slides are designed to accompany me when I’m speaking, so I hope they still convey some decent meaning as standalone images. I have tried to squeeze some complex topics into just a few slides with images, and as a result I have missed some nuances of the issues. If you think there is something glaring that should be included, please do share it with me and I can consider an update for future talks.
Want more insight into this topic? Invite me to speak to your team or organization! Contact me at bec@EthicsGenAI.com or use the form below.
I hope you enjoy these slides. Please remember to cite me if you use them or if they inspire you in your work.
(I’m a dog person, but I have plenty of cat people friends 😉)