
Presenter: Jeff Williams
In January 2024, ACMI launched an AI Ambassadors program with the goal of enhancing staff understanding through practical experimentation. Led by ACMI's Technology Team and primarily involving non-technical staff, the program focuses on demystifying machine learning, exploring generative AI fundamentals, debating ethics and usage, and providing assignments to expand on these topics through action. Jeff Williams discusses practical insights on how to approach knowledge-sharing and foster an environment of experimentation, enabling the integration of AI more effectively into organisations and removing the barriers to adoption.
Technology, language, history and creativity converged in Canberra for four days as cultural leaders gathered for the world's first in-depth exploration of the opportunities and challenges of AI for the cultural sector.
This transcript was generated by NFSA Bowerbird and may contain errors.
Good morning. Before I get started, I would like to acknowledge the traditional custodians of the lands where we are meeting, and I pay my respects to Elders past and present, and to the First Nations people from all lands who are here today. I'm Jeff Williams, ACMI's Head of Technology, and today I'm here to discuss ACMI's education and adoption program, which we're calling AI Ambassadors. The goal of this group is to train and educate the entire organization on how to use AI. During my talk today, I'll be drawing from ACMI's journey with AI, focusing on the initiative aimed at educating staff to demystify the technology and build a foundation that enables them to work more efficiently and ethically. I'll also outline the principles driving ACMI's AI initiatives. I'll be using the term AI a lot today, and as there are many types of AI and a long history of AI-related research and tools, I'll define it up front as modern, post-ChatGPT large language models. From yesterday, I guess we'd refer to that as post-Kraken. As most of you saw, from December 2022 to January 2023, the number of ChatGPT users increased from 1 million to 100 million. Around that same time, we saw the emergence of modern AI tools, with text-to-image tools like DALL-E, Stable Diffusion and Midjourney becoming widely available. It was very hard to ignore AI at ACMI from late 2022. And why would ACMI ignore AI? Internally, many view AI as part of ACMI's remit. Based in Melbourne, ACMI's remit includes collecting and exhibiting various forms of moving image media, including film, video games and social media, much of which in 2024 involves AI. AI is directly linked to ACMI's mission to engage with the latest forms of media. In 2023, we exhibited Memo Akten's Distributed Consciousness.
Memo's work was equal parts AI and blockchain, and leading up to that exhibition, following the mass popularization of AI, it became very clear that an organization-wide program to educate all staff on AI was necessary. So over the past year, ACMI has been running an initiative to enhance staff understanding of AI through education and practical experimentation, focused on removing the biggest barrier, which is understanding how AI works. So far, the AI journey at ACMI has been as much about people as technology, and AI divides people. To further complicate matters, how people feel about AI changes as they learn more about it and engage in debates about the ethics of particular use cases. Last year, ACMI used AI to create a 3D animation video, something we couldn't have produced in-house. Initially, the artist was excited about the experiment, but recently they admitted to me that they felt torn about using AI due to moral concerns. Let's watch the video, which stands out as the only AI-generated imagery in this deck. After discussing the artist's change in perspective, it became clear that their discomfort stemmed from a deeper understanding of AI, including how it works and issues like copyright, topics we've seen in the media, debated with colleagues, and explored through experimentation. So with all of ACMI's technology skills and experience, use of this technology comes down to ethics, and everyone must have a voice in that conversation. You can't rely solely on technologists to guide you through adoption of this technology; the more non-technologists involved, the better, I would argue, and that's certainly what we've seen at ACMI. And as people learn more about AI, especially its ethical implications, their perspectives shift, making it vital to involve diverse voices in this discussion.
To balance this shifting, ACMI has anchored its journey on education. Education is one of ACMI's four AI journey principles, and I'll discuss the other three shortly. At ACMI, we've adopted a three-tier approach to AI to engage staff at different levels across the organization. First, we have our AI Ambassadors, a fortnightly program where a diverse group of staff dive into AI discussion and hands-on experimentation. This helps deepen understanding and uncover new opportunities for AI at ACMI. The term AI Ambassadors was debated quite a bit, and we've landed on this: we're not ambassadors for the usage of AI, but ambassadors for the understanding of how AI works. Next, we have AI Lunch and Learns, quarterly sessions open to all staff and board members. It's a space where AI Ambassadors can share their insights and experience, expanding the conversation across the organization. The time commitment here is much less than the AI Ambassadors group, and it's really important to have different outlets so people can engage with these conversations at different levels of time commitment. And lastly, our machine learning knowledge base. This resource documents all ongoing AI experiments, 28 projects so far, giving everyone access to explore what's happening and contribute to the learning process. A big part of that knowledge base is explaining what these tools do and why. This three-tiered approach ensures that everyone has a chance to engage with AI in a way that fits their role and interests. Driving ACMI's three-tiered approach are four key principles: educate, experiment, debate, and deliberate. We've covered educate and experiment, so let's focus on debate and deliberate. Now, I use the word debate here; I think argue is probably a better word a lot of the time.
Debate is about tackling the hard conversations: job displacement, copyright concerns, data sovereignty, AI unpredictability, environmental impacts, and more. These are very difficult topics, and the discussion can get very heated, but it's very necessary. After debate comes deliberation: taking time to reflect and think critically about what we discussed before making decisions. These principles reflect ACMI's values, and when adopting AI, choose principles that align with your organization's values and ensure that all voices are heard in the process. The AI Ambassadors program is key to fostering continuous learning, experimentation, and debate across departments. With regular meetings, staff can explore AI applications in their roles. At ACMI, the group includes about 20 members, with typical attendance around a dozen. One important voice consistently represented is visitor experience, offering invaluable insight into how visitors perceive and respond to AI. When putting together this group, I understood we needed collections, we needed curatorial, we needed legal, but the unexpected result of having visitor experience in there, and really understanding our visitors' thoughts, conversations, and discussions, has been very valuable. While not all departments attend regularly, setting aside this time to focus on AI is crucial to shaping its use at ACMI. At a higher level, we focus on three core areas: how AI works, how ACMI should use it, and the challenges we face along the way. Listed up here are a handful of the topics. We also bring in guests; in February we had Eryk Salvaggio, who spoke this morning, come and talk to staff, which was very important and very valuable, and he led a wonderful workshop. We've covered a range of topics, from the mechanics of AI, like transformers and diffusion models, to ethical considerations around AI usage in exhibitions.
Initially, sessions were heavily structured with formal presentations, homework assignments, and product reviews, but this approach left little room for discussion and became a time burden for all involved, and it was clear we needed a shift. Now meetings are more interactive, focusing on specific areas while allowing plenty of time for open discussion, making the sessions more engaging. At ACMI, we found a rotating presenter approach works really well. We have three key presenters with different focuses. One stream is aligned on ethics and usage. Another is non-technical, covering demonstrations and experiments, working on low-code and no-code solutions and testing. And then we have technical, which dives into how AI works, explaining machine learning fundamentals. When we talk about how AI works, it's really important to show it and how it's used, and we're not afraid to pull up code and talk people through it, especially those without technical backgrounds. We've also simplified the structure by removing homework, and now we dedicate 15 to 30 minutes to open discussion. This has led to more balanced sessions, greater participation, and deeper conversations about AI. Today I want to focus on a session we had called I Am the Golden Gate Bridge. In this session we explored why LLMs like Claude and ChatGPT are often described as black boxes. We listened to a five-minute excerpt of a podcast covering a paper Anthropic had published, and then we had a structured discussion about the article. If you don't know about this research paper that Anthropic published, you can Google Golden Gate Claude. It's very fun: they identified features inside the neural network and figured out how to stimulate them, so that whenever you asked Golden Gate Claude anything, it would answer everything from the perspective of the Golden Gate Bridge.
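Anthropic's actual work used sparse autoencoders over real model activations, which is far beyond a slide. As a rough illustration only, here is a toy numpy sketch of the core idea of feature steering; the vectors and the `steer` helper are invented for this example and are not Anthropic's code or API.

```python
import numpy as np

# Toy illustration of feature steering, the idea behind Golden Gate Claude:
# a concept corresponds to a direction in the model's activation space, and
# adding a scaled copy of that direction biases outputs toward the concept.
# The vectors and the steer() helper here are invented for illustration.

rng = np.random.default_rng(0)
hidden = rng.normal(size=8)        # stand-in for one layer's activations
bridge_feature = np.zeros(8)
bridge_feature[3] = 1.0            # pretend this direction means "Golden Gate Bridge"

def steer(activations: np.ndarray, feature: np.ndarray, scale: float = 10.0) -> np.ndarray:
    """Push activations along a feature direction before the next layer runs."""
    return activations + scale * feature

steered = steer(hidden, bridge_feature)
# The steered activations now align far more strongly with the feature.
print(np.dot(steered, bridge_feature) - np.dot(hidden, bridge_feature))  # 10.0
```

In a real model this nudge happens at every token, which is why the steered Claude kept bringing every answer back to the bridge.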
During this session, we reviewed the fundamentals of how LLMs work, but this time we dove deeper into why, even though we understand the architecture and algorithms behind them, we still treat them as unpredictable. This session was important because it reminds us that while AI can be simplified as large statistical predictors, its scale and complexity make it a black box even to those who build the models. The unpredictability doesn't necessarily mean we can't trust the models or their outputs. I struggle with the word trust there. But it does mean we need better tools to interpret them. This also informs how we test AI tools; testing of AI tools is a great conversation and probably a wonderful presentation all on its own. Our AI Ambassadors group has made great progress so far. First, we've seen an increase in the understanding of how AI works across the group. AI Ambassadors are now using AI tools more regularly in their daily tasks, which shows a growing understanding of how to effectively use the technology. One of the most important developments is that the discussions around AI ethics and usage have become much more nuanced. This has allowed us to approach AI with more thoughtful and informed perspectives. In reflecting on the AI Ambassadors group, we've learned some valuable lessons. First, including diverse non-technical perspectives is essential. Consistency is very important as well. We've also found that time open for discussion is critical; I mentioned this earlier, but early on we rushed to get through the work and didn't have enough time for debates, and now we structure nearly 50% of the time around debates. Finally, time commitment is a big factor, and we've reduced the workload to make participation more manageable for everyone and remove that blocker. The AI Lunch and Learn series is another key part of ACMI's education strategy.
We hold these four times a year, and these sessions provide an informal space for staff and board members interested in AI but unable to commit to fortnightly Ambassadors meetings. With an average attendance of 41 staff per session, these meetings have become a great platform for AI Ambassadors to share their work and discuss AI topics with the wider team. We've covered topics from how AI works to ethical frameworks to experimental tool demonstrations. At a recent AI Lunch and Learn, we focused on a collection chatbot that ACMI developed in-house, explaining how it was built, with a live demonstration. This session followed two weeks of staff testing the chatbot internally, plus a public demo during an open house where visitors could use the tool. We logged all interactions to better understand its use and effectiveness. We explained embedding search, retrieval-augmented generation, and prompt engineering, the three techniques used in our chatbot to inform it about ACMI and our collection. Finally, we did a live demo, followed by an open discussion. This gave staff hands-on experience with how AI can be used to make our collection more accessible to visitors. The live demo was quite fun; we had people shouting out search terms, and it led to a pretty deep conversation about ICIP. In our AI Lunch and Learn series, we've learned a few key things. First, balancing time equally between presentation and discussion helps keep sessions interactive and engaging. Second, pairing explanations with live demonstrations makes the tech easier to understand. And finally, clear, concise explanations, showing the application code where necessary, help demystify AI, making it more accessible for everyone. The third part of our approach is our machine learning knowledge base. This is a shared resource at ACMI, allowing all staff to explore AI experiments and contribute to the learning process.
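The three techniques behind the collection chatbot just described can be sketched in a few lines. This is a toy, not ACMI's implementation: the word-hash "embedding", the sample records, and the helper names are all stand-ins for a real embedding model and collection database.

```python
import numpy as np

# Toy sketch of the chatbot pattern described above: embed collection
# records, retrieve the ones closest to the question (embedding search),
# and paste them into the prompt (retrieval-augmented generation plus
# prompt engineering). The "embedding" and records are illustrative only.

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Deterministic toy embedding: bucket each word by its character sum."""
    vec = np.zeros(dim)
    for word in text.lower().split():
        vec[sum(ord(c) for c in word) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

records = [
    "Distributed Consciousness by Memo Akten 2023 exhibition",
    "Early Australian film posters from the 1920s",
    "Video game history consoles of the 1980s",
]
record_vecs = [embed(r) for r in records]

def retrieve(question: str, k: int = 1) -> list[str]:
    """Embedding search: rank records by cosine similarity to the question."""
    q = embed(question)
    ranked = sorted(range(len(records)), key=lambda i: -float(q @ record_vecs[i]))
    return [records[i] for i in ranked[:k]]

def build_prompt(question: str) -> str:
    """Prompt engineering: ground the model's answer in retrieved context."""
    context = "\n".join(retrieve(question))
    return f"Answer using only this collection context:\n{context}\n\nQ: {question}"

print(build_prompt("Tell me about Memo Akten"))
```

A production version would swap the hash trick for a real embedding model and the list for a vector index, but the shape of the pipeline is the same.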
The machine learning knowledge base is a central repository that ensures everyone, from technical to non-technical staff, has access to ongoing projects, and it has been a valuable tool for sharing these experiments. What we've learned so far is that regular internal communication is critical to keep staff informed. Making the knowledge base accessible to everyone gives all staff the chance to explore, learn, and engage with the experiments, and that openness helps demystify AI. It's also important to explain why the experiments matter: when staff see how AI tools can be applied to their work, they're more likely to engage and contribute ideas. Overall, these programs and initiatives have been very rewarding. Some of the key takeaways: we've talked about the tiered education program, which is very important, allowing people to be involved at different levels of engagement. Promote hands-on experiments, encouraging staff to experiment with AI tools to build confidence and discover practical applications; if you don't have developers and can't build your own tools to test, you can still get hands-on experimentation as tools are released by other organizations. Facilitate ethical AI discussions, providing a safe space for staff to debate the ethical implications of AI and align its use. This is back to the arguing: don't be afraid to have the arguments, but make sure you listen to everybody. Engage multiple departments, involving staff from all areas of the organization; you might be surprised by some of those inputs. And build AI capacity, which is really what we're aiming at here: invest in AI education to equip staff with the skills for long-term AI integration, driving both efficiency and innovation. AI tools are advancing quickly, and the sooner you start educating your organization, the better prepared they'll be to understand and leverage AI for your specific needs. Thanks, everyone.
The National Film and Sound Archive of Australia acknowledges Australia’s Aboriginal and Torres Strait Islander peoples as the Traditional Custodians of the land on which we work and live and gives respect to their Elders both past and present.