
Panel Members: Kartini Ludwig, Eryk Salvaggio (Hosted by Meagan Loader)
Generative AI remains a divisive topic among artists. On one hand, the nature of the machine presents us with unique transformations and speed in execution. On the other, the mechanisms that enable these feats spark concern about copyright and the role of the artist in the 21st century. Join Meagan Loader (NFSA) in conversation with artists Kartini Ludwig (Kopi Su) and Eryk Salvaggio (metaLAB (at) Harvard University) as they talk creative critical engagement, datafication, ethics and the possibilities when art and sound-making meet AI.
Technology, language, history and creativity converged in Canberra for four days as cultural leaders gathered for the world's first in-depth exploration of the opportunities and challenges of AI for the cultural sector.
This transcript was generated by NFSA Bowerbird and may contain errors.
Thank you so much, and we'd also like to acknowledge that we meet and have the chance to think and dream together here on Ngunnawal and Ngambri country, and we pay our respects to Elders past and present and to any First Nations people in the room here this morning. And we're kicking off day two of Fantastic Futures, as you just heard, with two fantastic artists. Institutions have such a long history of learning from the creative mindset of artists, the curiosity and the risk-taking of artists and artistry. Art, ideas and new technology move so fast in the hands and the hearts and the heads of artists, really much faster than institutions are generally able to move. As we saw yesterday with Ben and Andrew's presentation about J.M. Coetzee's, I hope I said that right, generative poetry work in the late 60s, which I'm still thinking about this morning. Kartini and Eryk have been creating and experimenting with generative sound and image AI as artists, as researchers, as producers and professionals, years before most of us were asking Midjourney to generate an image of the Pope in a puffer. This morning we're looking at the creative critical use and deliberate misuse of AI systems by artists, and how GLAM practice can be influenced by the often different thinking and always forward-thinking work of artists and art. So, I wanted to start by asking both of you what brought you to this work. Kartini, I'll start with you. Yeah, thank you for having me. So, I really got started, it was probably around 2018, 2019. I made my start working with Google's Creative Lab at the time, working under Tea Uglow and Jodie Richards, who were part of that. And at that time, we were really working around experimenting. They had a format that was about experimenting with new technology that was coming through Google at the time, which was a real privilege, to get your hands on some pretty early technology.
But their formula within the Creative Lab was all around how we can use these tools to unlock new ideas, specifically around arts and culture, and working with artists to see how we can create assistive, I guess, products or ideas. And that ranged from anything, really. It could have been AR; we did some spatial audio projects. But we essentially landed on working on a series of projects called Machine Learning Tools for Artists. And the first of those tools was back in the GPT-2 days, which was pre-ChatGPT, and we created a series of prototypes called Machine Learning Tools for Writers, and following that we did one called Machine Learning Tools for Musicians. We got our hands on an early model at the time in the musical context called DDSP, which was a style transfer kind of model, which enabled you to transform, say, your voice into something like a violin, which, as you'll see as we go through some of the work of where I've landed, had a lot of influence over what I thought was possible going forward. So yeah, that's where it all started, and it has had a lot of influence over where we eventually founded Kopi Su in about 2021. Kopi Su Studio, which, as Ingrid was saying, is more of a creative digital studio. We've been around for a few years now, working with different clients and collaborators, with that continuous theme of working with the arts and culture sector. You know, we are a business, so we have been working on a lot of UI/UX prototypes and products. More recently, for example, we also have been working with Creative Australia on a tool called Think Digital, which is more of a tool to empower arts organisations to assess their digital capacity. So we have one arm of the business which is very much exploring those kind of, I guess, commercial products, but still working with arts and culture.
But when we get the opportunity to do what we really love, and are able to fund some of our own work or get funding, we do tend to explore how AI can be used as a tool to unlock creativity for artists. And if you just share the next slide, that's sort of where we've landed now. This is my sweet little timeline of all those bits and pieces, and some of the people we're currently working with. And now we're working on something called Koup Music, which is all about creating, controlling and customising your own AI music for artists. Thank you. Eryk, how about you? Yeah, so I've been an artist for about 25 years now. I may need to update that. But I began working as an internet artist. And this was back in the day where the internet sort of just started happening. And it was really, for me, about getting into the code, figuring out what the browser was doing in the sense of how it was delivering information, and how we could get under its skin and kind of make it do things it really wasn't intended to do. And so these days, a lot of this might look a bit trolly or obnoxious. But at the time, it was kind of figuring out a medium that was happening in real time, figuring out what the rules were, making our own rules, and then breaking those rules. And at the same time, thinking critically about the technology, the sort of stories of the technology of the internet: what was going to happen, what was going to change, and our relationship to those changes. From there, this connection between the practice of art making and critical thinking about technology became really infused. So jump ahead to about 2018 or so, and we start seeing these data sets being collected and being used to produce these types of artworks, visual artworks, text. And I'm really interested in this, because a lot of this stuff is coming from the internet. And I'm thinking, this is net art, but it was called AI.
And I began training, or sort of superficially training, some models. A lot of engineers would say you didn't actually train a model. But I began fine-tuning models on my own collections, my own archives, my own data sets, and trying to figure out what exactly was happening with this technology. How is this technology steering us in certain directions, or away from certain possibilities? And so that critical mindset that came into net art sort of seeped into this approach to AI. And I found it really inspiring, because there are all kinds of different ways of getting into these systems. There are so many different types of systems. And there's also a really predominant kind of thought, a kind of imagination around AI, that I find needs some standing up to, needs some challenging. And so I see myself as this sort of artist-activist, working as a kind of counterbalance to these mythologies of artificial intelligence that come from Silicon Valley and, you know, the usual suspects. So a lot of my work is quite critical, quite experimental, but hopefully thoughtful. I'd love to go a bit deeper on a couple of your recent works, Eryk. Why don't we stay with you, because of you... Because of You. Segue. Do you want to talk us through this award-winning piece of work that you've created this year? Yeah, so this is a piece that was created in collaboration with Dr. Avijit Ghosh, who's at Hugging Face. And we started talking about this at South by Southwest Sydney. We did a panel on AI and art in, I think it was, 2021. And we were interested in the story of Henrietta Lacks. I think the video will tell a bit of that story, but just to explain what you're seeing, and then we'll let the narrative take on the role of telling that story: it's an AI-generated video of a woman named Henrietta Lacks. You'll hear more about her in a minute. And I'm interested in working with noise.
I'm interested in working against the sheen, the realism, of artificial intelligence and its generated imagery. I really want to steer into something that's unique to the machine, as a way of saying, this is a new medium, right? This doesn't have to be a reference to more traditional mediums. There are ways of exploring the aesthetic capacities of these tools, but also of challenging what exactly we are looking at. And so what we're seeing is not an image of Henrietta Lacks, I'll put it that way. And maybe we can look at the video and we can pick it up from there. Yeah, sure. Sam, do you mind please playing that? Thank you. In 1951, Henrietta Lacks went to a medical clinic for an examination. At that examination, her cells were taken for research purposes without her knowledge. The night after the cells were taken, Henrietta Lacks went dancing, but cancer ended her life just three months later. However, her cells continue to live, mutating and reproducing far beyond the expectation of human cells. The subject of medical fascination, these cells lost their connection to the name of the woman from whose body they were extracted. The cells became biological data rather than a piece of someone's body. This portrait of Henrietta Lacks is made from tiny pieces of data. It isn't a photograph of Henrietta Lacks. It is a portrait made of portraits of Henrietta Lacks. Each piece of these images is stripped down to the base cell: pixels, reassembled in infinity. This rearrangement creates new assemblages of Henrietta Lacks that do not reference her actual image, her actual body. It is an extractive portrait. Act II. These are images of Henrietta Lacks' cells alongside an AI-generated image of those cells. For centuries, surveillance has been tied to physical bodies. Today, it is tied to bodies as well as to the abstraction of bodies. Much of the information in an AI-generated image dataset is taken without our knowledge.
Not from bodies, but from memories, communication, archives. So that's a piece called Because of You. The track that you hear, the music track underneath it, is a slowed-down version of, what is it, Tab, I wrote it down, Tab Smith's Because of You, so the title comes from this song. And the idea of it was to think about the ways that AI, as we talk about AI, whatever that is, and we can get into that in a minute, where the sources of the information are coming from. It's often taken without our knowledge, without our consent. And really, fundamentally, it's also taken away and removed from a context of who we are when we share images online, when we tell stories online. And so we started talking about this to make this piece, and we wanted to draw that connection between this abstraction that happens when things become data, when things become this very specific, narrow slice of information. And in the piece, the narration is done by Avijit Ghosh, my collaborator. What we actually did is fed about 16 seconds of him talking into an AI vocal mimicker, and then typed out the script. And it's narrated by the AI. But what actually happens, and we get to this by the end of the video, is that his accent is completely removed, because he's drowned out in the sea of North American training data. And so this accent, an accent very specific to Bengali culture, is removed. And so this piece of his identity is removed. And so we're trying to play with that, play is kind of a light term, but we're trying to explore that space of what's lost in this datafication and abstraction. Can I just ask one quick question on how close, I don't know who Henrietta Lacks is, but how close is the actual imagery of Henrietta to the image you fed it? We didn't feed it any image, we just prompted Henrietta Lacks.
And so as a result of this, we got this woman who looks, you know, we've done some things to it, but the image that resulted was essentially a black woman in a business suit from 2024. So it knew certain things about her, right? But the only reason anyone really thinks that this is Henrietta Lacks is because there are not that many photos of Henrietta Lacks, and essentially because we've told you that this is Henrietta Lacks. And this is, I think, an important piece of that relationship with artificial intelligence: who's structuring the narratives that emerge from it? In this case, it's me. Like, I am manipulating the audience. And at several points, I'm revealing that manipulation. But that manipulation is occurring through the AI system, right? It's really the AI that is manipulating everyone. I'm just sort of using it and then revealing that, pulling back the curtain on it over the course of the video. The idea of abstraction and absence through datafication is something you've gone much deeper on this year as part of your fellowship, your Flickr Foundation Research Fellowship. Can you talk us through some of the work that you've done this year, then? Yeah, so I began working with the Flickr Foundation and thinking about something that I think about a lot, which is that when we talk about AI, what we're really talking about is an infrastructure. There is an infrastructure of AI. It is training data. It is GPUs. It is the maintenance of that data. It is water. It is power. There are training data sets. There are calibration data sets, right? You talk about benchmarks: what are you benchmarking against, right? So we began looking at Flickr and its role in this sort of architecture. But I also began contrasting that with what I call infrastructures of memory. Because one of the things that's really been surfaced for me in this work is that an archive is not necessarily preserving memory. It's a site where memory can be activated. Memory is...
It's re-inscribed. We have to remember, right? A memory is not stored in a box. That's a computer metaphor. Memory gets rewritten every time we expose ourselves to a story or a place or a photograph. And the archives are keepers of that practice of memory. And when you think about this infrastructure of memory, and you think about this infrastructure of AI, they're actually wildly incompatible. And I think that there's something really important in thinking about the translation of an archive into a data set. There's something that gets lost when we do that translation. It's like plugging, as I have learned, a 220-volt plug into a 110-volt converter, right? There are distortions that occur. There's just a problem with doing that. And we also tend, I think, to reconcile that by relying on metaphors that refer to the infrastructure of memory and the infrastructure of AI in this kind of abstracted sense, that AI is learning, AI is seeing, right? We use these human types of metaphors to make this reconciliation of these infrastructures compatible. But I think it's really important not to diminish the human memory in order to fit that structure. And if we do say that all of this stuff can just be lumped into a dataset, we see strange things start to emerge. And in this fellowship with Flickr, one of the things I started exploring was how this emerges, how what we might call ghosts in the dataset start to come through when we generate an image. Weirdly, the lights flickered before, and it reminded me that in the essays you liken the prompt in a search to a seance. What do you mean by that? I think it's this reactivation of memory that oftentimes is from this collection of information that no one's looked at, no one has evaluated, no one has curated. And as a result of that, a lot of problematic content is in that data. A lot of stuff gets mis-contextualized, even when archives do their best.
Maybe now is a good opportunity to show one example of the work with the Flickr Foundation. There's a technology called stereo view images. It came out in the 1800s, but it had its peak around the turn of the century, into the early 1900s, in the US. The idea is it's about a foot away from your face, and it's two images that are nearly identical; they overlap and create this illusion of depth. And this type of media was really popular, like I said, in the early 1900s, one of the most widely distributed ways of getting an image. But it was also widely used to disseminate images of what the U.S. was up to in those days. And one of those things was the colonization of the Philippines. And so, what you see here is actually a prompted image from Midjourney, just for the words stereo view. And what we're seeing is this iconography, these images, these references to the colonization of the Philippines. It's that embedded. Maybe the next slide. So here's another one, right? And we can keep generating, just from the words stereo view, and see these ghosts of colonization that are otherwise not really front of mind, to be blunt about it, in the US in 2024, emerging simply through conjuring up this image of a media format. Maybe the next slide. Here you see an image for stereo view where even the form of the media is not there, right? That side-by-side image is gone. What is there, however, is this imagery of a girls' school. Next slide. So this is an actual image of a girls' school in a stereo view image. Now, the reason that this happens, and I want to be very careful, I'm not blaming the institution, but the U.S. Library of Congress has hosted a large collection of stereo view images that happen to reflect this particular period of time. There's nothing wrong with doing that as an institution. But there is something strange that happens when it enters into the system.
The Library of Congress, again, to be very clear, had no responsibility for this information coming into a data set. It is the collectors of the data set who kind of ignored that context. And as a result, when we ask for these images, we get this reference to this colonization, which is not the thing we would associate with a stereogram, a stereo view image. And so I think this speaks powerfully to this idea of the severance of the infrastructure of memory, its isolation from the infrastructures of AI, and the care that needs to go into how we reconcile that, how we translate from one infrastructure to the other. Is that basically because of the way that the institution would have labeled that data at the time, maybe a long time ago, so that it would have just been labeled stereo view, circa 1900s? So it's sorting this vast collection of stereo view images, all of which have the words stereo view in them. And the context of the labeling may vary widely, but the words stereo view are going to be very common. And so every single one of these images is going to be imprinted in the data set with an association to this idea of stereo view. And it also speaks to the very simple fact of the predominance of this media format to tell that story, and to tell it in a very glorified American colonizer's perspective, right? Now, in the archive, this is handled responsibly. But you strip it out of the archive, and you recontextualize it, and all of that nuance dissolves, literally, into noise. That's how these things are trained: they dissolve these images into noise, and they associate the degradation of the image with the labels. Which is also part of why the Henrietta Lacks piece is so noisy. So that's how it's happening. It is very much a technical process as well as a sort of philosophical thought experiment. It is both. And we're going to talk more about noise in a second.
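The training process described here, images dissolving into noise while staying paired with their labels, can be sketched as a toy forward-diffusion step. This is an illustrative sketch only: the array, caption and noise schedule below are invented, not drawn from any real training pipeline.

```python
import numpy as np

def noise_image(image, t, num_steps=100, rng=None):
    """Toy forward-diffusion step: blend an image with Gaussian noise.
    At t=0 the image is untouched; at t=num_steps it is almost pure noise."""
    rng = np.random.default_rng(rng)
    alpha = 1.0 - t / num_steps          # fraction of the image that survives
    noise = rng.standard_normal(image.shape)
    return np.sqrt(alpha) * image + np.sqrt(1 - alpha) * noise

# A stand-in "scanned photograph" and its archival label become training
# material: the model sees the degraded image alongside the caption at
# every noise level, so the label ends up tied to the degradation itself.
image = np.ones((8, 8))                  # hypothetical photo, for illustration
caption = "stereo view, circa 1900"      # hypothetical archival label
training_pairs = [(noise_image(image, t, rng=0), caption, t)
                  for t in (10, 50, 90)]
```

Each (noised image, label, timestep) triple is what a diffusion model learns to reverse, which is the technical sense in which a word like "stereo view" becomes associated with noise.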
We're going to jump into your work, Kartini, but I'm really interested in that idea of preserving the infrastructure of memory and emotion, and the role and the obligation that archivists have to do that, to mitigate against the risk of automated decision-making. How do you do that? One of the things that I have come to realize is that so much of generative AI, whether we're talking about music, video or images, is a digital humanities project run in reverse. All of the things that we have used to label, caption, sort and categorize information into an archive so it could be found, referenced, made accessible, oftentimes with the best of intentions, oftentimes to be accessible to communities that need it, are run backwards, and those categories, those labels you've used, are now prompts. And so this has to change the way that institutions think about what goes into those labels, what goes into those prompts. And I am really nervous, actually, that it makes organizations frightened of sharing. That, I think, would ultimately be the tragedy of this AI data grab: if institutions start saying, well, this is an excuse to lock our collections up and make them less accessible. I hope that that's not the direction we go. But we do have to think about this common word of curation, and care, right? This Latin root, which I'll mispronounce, but I believe it's curare. Curation and care come from the same root. And so this care really just has to be adapted to this new context, this new technology that we are suddenly surrounded by, whether we like it or not. Thank you. Thank you. Kartini, over to you. Tell us about your recent work, Sonic Mutations, at the Sydney Opera House. Yeah. So, Sonic Mutations was a project that came about with the Sydney Opera House. I met with Stu Buchanan, who's the head of digital and screen there, maybe two years ago now.
And we were talking a little bit about my history and experience with the machine learning tools for musicians, and how that kept coming back as a theme for me, and what was happening with the rise of ChatGPT at the time. And we were really interested in where the music space was at the time. And he really helped to shape a bit of a provocation that was aligned with, I think it was, the 50th anniversary of the Sydney Opera House itself. And so we were talking a lot about what the future of music sounds like. And we drilled down into this idea of what it would look like, or sound like, if we created a tool that enabled artists to perform live with AI, broadly speaking. But even since then, a lot of our process has been interrogating what AI means in any context, in imagery, in music, in archives, as I'm very cautious these days that the term AI-powered is thrown around very loosely. I've just been at South by Southwest Sydney for the last week, and the entire program is very AI-powered focused. So we ended up collaborating with some artists, Alexis Weaver and Rowan Savage, who also goes by the name Salvage with three Ls. And we started exploring that provocation of what we can do. And so initially we worked with them on an early iteration of the tool itself. And we found an open source tool called Riffusion, which is actually a version of Stable Diffusion, sort of the musical offspring of that image-based model. And it basically has the ability to train on spectrograms, take spectrograms, and turn them into audio. And that was the only model we could find at the time that was able to do quite a quick turnaround and serve the purpose of live audio output, really. What was the design process with the artists?
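The spectrogram round trip that lets an image model like Riffusion produce sound can be sketched with a plain short-time Fourier transform. A minimal sketch, with arbitrary window and hop sizes; Riffusion itself works on mel spectrograms and recovers phase with Griffin-Lim, whereas here the original phase is kept so the round trip is exact.

```python
import numpy as np

def stft(x, win=256, hop=64):
    """Short-time Fourier transform: slice the audio into overlapping
    windows and FFT each one. The magnitudes of this 2-D array are the
    spectrogram "image" a diffusion model can operate on."""
    frames = [x[i:i + win] * np.hanning(win)
              for i in range(0, len(x) - win, hop)]
    return np.fft.rfft(np.array(frames), axis=1)

def istft(spec, win=256, hop=64, length=None):
    """Weighted overlap-add inverse: turn a (possibly edited)
    spectrogram back into a waveform."""
    frames = np.fft.irfft(spec, n=win, axis=1)
    length = length or (len(frames) - 1) * hop + win
    out = np.zeros(length)
    norm = np.zeros(length)
    w = np.hanning(win)
    for k, frame in enumerate(frames):
        out[k * hop:k * hop + win] += frame * w
        norm[k * hop:k * hop + win] += w * w
    return out / np.maximum(norm, 1e-8)

# Round trip: a 440 Hz tone in, spectrogram "image" out, audio back.
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
spec = stft(tone)                      # the 2-D array a model would "paint"
recovered = istft(spec, length=len(tone))
```

In a Riffusion-style system the model regenerates or remixes the spectrogram between these two calls, and the inverse transform is what turns the resulting image back into playable audio.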
The design process was always, and this is something that we've really stuck to since being at Google's Creative Lab, about workshopping and building capacity and understanding around whatever technology it is. In this particular case, we worked with the Sydney Opera House; we did a workshop with Alexis and Rowan, some people from the marketing team, and some people from the digital team, and we spent a whole day together exploring that context of what AI is in this context. So we also asked those questions, really tried to break it down: what is data in this context? What does sonic data really mean? Is it audio? Is it your catalog? Is it your structural patterns and your genres? Is it your listening habits? Things like that. So we spent a day doing that, and then our next step was an iterative ideation process. Really, it took us a few months to go from what we call sparks, where we capture all our different ideas in just a slide each, and then whittled it down into what became Sonic Mutations, which was really a generative live remix composition and performance piece using this tool. Yeah. Amazing. Working with new technology isn't a new thing for musicians, both in the creation of works but also the delivery of music. But there's been such a mixed response from the music industry to the introduction of AI to the industry, from artists across Australia and the US as well. I think recently APRA AMCOS, the peak representative body for musicians, writers and publishers in Australia, released a paper signed by many high-profile artists who were looking for more regulation of AI in the industry. They're scared about loss of income, loss of IP, misuse of their likeness. And the same in the States; there were really high-profile artists there as well. Billie Eilish, Stevie Wonder.
The estate of Frank Sinatra, all signed letters, in a bit of fear, actually. But at the same time, you've got other artists who are all in, like Grimes, who's given up all her stems and her tracks and said, go for it, if you make a hit, let's go halfsies. And YACHT, and of course Brian Eno, who's all in as well and making some really interesting new sounds. What was the experience of the artists that you worked with on the performance and on doing this work? What was their perspective going in and coming out? I'm so glad you mentioned APRA, because I was going to mention that as well. There's a really interesting recent AI and music report by APRA; it's one of the biggest studies from Australia and New Zealand with the music industry. I think 87% of participants, specifically from the music industry, said that they were concerned about how AI would impact the way that they would make a living from their work. But there was also about 60-something percent that said that they were kind of interested and curious and hopeful as well. It was interesting. Both Alexis and Rowan, when we first met, didn't have any particular preconceptions about AI. They weren't, you know, super AI-curious people necessarily, but they were also super open-minded about it. And I know for Rowan, for example, he's a First Nations artist and is very interested in Indigenous futurism in his practice. And then Alexis, her music is very much about electro-acoustic explorations. So they were quite fitting artists to dive in with. And I think what really helped was doing this thing of building capacity and understanding, and agreeing on what we thought data was in this context. But it's been quite positive. They're very optimistic, cautiously optimistic, I think, about AI, and very much on board with where we've come to now. Alexis, out of the process, had a model which she calls AI Alexis.
She tried to make it a bit of a thing. And then Rowan, in his particular piece, the way that we worked with the tool we designed, by fine-tuning the model on particular prompts for Rowan specifically, he fine-tuned a prompt on field recordings of crows from his country. And in his performance, he progressively changes the mix of the input to evolve his spoken word poetry into that of a crow. We called it going full crow by the end, which correlated to a visual aesthetic that was up on the screen as part of the performance. But yes, the slide here is a bit of a behind-the-scenes of how both artists used the tool and the interface that we designed in that live context. Alexis, for example, would upload some samples, and then we had this slider on the tool called denoising, which is really about how close the remix that comes out stays to the live input versus the prompt. It would generate that and then throw it into their DAW, their digital audio workstation, like Ableton or Logic, and then they'd play that into a composition. Whereas, as I was saying with Rowan, it was about spoken word poetry and evolving his voice into that of a crow. Yeah. Eryk, I hope you take going full crow back to New York with you. Yes, that's a new favourite phrase. I mean, how can you inspire artists and creators to embrace the opportunity? Well, currently, after the project Sonic Mutations wrapped up, we thought maybe we could really do something with this prototype now that we've built it. We've really leveraged that to start thinking about what Sonic Mutations could look like, I guess, as a community and a platform and a product going forward. So we've been working on what is now called Koup Music, very much inspired, I was watching a lot of Catherine the Great on Stan at the time, and I was very inspired by the female coup d'état in that context.
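The denoising slider described here maps onto the strength parameter in img2img-style diffusion: how far the input is pushed toward noise before the model redraws it. A toy sketch, with invented names and schedule, and the actual model call stubbed out:

```python
import numpy as np

def remix(input_spec, denoising, rng=None):
    """Toy img2img-style remix. `denoising` in [0, 1] sets how far the
    live input's spectrogram is pushed toward noise before a diffusion
    model would redraw it: low values stay close to the performer's
    input, high values follow the text prompt almost from scratch."""
    rng = np.random.default_rng(rng)
    t = denoising                        # fraction of the noise schedule applied
    noised = (np.sqrt(1 - t) * input_spec
              + np.sqrt(t) * rng.standard_normal(input_spec.shape))
    # A real system would now run the model's reverse diffusion steps,
    # conditioned on the prompt, starting from `noised`. Stubbed out here.
    return noised

live = np.ones((16, 16))                     # stand-in for a live spectrogram
subtle = remix(live, denoising=0.1, rng=0)   # mostly the live input
radical = remix(live, denoising=0.9, rng=0)  # mostly prompt-driven
```

The same seed is used for both calls so the only difference is the slider: the low setting leaves the input largely intact, while the high setting buries it in noise for the model to reinterpret.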
And I guess just the history, in the music industry and I guess the arts, of unfair compensation over time. And particularly in the music industry, there is a bit of a problem with sample overuse, which a lot of music libraries notoriously have. So we've found a way to use this tool, I think, for good, and we're hoping to inspire artists that there is a lot of power in what their sonic data is. And especially in this transformative era, how can we use this tool and interface to inspire artists to fine-tune AI music models that might be ambiguously trained, although there are fairly trained models on the rise at the moment, but for the ones that we can get our hands on currently, how can we use those, and then use their data in a way to shape different outputs, basically. I have a little demo video here, actually, of what the current tool and prototype does. We're about to release a beta. If you want to pull that up, if it's possible. I think it was on slide 7. But basically it allows you to give it a live input, where you might sing a little vocal melody in, and then it will remix it with anything from a prompt like violins, or metal music, or another artist's fine-tuned prompt. No audio. Maybe some audio. Oh yeah. I just realized this particular video misses the live recording in it, but the input is me singing. And then there's a real good connection there between those outputs. And I've had to do that a lot in the last few days, demoing the tool at South By as well, because most people are not comfortable singing into a microphone when they're put on the spot. Going full crow. Yes. So, what can the GLAM sector learn from artists working at the cutting edge, as you are? The bleeding edge, let's go dangerous, the bleeding edge of these new technologies. Do you want to go, Eryk?
I think one of the great things about artists is the ability to think about their relationship to a tool differently from the way the tool makers oftentimes design it. And that's why it's really quite nice to see artists making tools. But from the GLAM sector perspective, it could be really refreshing, and I get this a lot, so maybe it's a humble brag, to come into a position and say, okay, you've been given these metaphors, you're understanding this technology through this set of metaphors that Silicon Valley is talking to you about, that this is artificial intelligence, right, that it's learning, it's seeing, it's doing all this work for you, but actually it's very different from that. There are a lot of different ways of looking at these technologies and how they work, and of explaining them to ourselves in ways that actually reveal potentials and possibilities, instead of what I think obscures them, which is leaning on this idea that they are somehow people, or that they have a right to learn the way a child does. Those, I think, are actually not helpful metaphors. And I think that the more metaphors we have, the more diverse those metaphors are, the better. And one way of getting to a greater diversity of metaphor and meaning-making in these machines and these tools is to bring artists in who see it differently and tell a different story about the technology. I agree. And particularly with Sonic Mutations, it was such a privilege. One of the best things I think could have ever happened to us was what Stu Buchanan basically said to us quite early on: that it's okay if this doesn't work perfectly. And I think that's so important to the process, having a bit of freedom in the space to experiment, and for it not to be perfect. And maybe it's just around piloting. Piloting concepts, and working with artists in collaboration to explore those ideas, and creating the space for that.
And be a bit brave, really, to do that.

What's fantastic about the future for both of you? What excites you right now? What's inspiring?

I've been a real fan of the stuff that Holly Herndon and Mat Dryhurst are doing. This includes a project to create a voluntary collection for artist data. So if you're an artist and you want to share your work online, you can build it as a dataset and sell it to someone who wants to train a model. As opposed to putting it on your website and having it given away to somebody who's going to build a model anyway, this allows you the opportunity to say: I am selling this data, this is my data, attribute it if you use it in your model, and also pay me. Because whenever you upload images, photographs, whatever, to a website or a social media site, you are building a dataset, and that is unpaid labor. So they are creating a platform where you can share this work with that knowledge and do what you want with it.

In the music space, they're also doing a really interesting project in the UK at the moment where they're recording church choirs and trying to figure out, okay, how do we license this? What are the rules around how we use this data? It's literally town by town, right, choirs of local singers, not necessarily professional musicians, just everyday people, and they're really going through the motions of trying to figure out how to license this dataset. And I think that's really powerful: asking those questions, making really cool art, but also using that art as a reason to develop a framework for thinking about these questions into the future. So that's inspiring.

Yeah, I would very much agree. What we're exploring at the moment with Qoom Music is very much the idea of: how can we license our data? How can we inspire people to create new revenue streams and new business models in the music industry, specifically around datasets?
And just the possibilities, I suppose, of how we might curate, or help shape as communities or little sub-communities, different kinds of models in different ways. I think that's a really interesting space. There's a really interesting thing that I think everyone is slowly coming up to speed with: the idea that all these models and all these datasets are reflecting our biases back to us. Perhaps that can be an opportunity to not repeat the negative histories. There's some sort of positive spin on that, where I'm excited, or feeling hopeful, that it helps us look to the future in a positive way.

Brilliant, thank you. I think we're almost at time, so please join me in thanking Eryk and Kartini this morning. Thank you.
The National Film and Sound Archive of Australia acknowledges Australia’s Aboriginal and Torres Strait Islander peoples as the Traditional Custodians of the land on which we work and live and gives respect to their Elders both past and present.