Impressions after the event “Conversations About the Use of AI in Art Practice”
April 15, 2022
1 to 2:30 pm
Online

Written by Gizem Oktay

experimenta.l. Automata, the name coined for the collaboration between two ATEC research labs, experimenta.l. and Creative Automata, held its first public event on the use of AI-based techniques in art practice. The event took place on April 15, 2022, and brought together a diverse set of presenters and participants in a lively discussion about state-of-the-art AI tools and how they are used in artistic spaces.

Slide from Marcelo Rocha’s presentation

The group of presenters comprised four students, along with the directors of the research labs, Drs. Paul Fishwick and Christine Veras. The first to present was Marcelo Rocha, an undergraduate Animation major, with a presentation titled “Automated Motion Paintings and Animation Using Ebsynth.” The program introduced in his presentation is an interface for transferring the style of a static image onto a moving image. Marcelo showed captivating examples from his Capstone animation project, “A Decree From the Stars,” demonstrating how Ebsynth was used for style transfer.

Ebsynth was also part of the workshops given by research assistant Gizem Oktay, which showed students at the Experimental Animation Lab how to use both AI-based and non-AI-based tools. These included Ebsynth and RunwayML, an interface that allows its users to train neural networks without needing coding knowledge. More information about these workshops can be found here.

Slide from Nathan Schoeck’s presentation

The second presentation, titled “AI’s Perception in Art,” was given by Nathan Schoeck, a first-year Applied Cognition & Neuroscience MS student, and offered a deep dive into two models, CLIP and VQGAN, that he used for his artworks. By comparing the way human memory works to the way a neural network operates, Nathan drew connections between these two ‘prediction machines’: the human brain and the artificial neural network.
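For readers unfamiliar with CLIP, the snippet below is a minimal sketch of what the model does: it scores how well each of several text prompts describes an image. It assumes the Hugging Face transformers implementation of CLIP and a hypothetical image file, and is meant as an illustration rather than the pipeline Nathan used.

```python
# Minimal sketch: ask CLIP how well each text prompt describes an image.
# Assumes the Hugging Face `transformers` CLIP implementation; "artwork.png" is a placeholder.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("artwork.png").convert("RGB")  # hypothetical image file
prompts = ["a surreal motion painting", "a photograph of a city street"]

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Higher scores mean CLIP considers the prompt a better description of the image.
probs = outputs.logits_per_image.softmax(dim=-1)
for prompt, p in zip(prompts, probs[0]):
    print(f"{prompt}: {p.item():.2f}")
```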

Still from Jiatong Yao’s presentation

The third presentation of the event, titled “NeRFs Driven Art Practice,” was given by Jiatong (Tong) Yao, a first-year Computer Science MS student. Tong’s topic was one of the latest neural network models, Neural Radiance Fields (NeRF), which can create 3D objects from a given text prompt. Framing the talk around what the model offered her art practice, Tong described how it works, along with further possibilities for 3D object generation and how such models can be used in real time with body-tracking technologies.
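As a rough intuition for how a NeRF works: at its core it is a small neural network that maps a 3D position and viewing direction to a color and a density, and images are rendered by integrating those values along camera rays. The sketch below shows only that core mapping in PyTorch, with a simplified positional encoding; it leaves out ray marching, sampling, and training, and it is not the specific model Tong presented.

```python
# Illustrative sketch of the core NeRF mapping: a 3D point plus viewing direction
# goes in, a color and a density come out. Real pipelines add ray marching and training.
import torch
import torch.nn as nn

def positional_encoding(x, num_freqs=6):
    """Map coordinates to sines/cosines at several frequencies (helps the MLP learn detail)."""
    feats = [x]
    for i in range(num_freqs):
        feats.append(torch.sin((2 ** i) * x))
        feats.append(torch.cos((2 ** i) * x))
    return torch.cat(feats, dim=-1)

class TinyNeRF(nn.Module):
    def __init__(self, num_freqs=6, hidden=128):
        super().__init__()
        enc_dim = 3 * (1 + 2 * num_freqs)          # size of an encoded 3D vector
        self.backbone = nn.Sequential(
            nn.Linear(enc_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density_head = nn.Linear(hidden, 1)   # how "solid" the point is
        self.color_head = nn.Sequential(           # view-dependent color
            nn.Linear(hidden + enc_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, xyz, view_dir):
        h = self.backbone(positional_encoding(xyz))
        density = torch.relu(self.density_head(h))
        color = self.color_head(torch.cat([h, positional_encoding(view_dir)], dim=-1))
        return color, density

# Query a small batch of random points and viewing directions.
model = TinyNeRF()
rgb, sigma = model(torch.rand(8, 3), torch.rand(8, 3))
print(rgb.shape, sigma.shape)  # torch.Size([8, 3]) torch.Size([8, 1])
```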

Still from Gizem Oktay’s presentation

The last presentation was given by Gizem Oktay, a research assistant to Dr. Veras and part of the experimenta.l. Automata collaboration since its beginning. Inspired by her year-long research project Corporeal Crossings, Gizem focused on two neural network models: CLIP, which links text and images, and StyleGAN2, a generative image model. The models were used to create hybrid bodies composed of human and animal parts. Her presentation included still images and animations made with these models, as well as interactive examples of how the models’ visual outputs could be activated by audience participation.
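A common recipe behind this kind of text-guided image making is to optimize a generator’s latent vector so that CLIP rates the generated image as a better match for a text prompt. The sketch below illustrates that loop with a tiny stand-in generator in place of a pretrained StyleGAN2 (whose weights require the original repository); the prompt and the stand-in network are assumptions for illustration, not Gizem’s actual setup.

```python
# Illustrative sketch of CLIP-guided latent optimization. A tiny stand-in generator
# replaces a real pretrained StyleGAN2 here; the optimization loop is the point.
import torch
import torch.nn as nn
from transformers import CLIPModel, CLIPProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device).eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Stand-in generator: latent vector -> 224x224 RGB image with values in [0, 1].
generator = nn.Sequential(
    nn.Linear(64, 3 * 224 * 224), nn.Sigmoid(), nn.Unflatten(1, (3, 224, 224))
).to(device)

prompt = "a hybrid creature, part human and part bird"  # hypothetical prompt
text_inputs = processor(text=[prompt], return_tensors="pt", padding=True).to(device)
with torch.no_grad():
    text_emb = clip.get_text_features(**text_inputs)
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)

z = torch.randn(1, 64, device=device, requires_grad=True)
optimizer = torch.optim.Adam([z], lr=0.05)

# CLIP's image preprocessing statistics, used to normalize the generated image.
mean = torch.tensor([0.4815, 0.4578, 0.4082], device=device).view(1, 3, 1, 1)
std = torch.tensor([0.2686, 0.2613, 0.2758], device=device).view(1, 3, 1, 1)

for step in range(100):
    image = generator(z)
    img_emb = clip.get_image_features(pixel_values=(image - mean) / std)
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    loss = -(img_emb * text_emb).sum()   # maximize cosine similarity with the prompt
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```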

After the presentations, participants were invited to join the discussion, ask questions, and consider possible next steps for such conversations, processes, and collaborations. We had a lively discussion about the philosophy of AI, how a neural network interprets abstract concepts like metaphor and perception, and what affordances these models can offer artists. One question was whether a neural network operates from an emotive place, as humans do; all student presenters responded with how they approached the agency of AI in the work they presented. Another topic touched upon was algorithmic bias and how artists can help create ethical and inclusive algorithms.

With over thirty participants, including academics from multiple disciplines, practitioners, and artists, “Conversations About the Use of AI in Art Practice” opened a space for dialogue on how artificial intelligence interacts with artistic practice, questioning the role of the human and the ever-increasing role of “intelligent” tools in the creative process. The experimenta.l. Automata team is excited to invite interested students and faculty at UT Dallas and beyond to help make the field of AI Art more inclusive.

We thank everyone who took the time to attend the event, and we hope to have more such initiatives and conversations in the future. 

Click below to watch the recording: