Facilitator: Gizem Oktay

Description: Hello! This is the first of three blog posts about a series of workshops I (Gizem Oktay) organized for experimenta.l. to introduce lab members to several software tools that use neural network models to produce images and videos.

For the first workshop, I introduced Ebsynth, a graphical user interface (GUI) tool that lets users stylize an animation by painting over a single frame. Ebsynth takes the painted-over frame as a reference and propagates its style to all the other frames, cutting down the time and effort needed to paint over every frame of a piece of time-based media.

Ebsynth

How does it work?

Participants started by converting their time-based media into a PNG sequence using After Effects. The exported PNG sequence was then moved into a folder named “video” containing all the frames. Next, participants chose keyframes to paint over, which Ebsynth would use as references. The painted-over keyframes were placed in a separate folder named “keyframes”.
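The folder layout described above can be sketched in a few lines of Python. This is just an illustration of the structure, not part of the workshop (participants built it by hand in Finder/Explorer): the “video” and “keyframes” folder names come from the workflow above, while the zero-padded frame naming (`00001.png`, `00002.png`, …) is an assumption that should match however After Effects named the exported sequence.

```python
import os

def make_ebsynth_project(root, n_frames, keyframe_indices):
    """Create the folder layout Ebsynth expects: a "video" folder with
    every frame, and a "keyframes" folder with the painted-over frames.
    Frame filenames here are hypothetical zero-padded names."""
    video = os.path.join(root, "video")
    keys = os.path.join(root, "keyframes")
    os.makedirs(video, exist_ok=True)
    os.makedirs(keys, exist_ok=True)
    # Empty placeholder files stand in for the exported PNG frames.
    for i in range(1, n_frames + 1):
        open(os.path.join(video, f"{i:05d}.png"), "wb").close()
    # Painted-over copies go in "keyframes" under the SAME filename
    # as the source frame, so the pairing is unambiguous.
    for i in keyframe_indices:
        open(os.path.join(keys, f"{i:05d}.png"), "wb").close()
    return video, keys
```

With the layout in place, the whole project folder is what gets dragged into Ebsynth in the next step.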

Once the keyframes were ready, both the “video” and “keyframes” folders were dragged and dropped into Ebsynth. For Ebsynth to recognize which frame had been painted over, the frame number of each keyframe was entered in the Keyframes section of the interface.
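Since Ebsynth matches each painted keyframe to a frame number, a common workshop stumble is a keyframe whose filename doesn’t correspond to any exported frame. A small, hypothetical sanity check (not a tool the workshop used) can catch that before opening Ebsynth:

```python
import os

def check_keyframes(video_dir, key_dir):
    """Return the list of keyframe filenames that have no matching
    frame in the video folder. An empty list means every painted
    keyframe can be paired with a source frame."""
    frames = set(os.listdir(video_dir))
    return sorted(k for k in os.listdir(key_dir) if k not in frames)
```

If the returned list is non-empty, the painted frame was likely renamed or saved at the wrong number and Ebsynth would have nothing to pair it with.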

Results

Gerardo Reyes

Danny Laboda

Source: LEGO Movie

Elham Doust-Haghighi

Source: Shawshank Redemption

Dr. Christine Veras

Source: Singing in the Rain

Martin Namwook Cho

Source: Animation by Martin

Gizem Oktay

Source: Video by Gizem

Dates: TBA
Semester: Fall 2021 – 1st Session (09/21)
2nd Session (09/28)

Participants: Week 1: Martin Cho, Bryce Alexander Sheehan, Danny Elizabeth Laboda, Dr. Christine Veras, Eesha Muddasani, Xochitl Juarez.

Next Post: Using RunwayML to create an interpolation of characters drawn by participants