
STORY DIFFUSION

AI can be used to emulate generational interpretation, shedding light on the complex interplay between human creativity and machine learning algorithms. This body of work embraces the relationship between contextual generational storytelling and its artificial emulation, and calls for continued exploration and refinement of AI models and techniques to better serve the needs and aspirations of artists while navigating ethical complexities with care and foresight.

Part I: The paintings

Narrative Driven Visualization

These paintings establish a visual art direction that unifies several perspectives on this family's decision to flee their home country for a new life. The works' close visual similarities reflect a common experience told through two different perspectives: one a father's, one a daughter's. The development of these paintings also illuminates a third generation of these narratives. Created by the next generation of the family, the works retell the same narratives through a uniquely interpreted lens. These paintings, like their artist, lack the period-accurate context and firsthand experience that previous generations can recount from personal memory. Instead, they are built on a contemporary context, with an understanding of these experiences drawn only from memoirs and narratives passed down through family conversations. While the child generation cannot completely understand the full experiences of the parent generation, each lens brings a new culmination of personal identity and culture, creating a new interpretation of the narrative.

Take a Closer Look

[Image: Screenshot 2024-04-21 105835.png]

Art Directable Training and Interpretation

Using Stable Diffusion's image-generation pipeline on a closed, local network, author-created images and illustrations were used to train the art direction of a proprietary generative AI model. After an initial 12 experimental models, each successive model was used to produce hundreds of illustrations, which in turn trained the next generation. Each generation of AI model represents a generation of interpretation, its visual narratives recalled through text prompts. In total, the project produced 32 AI models and over 10,600 images.
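The generation-to-generation workflow described above can be sketched as a simple loop. This is a minimal illustration, not the project's actual tooling: `train_model` and `generate_images` are hypothetical stand-ins for a local Stable Diffusion fine-tuning pass and a batch of text-prompted renders.

```python
# Sketch of the iterative, generational training workflow.
# `train_model` and `generate_images` are hypothetical placeholders
# for a local fine-tuning run and a batch of text-prompted renders.

def train_model(dataset, generation):
    """Stand-in for fine-tuning a local Stable Diffusion checkpoint."""
    return {"generation": generation, "trained_on": len(dataset)}

def generate_images(model, prompts, images_per_prompt):
    """Stand-in for rendering a batch of images from text prompts."""
    return [f"gen{model['generation']}_{prompt}_{i}"
            for prompt in prompts
            for i in range(images_per_prompt)]

def run_generations(seed_images, prompts, n_generations, images_per_prompt):
    dataset = list(seed_images)  # generation 0: author-created paintings
    models = []
    for gen in range(1, n_generations + 1):
        model = train_model(dataset, gen)  # train on the prior output
        models.append(model)
        # this generation's output becomes the next training set
        dataset = generate_images(model, prompts, images_per_prompt)
    return models, dataset

models, final_images = run_generations(
    seed_images=["painting_a", "painting_b"],
    prompts=["refugee journey", "family portrait"],
    n_generations=3,
    images_per_prompt=100,
)
```

Each pass mirrors one "generation of interpretation": the model only ever sees what the previous generation produced, just as the paintings reinterpret narratives passed down rather than firsthand memory.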

[Image: Screenshot 2024-04-21 110654.png]

Animations / Interpretation in Further Narratives

The AI was ultimately trained to develop and interpret environment HDRIs from text prompts, which are automatically converted to 3D mesh projections. These 3D models can be imported into storyboarding and animation programs so that characters can be created and animated inside them, offering an immersive character-animation experience along with the accessibility a 3D workflow brings to animatic creation.
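A common way to turn an equirectangular HDRI into a 3D mesh projection is to map the panorama onto an inward-facing sphere surrounding the camera. The exact conversion used in this project is not documented here, so the following is an assumed minimal sketch: each (u, v) texture coordinate of the panorama becomes a unit-sphere vertex via the standard spherical mapping.

```python
import math

def hdri_sphere_mesh(lat_steps=8, lon_steps=16):
    """Build sphere vertices and UVs that wrap an equirectangular HDRI
    around the viewer: u spans the horizon (azimuth), v spans pole to pole."""
    vertices, uvs = [], []
    for i in range(lat_steps + 1):
        v = i / lat_steps              # v in [0, 1], pole to pole
        phi = v * math.pi              # polar angle
        for j in range(lon_steps + 1):
            u = j / lon_steps          # u in [0, 1], wraps the horizon
            theta = u * 2.0 * math.pi  # azimuth angle
            vertices.append((
                math.sin(phi) * math.cos(theta),
                math.cos(phi),
                math.sin(phi) * math.sin(theta),
            ))
            uvs.append((u, v))
    return vertices, uvs

verts, uvs = hdri_sphere_mesh()
```

A mesh like this, exported with its UVs, can be imported into storyboarding or animation software as the environment that characters are placed and animated inside.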

The exploration of family experiences as war refugees provided a deeply personal narrative that served as the foundation for artistic exploration. Inspired by these family-shared memories, the work uses art as a medium to preserve and interpret them. Through oil paintings and digital illustrations, the artist captures the complexities of intergenerational trauma and the evolution of narratives within the family's heritage. The narrative-driven visualization process begins with paintings depicting experiences from two generations, culminating in a series of artworks that reflect the multi-generational journey of immigration and adaptation.


Drawing parallels between familial heritage and generational AI interpretation, the generative AI experiment explores how tools like Stable Diffusion can emulate multi-generational narrative development. It situates the study within the broader landscape of generative AI's impact on creative industries, highlighting the importance of understanding the technology's potential, limitations, and implications. By training AI models on datasets derived from original paintings, the study demonstrates how AI-driven tools can augment artistic practice and open new avenues of collaboration and innovation. However, ethical considerations surrounding authorship, ownership, and bias must be carefully addressed to ensure that AI-driven artistic collaborations uphold transparency, accountability, and creative authenticity.
