PhD Defense: Analysis and Synthesis of Repetitive Patterns in Digital Images

Talk
Speaker: Peihan Tu
Time: 04.03.2024, 12:00 to 14:00
Location: IRB-5105
Zoom: https://umd.zoom.us/j/2299144543

Repetitive patterns are ubiquitous in fields such as textile design, digital art, web design, and graphic design, offering significant practical and creative benefits. This dissertation addresses the automated analysis and synthesis of these patterns in digital images. We focus on the replication of geometric structures in vector patterns and the geometric analysis of raster patterns, highlighting the unique challenges and methodologies involved in each.

Creating repetitive vector patterns, characterized by their intricate shapes, is notably challenging and laborious: it demands a significant investment of time as well as a combination of creative insight and meticulous precision. Although computational methods have streamlined some aspects of manual creation, they predominantly cater to simple elements, leaving complex patterns less explored. To overcome this, we introduce a computational approach for synthesizing continuous structures in vector patterns from exemplars. It extends existing sample-based discrete element synthesis methods to consider not only sample positions (geometry) but also their connections (topology); a simplified sketch of this style of search-and-assignment synthesis appears below. Additionally, we present an example-based method to synthesize more general patterns with diverse shapes and structured local interactions, incorporating explicit clustering into the neighborhood similarity measure and iterative sample optimization for more robust sample synthesis and pattern reconstruction.

Raster textures, by contrast, are essential visual elements in both real images and computer-generated imagery, and their pixel-based nature makes them difficult to edit. Traditional texture editing is often a repetitive and tedious task that requires manual adjustment of textons, the small recurring patterns that define a texture. Addressing this, we propose a novel, fully unsupervised method that represents textures with a compositional neural model. Each texton is modeled as a 2D Gaussian function, capturing its shape and detailed appearance (see the rendering sketch below). This discrete composition of Gaussian textons simplifies editing and enables the efficient synthesis of new textures through a generator network. Our method facilitates a broad spectrum of applications, from texture transfer and diversification to animation and direct manipulation, significantly advancing texture analysis, modeling, and editing techniques. This approach not only enhances the capability to create visually appealing images with controllable textures but also opens up new avenues for exploration in digital imagery.
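
To make the discrete element synthesis idea concrete, the following is a minimal, positions-only sketch (in Python) of one search-and-assignment pass, in the spirit of texture-optimization-style point pattern synthesis. It is an illustration under simplifying assumptions, not the dissertation's method: connections (topology), clustering, and element shapes are all omitted, and every name and parameter here (neighborhood, chamfer, optimize_step, radius, step) is hypothetical.

    import numpy as np

    def neighborhood(samples, i, radius):
        """Indices and offset vectors of the samples within `radius` of sample i."""
        off = samples - samples[i]
        r = np.linalg.norm(off, axis=1)
        idx = np.where((r > 0) & (r < radius))[0]
        return idx, off[idx]

    def chamfer(a, b):
        """Symmetric nearest-offset (Chamfer) distance between two offset sets."""
        if len(a) == 0 or len(b) == 0:
            return np.inf
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
        return d.min(axis=1).mean() + d.min(axis=0).mean()

    def optimize_step(output, exemplar, radius=0.2, step=0.3):
        """One search-and-assignment pass over all output samples."""
        new = output.copy()
        for i in range(len(output)):
            idx, n_out = neighborhood(output, i, radius)
            if len(n_out) == 0:
                continue
            # Search: the exemplar neighborhood most similar to this one.
            costs = [chamfer(n_out, neighborhood(exemplar, j, radius)[1])
                     for j in range(len(exemplar))]
            n_ex = neighborhood(exemplar, int(np.argmin(costs)), radius)[1]
            if len(n_ex) == 0:
                continue
            # Assignment: nudge each neighbor toward its closest matched offset.
            d = np.linalg.norm(n_out[:, None, :] - n_ex[None, :, :], axis=2)
            target = output[i] + n_ex[d.argmin(axis=1)]
            new[idx] += step * (target - output[idx])
        return new

    rng = np.random.default_rng(0)
    exemplar = rng.random((40, 2))   # stand-in for an exemplar point pattern
    output = rng.random((40, 2))     # random initialization of the output
    for _ in range(10):
        output = optimize_step(output, exemplar)

Each pass alternates a search step (finding the most similar exemplar neighborhood) with an assignment step (moving neighbors toward the matched offsets); extending the neighborhood descriptor with connection information is, roughly, where topology enters.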
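
The Gaussian-texton representation can likewise be illustrated with a short rendering sketch: an image formed by summing 2D Gaussian splats, each with a center, a covariance encoding shape and orientation, and a flat RGB color standing in for the learned appearance features. This is a simplified assumption (plain additive composition, no generator network); render_textons and its parameters are invented for illustration.

    import numpy as np

    def render_textons(means, covs, colors, height, width):
        """Render an image as an additive composition of 2D Gaussian textons.

        means:  (K, 2) texton centers (x, y) in pixels
        covs:   (K, 2, 2) covariance matrices encoding scale and orientation
        colors: (K, 3) RGB appearance, a stand-in for learned appearance features
        """
        ys, xs = np.mgrid[0:height, 0:width]
        pix = np.stack([xs, ys], axis=-1).astype(float)        # (H, W, 2)
        image = np.zeros((height, width, 3))
        for mu, cov, color in zip(means, covs, colors):
            diff = pix - mu
            quad = np.einsum('hwi,ij,hwj->hw', diff, np.linalg.inv(cov), diff)
            image += np.exp(-0.5 * quad)[..., None] * color    # Gaussian falloff
        return np.clip(image, 0.0, 1.0)

    # Two hypothetical textons: a round blob and an elongated, tilted streak.
    means = np.array([[24.0, 24.0], [48.0, 40.0]])
    covs = np.array([[[16.0, 0.0], [0.0, 16.0]],
                     [[40.0, 12.0], [12.0, 8.0]]])
    colors = np.array([[0.9, 0.4, 0.1], [0.2, 0.5, 0.9]])
    img = render_textons(means, covs, colors, height=64, width=64)

Editing then reduces to moving, scaling, rotating, or recoloring individual Gaussians and re-rendering, which is the kind of direct texton manipulation described above.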