The installation works with data consisting of traditional ornaments. Using state-of-the-art machine learning algorithms, a neural network computing system processed tens of thousands of archived ornamental images from museums and libraries, and now creates unlimited new combinations based on their similarities. “Circular Repetition” generates imaginary patterns that have never existed before.
Culture is a source of inspiration and education, which is why humanity has kept and preserved it. Society conserves culture by constantly repeating traditions, values, rules and rituals. However, society also understands the necessity of renewal. Cultural development leads to optimizing processes through new technologies. In the computer epoch, data, rather than traditional narrative, becomes the crucial form of cultural communication. Data is a complex of diverse information that is usually disorganized.
From our perspective, repetition is precious data that we hold. It is therefore no longer only an individual idea of something we perceive; it is also the traces we leave behind. Perhaps, with machine intelligence, we can redefine the concept of repetition.
The installation’s visual component imitates traditional Azerbaijani patterns. As a result, viewers see how AI (Artificial Intelligence) builds new alternatives. These alternatives are synthetic in nature and have nothing to do with the history of authentic ornament patterns. AI imitates and fakes a traditional learning process usually handed down from generation to generation. By accessing the data of authentic ornament images, AI becomes an independent master that can invent new ideas to update culture.
Beyond symbols, AI generates new concepts and meanings. It blurs the boundary between the real and the fake. These actions further demonstrate cultural development, as non-human intelligence replaces traditional craft tools.
Recent years have marked an entirely new approach to utility computing, generative methods, digital production, and new media in general. Enter Artificial Neural Networks: computing systems developed and constructed on the principles of natural biological neural systems. Instead of executing complex step-by-step algorithms, these systems process vast amounts of data with millions of simple operations, optimizing their internal parameters according to that data. This process is called “training” or “learning”, and it closely resembles human study, with its gradual acquisition of knowledge (hence another name for this technique: “Deep Learning”).
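To make the idea of “optimizing internal parameters” concrete, here is a minimal sketch of such a training loop in PyTorch. It is purely illustrative and not the installation’s actual code: a toy network repeatedly nudges its parameters until its output fits toy data.

```python
# Minimal sketch of "training" as parameter optimization (illustrative only).
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.SGD(net.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

x = torch.linspace(-1, 1, 64).unsqueeze(1)   # toy input data
y = x ** 2                                   # toy target the network should learn

for step in range(1000):                     # many simple operations, repeated
    opt.zero_grad()
    loss = loss_fn(net(x), y)                # how wrong is the network right now?
    loss.backward()                          # compute how to nudge each parameter
    opt.step()                               # nudge parameters toward the data
```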
The fundamental power of Neural Networks as a computing tool lies in their ability to find correlations and patterns in very different types of data. So it is quite reasonable that such a method can also generate real, visible patterns – from basic modern geometry to complex medieval filigree.
The most popular, promising and rapidly evolving Deep Learning architecture today is the GAN (Generative Adversarial Network). It consists of two cross-linked neural networks: one tries hard to create convincing fake imagery, while the other tries to guess whether an image is fake or real. Each network shares its results with its opponent, so that both learn from the process and perform better and better.
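As an illustration of this adversarial loop, here is a minimal sketch of one GAN training step in PyTorch, using toy image sizes and simple fully-connected networks; it is not the installation’s actual code.

```python
# Minimal sketch of one GAN training step (illustrative only).
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 32 * 32            # toy sizes for illustration
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())          # generator
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))                            # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(16, img_dim) * 2 - 1       # stand-in for a batch of ornament images
z = torch.randn(16, latent_dim)

# Discriminator step: learn to tell real images from generated ones
fake = G(z).detach()
d_loss = bce(D(real), torch.ones(16, 1)) + bce(D(fake), torch.zeros(16, 1))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: learn to fool the discriminator
g_loss = bce(D(G(z)), torch.ones(16, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```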
For our purposes we took one of the best available GAN architectures – “ProGAN (Progressively Growing GAN)” by Nvidia. It learns the imagery in the dataset progressively, going from the lowest downscaled resolution (4×4) up to full-scale pictures, ensuring proper attention to detail at every level. We fed it a dataset of tens of thousands of authentic patterns (again, ranging from simple ones to quite elaborate combinations). After about a week of training, the system was able to generate new patterns – reminiscent of the original imagery, yet different and ever-changing.
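The progressive schedule itself can be pictured with a small sketch (an assumption-laden illustration using PyTorch image tensors; the actual training used Nvidia’s released ProGAN code, not this snippet): each stage works on the dataset downscaled to the current resolution, doubling until full scale is reached.

```python
# Illustrative sketch of the progressive-resolution idea (not ProGAN itself).
import torch
import torch.nn.functional as F

full = torch.rand(1, 3, 1024, 1024)          # stand-in for one full-scale ornament image

# Training starts at 4x4 and doubles the working resolution stage by stage,
# so the networks first capture coarse layout, then ever finer detail.
for res in [4, 8, 16, 32, 64, 128, 256, 512, 1024]:
    downscaled = F.interpolate(full, size=(res, res), mode="area")
    print(res, downscaled.shape)             # each stage trains on this resolution
```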
The system’s output – real-time generated visuals – was mapped onto a solid LED ring with a complementary information panel illustrating the whole process happening inside the server. Image creation is also integrated with sound (generated in real time as well) to emphasize the living nature of the visuals.
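As a rough idea of how such ever-changing realtime output can be produced (a hypothetical sketch only: `generator` stands for a trained GAN generator and `send_to_led_ring` for whatever actually drives the installation’s hardware), the visuals amount to a continuous walk through the generator’s latent space.

```python
# Hypothetical sketch of a realtime latent-space walk feeding a display.
import time
import torch

def latent_walk(generator, send_to_led_ring, latent_dim=512, fps=30, steps=90):
    z_from = torch.randn(1, latent_dim)
    while True:
        z_to = torch.randn(1, latent_dim)        # pick the next target point
        for i in range(steps):
            t = i / steps
            z = (1 - t) * z_from + t * z_to      # interpolate between latent points
            frame = generator(z)                 # ever-changing pattern image
            send_to_led_ring(frame)              # map the image onto the ring
            time.sleep(1.0 / fps)
        z_from = z_to                            # continue from where we ended
```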