Refik Anadol: art in a latent space

Pau Waelder

Refik Anadol. Unsupervised: Machine Hallucinations MoMA (2022)
Museum of Modern Art, New York

Refik Anadol (b. 1985, Istanbul, Türkiye) is a media artist and lecturer, whose meteoric career has taken him from creating video mappings on building façades in several European cities, to being one of the first artists in residence at Google’s Artists and Machine Intelligence Program, the founder and director of Refik Anadol Studio RAS LAB in Los Angeles, a lecturer for UCLA’s Department of Design Media Arts, and a successful artist with global recognition in the contemporary art world. In just 15 years, Anadol has amassed numerous awards and presented his site-specific audio/visual performances at iconic museums and events such as the 17th International Architecture Exhibition – La Biennale di Venezia, Walt Disney Concert Hall, Centre Pompidou, Daejeon Museum of Art, Art Basel, Ars Electronica Festival, and the Istanbul Design Biennial, among many others.

He works with a large team of designers, architects, data scientists, and researchers from 10 different countries and has partnered with teams at Microsoft, Google, Nvidia, Intel, IBM, Panasonic, JPL/NASA, Siemens, Epson, MIT, Harvard, UCLA, Stanford University, and UCSF, to apply the most innovative technologies to his body of work. He is represented by bitforms gallery in New York. 

bitforms is participating in the Art SG art fair in Singapore from 12 to 15 January 2023, presenting a selection of generative artworks by Refik Anadol. Niio supports the exhibition as technical partner, in collaboration with SAMSUNG, and is proud to present the artcast Refik Anadol: Pacific Ocean, which features excerpts from three pieces by the Turkish artist. The following article offers a brief introduction to the main aspects of Refik’s work.

Refik Anadol. Unsupervised — Machine Hallucinations — MoMA Dreams — F
Image sold on Feral File as NFT. 100 editions, 1 AP

Building a latent space

A trailblazing artist in the field of art and artificial intelligence, Refik Anadol uses large amounts of data and machine learning techniques to create his generative artworks and site-specific installations. His creative process often implies the creation of a data set from an archive of images, sounds, and documents or from measurements taken by sensors, radars, and other devices. The data set feeds a series of machine learning processes that generate an endless succession of audio-visual compositions, which can fill a large screen, a whole room, or the façade of a building. 

At the heart of the machine learning models that transform the original data into something else lies what is called a “latent space,” in which clusters of items are formed from similarities between them, which give rise to a set of variables. The latent space is therefore a space of possibilities, somewhat unpredictable, that contributes to shaping the final outcome. In Refik’s work, it is not only part of the machine learning model but also a concept that helps understand his generative pieces and installations as spaces in which creation is constantly exploring its latent qualities. Spaces in which the artwork is never finished. 

His recent installation at the Museum of Modern Art in New York, Unsupervised: Machine Hallucinations MoMA (2022), clearly exemplifies the conception of the artwork as a latent space. Using the public metadata of The Museum of Modern Art’s collection, which comprises more than 130,000 pieces including paintings, drawings, photographs, and video games, the artist and his studio created a series of artworks that result from the interpretation of this data by means of a machine learning algorithm. The initial series was sold as NFTs in the exhibition Unsupervised on the online platform Feral File, the project being further expanded into the generative artwork installed at MoMA’s lobby. Casey Reas, artist and co-founder of Feral File, aptly described the artwork in terms of its latency: “What I find really interesting about Refik’s project with MoMA’s dataset, with your collection, is that it speculates about possible images that could have been made, but that were never made before” [1]. The artwork can thus be seen as a space of possibilities, but also as a simulated environment that becomes particularly meaningful in the context of the building that houses it.

Casey Reas: “What I find really interesting about Refik’s project with MoMA’s dataset is that it speculates about possible images that could have been made, but that were never made before”

Refik Anadol Studio. WDCH Dreams, 2018.

The room as Merzbau

Architecture, and more generally a real or simulated three-dimensional space as a container, are key elements of Refik’s work. Artworks such as WDCH Dreams (2018) or Seoul Haemong (2019) use the exterior surfaces of the Walt Disney Concert Hall in Los Angeles and the Dongdaemun Design Plaza in Seoul as canvases, while Infinity Room (2015) and Pladis: Data Universe (2018) are conceived by the artist as “Temporary Immersive Environments.” The artist stresses this connection frequently in his interviews: “I’m interested in exploring the architectural domain as deeply as I can,” he has stated recently. “All my art works tend to have a physical connection to public space” [2]. However, the architectural space is not conceived in terms of static shapes and volumes, but as something fluid and malleable, a work in progress. 

Kurt Schwitters’ celebrated Merzbau installations from the 1920s and 1930s come to mind as an illustrative example of Refik’s conception of space. Schwitters began to alter the space of his studio in Hannover by putting together small artworks, found objects, and debris into structures that he would glue and fix with plaster, building columns and shapes that protruded from the walls. The sculpture was never completed, the artist always kept adding elements and reshaping the space [3]. In a similar way, the space occupied by Refik’s artworks is permanently reshaped through a process that stems from an accumulation of found materials, data that loses its original shape and merges into something new. In the Temporary Immersive Environment series, he specifically seeks to immerse the viewer in a “non-physical world” that questions their perception of space and their own presence [4]. The room is expanded and multiplied through optical effects, with the aim of creating a viewing experience that goes beyond staring at a flat projection.

Refik Anadol: “I’m interested in exploring the architectural domain as deeply as I can. All my art works tend to have a physical connection to public space.”

This conception of space as integrated into a Gesamtkunstwerk, the “total work of art” that has been the aspiration of opera composers, architects and filmmakers, is not, however, the only connection with architecture in Refik’s work. Interestingly, while Schwitters sought to merge all of his artistic practice under one term (Merz), erasing distinctions between painting, sculpture, and architecture, Anadol describes some of his generative art works as “data paintings” and “data sculptures.” These references to classical formats speak of a different type of space, confined within the limits of a screen or a wall, which nevertheless intervenes in the surrounding space by means of a trompe-l’oeil effect that creates the impression of three-dimensional shapes pulsating beneath and beyond a solid, thick frame. Artworks such as Virtual depictions: San Francisco (2015), displayed on an L-shaped media wall inside the main lobby of the 350 Mission building in San Francisco, seek to create an imaginary space that stands out spectacularly, but at the same time embeds itself into the surrounding architecture. The connection between the artwork and its location, though, is not only expressed in terms of how the screen is placed on the wall, but also in the data that gives meaning to the fluid elements that inhabit the virtual space.

Refik Anadol Studio. Future of the City, 2020.

Data is not just a bunch of numbers

Coming back to the concept of latent space within machine learning models, it is important to remember that Refik Anadol’s artworks do not only have an aesthetic dimension, as colorful shapes in fluid transitions or enormous mosaics of distinct elements, but also a conceptual dimension, expressed by the data that feeds the whole process leading to the site-specific installations and performances. Speaking about his project Quantum Memories (2020), the artist states the importance of this data and the meaning it conveys: 

“For me, data is not just a bunch of numbers. For me, data is actually a memory. From that perspective, I’m always looking for what kind of collective memory that we are holding as humanity, and how can we use these memories and turn them into a pigment or a sculpture that represents who we are as humanity.” [5]

Conceiving data as memory resonates with his ongoing work with all kinds of archives, from the 1,700,000 documents found in the SALT Research collections to the 587,763 image files, 1,880 video files, 1,483 metadata files, and 17,773 audio files in the Los Angeles Philharmonic Orchestra’s digital archives. Every bit of information in these files has its own history and meaning, and has a certain association with another file, enabling the clusters and variables that will emerge in the latent space. The artworks may appear as abstract compositions or massive collages, but they are actually visual representations of underlying stories and invisible structures. As the artist puts it, they aim to “make visible the invisible world of data that surrounds us” [6]. This statement may lead to considering Refik’s work as a form of data visualization, but it goes well beyond this task, into what media theorist Lev Manovich has described as “represent[ing] the personal subjective experience of a person living in a data society […] including its fundamental new dimension of being «immersed in data»” [7].

Lev Manovich: “The real challenge of data art is how to represent the personal subjective experience of a person living in a data society.”

Some of Refik’s installations directly address this condition of being immersed in data by means of projections that surround the viewer with visualizations of the data collected from archives, or in real time from sensors and other sources, as in Latent Being (2019) or Future of the City (2020). Others present data that relates to environmental systems that we usually ignore but that have a profound impact on our planet, and therefore in our lives. This is the case of Pacific Ocean (2022), the series presented by bitforms at Art SG in Singapore and on Niio as an artcast featuring three video excerpts.  

bitforms gallery booth at ArtSG Singapore presenting a series of artworks by Refik Anadol. Photo courtesy of bitforms.

Collecting data from High Frequency Radars (HFR) located in the Pacific coast of the United States, the artist has created a series of visualizations of ocean currents that seems abstract and realistic at the same time: the ebbs and flows of granular elements in shades ranging from dark blue to emerald green clearly evoke the surface of a raging sea, but they are also somehow unreal, impossibly merging numerous currents from different directions in beautifully chaotic, and even violent, clashes. HFRs are used to measure ocean currents and understand their impact and response to climate processes. The data collected from networks of radars in coastal zones around the world is crucial to protect the marine ecosystem and predict changes that will affect life on our planet. Seen from this perspective, the artworks acquire a somewhat unsettling tone and inspire an awareness of the ecosystems that we so often ignore, putting into question our anthropocentric view of the world.

Refik Anadol. Pacific Ocean A, 2022

Machine dreaming

Anthropocentrism and our inability to understand the agency of non-human entities and systems around us are underlying subjects in most approaches to art created with artificial intelligence. The perennial question of whether it is the artist of the machine that creates the artwork is still debated after 60 years of algorithmic art, now reinforced by the spectacular achievements of machine learning models in producing realistic images and coherent texts. In Refik Anadol’s work, the use of artificially intelligent systems leads to two interesting aspects of artistic creation: the notions of control and authorship. 

Terms like “machine learning,” “supervised learning,” “reinforcement learning,” and “training model” speak of the intention to use artificial intelligence as a tool to obtain predictable results, in which the machine is meant to produce a specific output. This perception of the machine as a mere instrument, fully controlled by a human, contradicts the way artists have used generative algorithms and AI systems to create their artworks. Nowadays, artists working with artificial intelligence understand machine learning as a way of exploring post-anthropocentric creativity, therefore using AI to reach beyond the confines of human imagination and let the machine bring in the unexpected, the incongruous, the unsettling, and even the impossible. In Refik’s work this approach is made clear in the use of machine learning models to create “dreams” and “hallucinations.” He has described AI as “a thinking brush, a brush that can think, that can remember, and that can dream.” This statement implies an interesting balance between letting the system loose and keeping it under control. In Archive Dreaming (2017), the installation is allowed to “dream” when a viewer is not interacting with it, so that this state is interrupted when a human takes control. In other installations, such as Machine Hallucination (2019), the system can create its own associations and reimaginings of the contents of a very precise dataset, so that its “unconscious” is nevertheless under a certain level of control. 

Refik Anadol: “the most important thing for me is creating a thinking brush, a brush that can think, that can remember, and that can dream.”

The question of authorship stems from the perceived control over the final output: if the artist had no control over it, is he the author of the artwork? Interestingly, while the Dadaists and Surrealists already integrated randomness into their artistic practices and many other artists have incorporated unpredictable processes or external agents into their work, authorship tends to be more fiercely contested when a computer is involved. Refik Anadol’s authorship is nevertheless palpable in the aesthetic and conceptual foundations of his work, which remain consistent throughout his career despite considering himself part of a large team of experts and working with increasingly complex AI technologies. He conceives the process as a collaboration, both when dealing with software and hardware and when teaming up with designers, coders, and researchers to develop a project. There are, however, crucial moments when decisions are made, and these are the moments when the artist states his authorship:

“There’s a collaboration between machine and human. With the same data, we can generate infinite versions of the same sculpture, but choosing this moment, and creating this moment in time and space, is the moment of creation.” [8]

Out of infinite possibilities, making a choice that determines the next step in the process and shapes the final output is a prerogative of the artist, who is finally the author of the artwork that emerges from a latent space.

Notes

[1] Refik Anadol, Casey Reas, Michelle Kuo, and Paola Antonelli. Modern Dream: How Refik Anadol Is Using Machine Learning and NFTs to Interpret MoMA’s Collection. MoMA | Magazine, November 15, 2021.

[2] Dorian Batycka. Digital Art Star Refik Anadol’s First Supporters Were in the Tech World. All of a Sudden, His Work Has Become White-Hot at Auction, Too. Artnet, May 18, 2022.

[3] Gwendolen Webster. Kurt Schwitters’ Merzbau. A dissertation presented to the Open University, Milton Keynes, 2007.

[4] Refik Anadol. Liminal Room. Refik Anadol Studio, November 19, 2015.

[5] Claudia Pelosi. Machine intelligence as a narrative tool in experiential art – Interview with Refik Anadol. Designwanted, December 19, 2020.

[6] Refik Anadol. Virtual Depictions: San Francisco. Refik Anadol Studio, October 1, 2015.

[7] Lev Manovich. Data Visualization as New Abstraction and Anti-Sublime. Manovich.net, 2002.

[8] Refik Anadol, Casey Reas, Michelle Kuo, and Paola Antonelli. Modern Dream. op.cit.

Driessens & Verstappen: driven by process, shaped by time

Pau Waelder

Erwin Driessens and Maria Verstappen have worked together since 1990 in the creation of process-based artworks using software, robotics, film, photography, sculpture, 3D scanning, and many other analog and digital techniques, as well as enabling, manipulating, simulating or documenting physical, chemical and biological processes, including plant growth. Following the presentation of their artcast The Kennemer Dunes, curated by DAM Projects for Niio, we have discussed the main concepts that drive their artistic research and the processes behind some of their most influential artworks.

Kennemerduinen 2010, scene E, 2011

Process is a key concept in your work, that is carried out automatically by programmed machines, spontaneously occurring in a natural environment, or happening through physical and chemical reactions. Why is creating, enabling or documenting processes so fundamental to your work?

Not all generative processes are equally interesting to us. We are mainly focusing on decentralized processes, the so called bottom-up processes. In these processes the patterns are not defined by a central authority but by local interactions between a vast amount of  decentralized components. Examples for this are bird flocks, ant colonies, market economies, ecosystems or immune systems.When we study the landscape, what we see are the interactions of the elements in the ecosystem that react, adapt, and evolve over time. And that is also exactly what we try to model when we work with computers: the interactions of many small elements that together create a coherent global structure. We try to express that in the generative systems that we build. For us, this way of working implies another role of the artist. In the tradition of art, artists tend to work top-down, taking a piece of material and then shaping it to match an idea they had on their mind. We’d rather take a step back and see how the material can organize itself, albeit creating certain preconditions. As artists, we create a process that can make something by itself or react on the stages of development, so that it is the system that shapes the product instead of us determining how the material has to be formed. So there are different angles on why we are so interested in process, self organization, and evolution. 

“As artists, we create a process that can make something by itself, so that it is the system that shapes the product instead of us determining how the material has to be formed.”

Time is also an important aspect in these processes, of course. A landscape has many timescales: there are things that take ages to form, while others belong to a shorter time scale, like the seasons and the flowering. So there is this relationship between the different timescales that make it hard to understand exactly what has happened and why it is exactly like that. But when we look at the landscape, we feel the natural intertwining of all those small and big events that have led to the big picture that we see in front of us. And I think that’s why landscape, as a genre, has such a long history in art, because these inimitable processes, which take place differently in every place on earth, constantly evoke new aesthetic experiences in us.

Kennemerduinen 2010, scene H, 2011

In relation to the factor of time in your work, in The Kennemer Dunes the process is sped up, but still shown at a slow pace. What do you find most interesting about this slowness?

In the Landscape Films (2001-2010), we create an acceleration by the compression of time. We decided to do this because we experience the landscape at a given moment in time and we cannot predict or remember exactly how it looks in another season. We chose to show the series of still images in the form of a slow, fluent movie of around 9 minutes to enhance our perception of the slow, but powerful seasonal transformations. What we did here, then, is to take a picture from the same place on the same time of the day during different days over the course of a year. This gave us the opportunity to notice small things one would usually not pay attention to, the subtle changes in the landscape that happen at a pace that is the pace of nature and not humans. 

What we created is related to time-lapse animation techniques, but we decided not to simply put all images one after another, because that would generate a very hectic activity, with clouds passing by quickly and plants nervously growing towards the sunlight. In our view this would not support the landscape experience, so instead we chose very few images, around 52, and added a 10-second transition between them. The  transition between each photo is not a proper representation of what has happened  there and then, because it is just interweaving the pixels of one picture to the other. So it is not accurate as a document, but as an experience it is more accurate, because it keeps the quietness of the experience of contemplating the landscape.

Morphoteque #15 (2011). 27 elements, 1:1 copies of peppers. Plaster, acrylic paint.

A third outstanding aspect of your work is that of categorization and collection, as is made evident in the Morphoteque series or in Herbarium Vivum. What can you tell me about these artworks?

In these works, where we deal with static forms, particularly in the Morphotèque series, we always have a collection of objects that are expressions from a certain process and then we want to show the variety of the different outcomes. For instance, the Vegetables Collections (1994-2011) consist of rejected vegetables that have been collected by us from groceries and markets, and then cast as a sculpture, in order to preserve them, as they will obviously decay. We could have taken a photograph, but since the work is about morphology, we needed to keep the three-dimensional form rather than just an image. This work comments on the fact that, in our industrial world, we want our food to be produced in perfect and identical shapes. This is convenient for the machines that harvest and process them, but it is also the result of an aesthetic decision. But of course the plant growing the vegetable does not follow these principles, so it can produce asymmetrical or “abnormal” vegetables, which taste  the same as the “perfect”-looking ones, but nevertheless are put apart and used for cattle fodder or just thrown away. 

By collecting and preserving these irregular specimens, we show the wide variety of possible growths within a particular plant species. And that they are visually more rich-than the symmetrical and straight forms that we normally get to see in the supermarket. This type of work also gives us an opportunity to talk about processes that you cannot carry out in any museum space or in an art space. You cannot show the growth of a pepper, but each selected shape refers to an individual growth process, while the collection as a whole also shows the typical similarities.

Solid Spaces, 2013. 3D print in acrylic, approx. W.35 x D.25 x H.15 cm.

What drives you to create physical objects out of algorithmic processes (as in Accretor) and real space mappings (as in Solid Spaces)? What does the physicality of sculpture bring to your work?

In Solid Spaces (2013), particularly, there was an interesting connection between the process, the space, and the outcome. We had the 3D scanner working inside the church, we displayed two  sculptures that were made from previous scans of the interior of the church, and there was of course the architectural space of the church itself. People could see all of this at once and relate the objects with the space and the process of production. One thing we like about 3D printed objects is that we can create them by letting the machine look at something in the real world, an existing church for instance, but it can also be a completely virtual object, existing in a digital space. In the latter, the object that has been generated using generative software can be so complex and detailed that it might be difficult for the 3D printer to produce it. 

Sandbox, 2009. 245 x 122 x 176 cm. Wood, lacquer, metal, fans, sand, electronics.

The Kennemer Dunes can be connected with your diorama artworks of that time, Sandbox and Hot Pool, which also show a slowly evolving landscape, although through different means. Which connections would you make between these different types of landscapes?

All these works relate to our fascination with decentralized processes. What we did in Sandbox (2009) and Hot Pool (2010) is that we reduced all the elements that are in the landscape to three things: the box itself, which hosts the diorama, the wind or heat, and the particles of sand or wax. In Sandbox we create artificial winds using 55 individual fans placed on the roof of the box, with a software program that controls them. However, the result is not a pre-planned choreography, but there is an unpredictable process involved that turns on and off the fans. Of course, the wind shapes the dunes, but in turn the dunes change the direction of the wind.here is a complex interaction between the sand and the wind that is less deterministic than one might imagine. The geometry of the box causes even more complex turbulences, so in making these seemingly simple miniature landscapes, we realized that they are not so easy to understand and predict. If you change one little thing, it has an influence on everything, even in this very small secluded world. This is also something that we discovered working with software: when you change one of the many parameters a little bit, it can have a really dramatic effect on the whole. And that’s exactly something that we would like to communicate with our work: when you change a little thing in a complex system, when you take out one species, for example, one plant, or you change the temperature just one degree, everything changes and often in an unpredictable way. 

“We, as human beings, have to be more in balance with the ecosystem that we are in, and we should be humble when we interfere in systems that have evolved over many years”

Most things in the world are part of a complex system. So we, as human beings, have to be more in balance with the ecosystem that we are intertwined in. And we should be humble when we want to interfere in existing systems that are in balance, or have evolved over many, many, many, many years. We think we understand the system and that we can control what will happen when we change it. But actually, we always create a reduced model of the system and we let out some small things that we think are not important. And then it turns out that it’s this very small thing that you did overlook that is very influential in the end. 


E-volver, 2006. 4 breeding units with displays, 5 prints on canvas 600 x 300 cm. Permanent installation, interactive software. Research Labs, Medical Center Leiden University. Commissioned by LUMC Leiden and SKOR Amsterdam.

Works like E-volver and Breed deal with artificial evolution programs. How would you compare the processes involved in these computer simulations with your work with natural processes, either observed (Landscape Films, Pareidolia) or manipulated (Tschumi Tulips, Herbarium Vivum)?

We are interested in evolutionary processes as a kind of bottom up, decentralized process. Evolution is difficult to observe in the real world because adaptation to the environment and the passing of information to the next generation is rather indirect and it occurs  in small steps. But if you manage to model this slow and gradual process in the computer, it suddenly becomes observable, largely due to the acceleration of time (like in the landscape films). So in recent years we have set up a number of projects in which we have used evolution as a step-by-step development of an artwork, but also as a way of not completely controlling the results (due to the complex feedback loops involved).In Breed (1995-2007), for instance, the  process of mutation and selection is completely automatized, there is no human intervention. The artificial evolution takes place completely in itself, because the fitness score is determined by objective and measurable properties of the shape: the form that is generated inside this virtual environment should be structurally correct and be able to be materialized as a real object.  In E-volver (2006), there is human intervention involved,  since the mutations and variations of the animations are influenced by the subjective preferences of the people that interact with the work E-volver was made for the Research Labs of the LUMC in Leiden, where scientists and students in human genetics can grow abstract, colorful animations on four breeding units via a touch screen. It’s there now for I think 16 years, and it’s still working. It is always creating something new, and people can see that they have an influence on the outcome of the program, but it is more of a reactive intervention than a  creative one. E-volver involves an unusual collaboration between man and machine, providing a breeding machine on the one hand and a human “gardener” on the other. The combination of human and machine properties leads to results that neither could have created alone.

The outcomes of these artificial evolution programs can be connected with the Vegetable Collections in the sense that they also show how the industry speeds up evolution towards the genetic code that produces a set of desired outcomes, such as round potatoes and straight carrots, while what we want is to show the diversity in these morphological processes. We are equally interested in showing both the results of this virtual growth process in terms of diversity and detail, and the industrial production process that is automated from design to execution. Our approach shows that technological manufacturing processes do not necessarily have to lead to standardization, control, simplism and homogeneity, but to the contrary. When we started these projects in the 1990s, people were not used to computers as an artistic medium, and we had to explain that the artworks were generated in the digital realm, with digital processes, but now people understand that this is something that is created artificially.


Pareidolia, 2019. Robotics, microscope, camera, perspex, wood, metal, sea sand, screen 50 inch, black coated metal housing. Commissioned by SEA Science Encounters Art.

In your recent works, Pareidolia and Spotter, the task of observing nature is carried out by a machine through cameras, face detection software and machine learning models. It seems that this leads to a fully automated and autopoietic system, is that what you are looking for? Which possibilities do you see in machine learning for your future artistic projects?

We started working with neural networks some 10 or 15 years ago, but back then the computer processing speed was so slow that you could only do something very simple, and then it would take days before you could see the output. So it was very limited, but later on, when it became more achievable, we dived into it. However, we are reluctant to further elaborate on it, because artificial neural networks tend to take on an aesthetic that comes from the system itself and therefore all the artworks generated by these techniques look more or less similar. And it’s also very hard to understand how it works, beyond the fact that you can influence the training of the machine learning program by selecting input images and also some other training parameters. But what it has brought us so far is not very satisfying. Certainly now, with programs such as DALL-E or Midjourney, there are interesting possibilities to explore. These are very complex systems based on enormous amounts of data, and it can only be run by big companies and universities. Everyone can actually rent the software as an online service. As artists we are interested in building the systems we work with, not just using them to obtain specific results. So for us there is little to gain with these text-to-image generation systems. 

“We do not want to work with a big black box and wait for something to come out of it, without understanding anything about it. We want to build the system we are working with.”

The relation between process and result must also take place on the level of creating the system. We do not want to work with a big black box and wait for something to come out of it, without understanding anything about it. Although the systems that we build also are hard to fathom, in the end, we do have a very satisfying understanding. It’s a deeper understanding of what you cannot control. For instance, in Pareidolia (2021) we created a robot that uses machine vision and face detection to identify human faces in the texture of grains of sand. We built the face recognition program ourselves so that it would work on sand particles rather than the usual application of such software. Although it is hard to understand how the artificial brain learns to distinguish a face from something that is not a face, it was very satisfying to build the software based on our own database with tens of thousands of images. And then to see it applied to sand, whose morphology is really rich but too small for us humans to perceive. If you think that every sand particle in the world has a unique shape, then you can imagine a gigantic amount of sculptures that are right there under our feet. Applying machine learning to our own face detection software has so far been more interesting and satisfying than the potential of generative neural networks (GANs), yet another type of machine learning. But you never know, sometimes it can take quite some time before you are able to transform and internalize the possibilities opened by a new technology and use it in a personal and original way. 

Fabio Catapano: the beauty of simplicity

Pau Waelder

Fabio Catapano is an Italian digital artist and designer who works with code, CGI, and motion. Encouraged by the possibilities that the NFT market has opened to digital artists, he is developing a growing body of work inspired by Japanese aesthetics and creating generative art that moves away from strict geometry and explores the poetic side of creative coding. On the occasion of his solo artcast A Theory of Color, we had a conversation about his creative process and his views on the future of digital art.

Fabio Catapano. Colorem 221201, 2022

What took you to create your artworks using generative algorithms and how would you describe your creative process?

It was the result of a series of choices. When I was younger, I worked for a long time as a VJ making visuals for clubs and musicians. In that process, you need to create a lot of video content, and I used a software called Quartz Composer, which is pretty much one of the first node-based generative system software programs. Besides my work as a VJ, I have always been passionate about programming languages and I learned some Visual Basic as a hobby. So I had both the interest and the motivation to use this software and explore the creative possibilities of generative algorithms. Since what I did is write the code and then the system would generate the outcome, I found it fascinating to ask myself who is the creator, me or the machine? I feel that we are co-creators, and the software is not just a tool, it is something else.

“I take cues from the way software developers think and collaborate, how they create iterations and updates of the same program.”

The initial idea for an artwork can originate in a shape, the feeling of motion, or a texture, colors, or the combination of two or more elements together. The process in itself is very, very experimental, a form of research in which every outcome is a good outcome. How the project develops is very spontaneous: for instance, I started two years ago with the series Colorem and I wasn’t expecting to create so many pieces. But I ended up creating day after day a different iteration of the same system in a way that felt as a journal of the whole process. I take cues from the way software developers think and collaborate, how they create iterations and updates of the same program. This is why the artwork titles include a reference number that indicates the date of creation and are therefore similar to the versions in a computer program. 

Working in iterations. Diagram by Fabio Catapano.

Every day there is a different outcome and a different exploration, that may be driven by a series of colors, or shapes, or something that I did before. Sometimes I want something that is a bit more grainy, or a bit more clean. But none of those, in my opinion, are the correct answers. They are just moments in time, part of an exploration. That’s pretty much how I started to work with generative art. 

Ideas lead to other ideas. Diagram by Fabio Catapano.

Color plays an important role in your latest series of works. This is an element that is crucial both to designers and visual artists. How do you work with color in the different facets of your professional work? What led you to make it a central part of your artistic research?

It’s funny, because many years ago –I was 17 back then– when I started to create digital art with Photoshop and other programs, it was very colorful. After that, I discovered generative art, and I shifted to black and white. I did so because I was more focused on learning the system and how to create genuine art. So I was more interested in how to create shapes and decided to remove the colors from the equation, and everything became black and white. But then I realized that there was nothing really creative about it. Many other generative artists at that time were creating very geometrical, black and white art that, to me, looks only like a lazy version of a work by Bridget Riley. So I was learning but it felt like I was bringing nothing new to the conversation. 

That’s when I started to shift to colors. I also did so because I wanted to do the opposite of what you expect from computer art, very geometric and strict, with shapes but not colors. I wanted to show that a computer can dream. So I created these shapes that are fluid and can move from one color to another. Also at that time I became interested in the Japanese concept of wabi-sabi, which deals with appreciating the simplicity, imperfection, and mutability of things. I took inspiration from the book WA: The Essence of Japanese Design by Rossella Menegazzo and Stefania Piotti, which shows how Japanese artists such as Takeshi Hara or Koichi Ogawa, among many others, manage to bring such quality in the designs they create. I was also inspired by the Polish artist Wojciech Fangor. I love the way these artists deal with simplicity, structure, and color. 

Japanese inspiration. Images collected by Fabio Catapano.

I also want to show that generative art can be something else, not just the geometrical art that is usually represented by the cyberpunk community. Generative art does not need to be futuristic, it can be something else: it can be white, it can be slow, it can be dreamy… Slowness is also important to my work because nowadays everything goes very fast in our digital lives, social media promotes content that grabs attention in the first three seconds, and I intentionally try to go in the opposite direction, towards a calm and slow contemplation.

“I wanted to do the opposite of what you expect from computer art, very geometric and strict, with shapes but not colors. I wanted to show that a computer can dream.”

While you work with generative algorithms, the outputs of your work are usually still images, videos, and prints. How do you work with these different formats? What makes you choose which will be the final shape of a particular piece?

I have released only one project as a software, Origami, that generated a new output every time it was minted, in a limited edition. This was on (fx)hash, last June. I have never released an artwork as a software that someone can run on the computer, mostly because I find it complicated to explain and distribute. However, I think that, for instance, Colorem as work shouldn’t be a video, it should be software. Because the idea is that it can run there and just constantly change and never be the same. But that’s pretty much true for any generative artwork. So if one day I find a way to distribute those ideas through software, I will be happy to explore further and introduce a new layer of variability and new layer of randomness that is informed by an external factor. I would like the artwork to be detached from me at some point. 

Creating with a computer. Diagram by Fabio Catapano.

In my work I try to think in a more fluid way where I don’t care much about, for instance, the ratio, because ideally with a few clicks I can change the format. And if I work in a print on paper, then I choose a particular moment in the process which to me is interesting, and that can stand on itself as a static artwork. There is also an important process taking place when I create a print, which involves choosing the paper and seeing how the pigments react to the paper, and how the texture of the paper gives a new dimension to the colors. Actually, working with paper inspired me to introduce grainy textures in my digital artworks and try out gray backgrounds, which is something I am still experimenting with.

In this sense, something that is interesting is that artists today can work in a way that artists before couldn’t: today we can use social media as a lab, by posting tests and experiments and getting a response from your audience. To be honest, it is important for me what my followers say, to have that feedback, because I don’t create the artworks to just put them in a drawer, I want them to be seen.   

Another format that I want to work with is projection. As a VJ, I worked a really long time with a projector. And I’m missing right now that in the equation: I have a screen that emits light. I have a paper that receives light. But the projector does something else, it throws light on a surface. That is way more interesting because that again becomes not just an image, it becomes a lighting solution. And the reason why I haven’t tried that yet is because you need the right projector, the right space with the right amount of light, the right attention from the audience, and stuff like that. It’s nothing new, of course, but I would really like to explore that other avenue.

Fabio Catapano. Colorem 221025, 2022

You have been nominated as one of the ten most influential NFT artists in Italy. What has the NFT market brought to your practice, what do you find most interesting in distributing your work in this format?

There is this well-known saying: “beauty is in the eye of the beholder.” I’d say that also value is in the eye of the beholder. What this means is that, after NFTs, even JPEGs have gained value, a value that is supported by a collective agreement and a collective trust. So we decided that the JPEG from now on is not just a JPEG that one can find on the internet, but is a JPEG that can have a $1 value and tomorrow can increase that value to $2 and so on. So, what the NFT market brought me as an artist is a community and a collective trust that turned digital art into something valuable. We know that digital art has existed for many years, and that it has had its value, but suddenly, we have more attention. And it’s a good thing, because there are many projects, many museum shows, and many new things happening. To me it has also meant being able to proudly say: “I’m a digital artist,” and that people can understand what that means.

Value is in the eye of the beholder. Diagram by Fabio Catapano.

On the other hand, the NFT market brought me some revenue and the opportunity to focus on the practice itself. I launched my Genesis with SuperRare. The series was called Data Collector, and it referred to the fact that nowadays collectors are actually collecting data, a bunch of information that moves from one wallet to another. And suddenly this data has value, because we all agreed that it has. So I took these classic statues and made them into particles that move like data moves from one wallet to another. Beyond art, I think that NFTs and blockchain technology will be very important in many more aspects of our lives.

“What the NFT market brought me as an artist is a community and a collective trust that turned digital art into something valuable.”

Having participated in exhibitions in museums, galleries, and also metaverses, what would you highlight in these spaces as the most interesting for the presentation of your work?

I would say that the one space I don’t like is the metaverse as it is designed right now. I see no reason why I need to have a puppet moving in a digital world, watching very low resolution JPEGs. Why do you need a room at all? Additionally, what is being offered now looks like a cheap version of a video game. In fact, I’d say that Fornite and Minecraft are better “metaverses” than most projects I’ve seen.

Then when it comes to galleries, I have to say that most of the people running these spaces don’t know how to display digital art, because they don’t understand the medium. They don’t understand its physicality and the technology behind it. Now everyone wants to jump on this trend, but there are so many things that you need to consider: choosing the screens, the right environment, the lighting, and so forth. Still, I believe this will change and it will get better.

Fabio Catapano. Colorem 221207, 2022

How would you compare your creative process when working with a brand as a designer and when you are creating as part of your own artistic research?

An artist today has to be many things at once: a designer, a photographer, a marketer… There are a lot of things that probably have been there before, but today even more so because the market is more competitive. In my commercial projects, I didn’t actually create the work for them. Rather, the brand bought an artwork I had made and licensed it to use it in their communications and design. It is more and more common that art and design are combined or fused in some contexts. Design is great, but it can be very dry from a storytelling point of view, while art can push those boundaries and can explore new visions.

Fabio Catapano. Colorem Fragments v1, 2022

You have expressed interest in the possibility of displaying digital art on any screen, in a way that can be compared with street art taking over public space. From the perspective of sociology and anthropology, how do you see this presence of digital art evolving in the future?
It is clear to me that we are increasingly surrounded by screens and digital devices. We have quickly switched from having one television set per home to having multiple TVs, smartphones, tablets, and computers. These screens are also closer to us than the television set ever was, and they are not in one room anymore, they move with us and invade every space we inhabit, also the public space. Looking at films like Blade Runner, I see a future with screens everywhere, in which the content will be customized to every user. This can also happen from an artistic point of view, so for instance the content is actually related to the person that is looking at it. Similarly to what is happening now with NFTs, every person is identified by their wallet and carries their art collection with them, wherever they go. With connected screens, we will be able to take our art with us and enjoy it wherever we are.

What the ¶b≠ôÆ is Glitch Art?

Ask Me Anything by Pau Waelder

Ask Me Anything is a series of articles in the form of conversations, aiming to clarify certain terms, techniques, and debates related to digital art. Our Senior Curator puts 20 years of expertise in digital art at your service to answer your questions, taking only 5 minutes of your time.

Mark Amerika. Mobile Beach, 2007

Hey, what happened? The screen is broken!

What? Ah, don’t worry, the screen is fine. What you are seeing is glitch art.

This is art? But there’s something wrong with the image, it’s not loading properly. Did you check the cables?

Yes, precisely that is how the image is supposed to look. The glitches are what this type of art is about. 

Oh, you can’t be serious… How can this…? I mean, ah… I can’t concentrate with this image jumping around and… and getting all pixelated and broken…

I understand. Let me recommend a simple exercise: take a deep breath… and stop trying to fix the image, just look at the changing patterns on the screen. Don’t think of it as an image of a river, or the portrait of a woman, or whatever it is you are trying to see there. That is just an illusion. The image does not exist, it is just information interpreted by a program and displayed on a screen. 

The image does not exist, it is just information interpreted by a program and displayed on a screen.

I don’t understand. When I download an image to my computer, it is always an image. I see a thumbnail on the desktop, I click on it, and there it is: an image.

Yes, because it is interpreted as such every step of the way by the operating system. But try this simple trick:

1. Click on the filename. Change the extension from .jpg or .png to .txt

2. Open the file. The operating system will use a text editor.

3. You’ll see strings of weird characters that make no sense. Select some and erase them.

4. Save the file. Change the extension back to .jpg or .png

5. Open the file. The image has changed, it is probably broken or cut at some point.

This shows you what I explained before. When everything goes well, you are deceived into seeing a sharp, beautiful image, but when the data is corrupted, not properly transmitted, or there is an error in the program interpreting the data, this is what happens.

Ok I get it. But then, why is this art?

Glitch art is mainly about exploiting an error in a computer system, exposing its inner workings. It is hard to offer a specific definition, since there are many types of glitches and ways of interpreting what a “glitch” can be. Artist and researcher Rosa Menkman, who has extensively worked and theorized about glitch art, puts it this way: 

“A glitch is the most puzzling, difficult to define and enchanting noise artifact; it reveals itself to perception as accident, chaos or laceration and gives a glimpse into normally obfuscated machine language. Rather than creating the illusion of a transparent, well-working interface to information, the glitch captures the machine revealing itself. Glitch artists make use of the accident to ‘disfigure’ flow, image and information, or they exploit the void – a lack of information that creates space for deciphering or interpreting the process of creating (new kinds of) meaning.” [1]

Menkman argues that glitch art goes beyond the aesthetic or the machinic, revealing flaws that are also present in social, political, and knowledge systems. 

Raoul Hausmann, fmsbwtözäu, poster poem. 1918. Centre Pompidou, Musée national d’art moderne-Centre de création industrielle, Paris. Photo: Philippe Migeat

How can art be about error and nonsense?

Different art movements have explored the creative potential of errors and played with the absurd. Take for instance the Dadaists, who proclaimed the futility of art and their distrust of the art system. “Everything one looks at is a fake,” said Tristan Tzara in his Dada Manifesto of 1918. The Surrealists also wanted to disrupt the creative process and access less formal and rational ways of creating art by introducing randomness and spontaneity. 

Ok, but the Dadaists and Surrealists did not use computers.

No, but they faced structured systems with codes and an internal logic that they wanted to disrupt. Using random words to create a poem or creating one out of unintelligible words, such as “dll rrrrr beeeee bö fümms bö,” as Kurt Schwitters did in his Ursonate (1932), is akin to creating a glitch in language, understood as a formal system, and actually developing a different kind of language. Similarly, Glitch Art is not simply about creating a disruption in a computer system, but exploring the creative and expressive capabilities of integrating glitches into a digital image, video, text, sound, or software, among other mediums.

Rosa Menkman and Johan Larsby, Monglot (2011) glitch software.

Hold on, you’re saying that the artists create the glitches?

They sometimes appropriate them, or create the conditions for the glitches to happen. Musicians working with electronic synthesizers already experimented with disrupting the circuits to create noise. Also artists like the duo JODI, who are among the pioneers of net art, explored the aesthetic capabilities of the code hidden behind every website and also with the first modifiable versions of videogames such as Wolfenstein 3D or Quake. Some artists appropriate glitches happening while using computer software, as for instance Ant Scott, who in the early 2000s built a blog collecting screenshots and photos of software crashes and offered an initial definition of glitch art. Others initiate a process aimed at making glitches happen: this is the case of Mark Amerika’s experiment with mobile video in the late 2000s, forcing the capabilities of the mobile phone and the limitations of streaming HD video to generate “datamoshing,” a visible error caused by video compression. Rosa Menkman and Johan Larsby created in 2011 a glitch generator software called Monglot as a way of teaching about glitch at a moment in which it had been widely adopted as a purely aesthetic visual style in music videos and graphic design. More recently, glitch has been frequently adopted in the NFT art scene, as can be found in the work of Domenico Barra, or notably in generative art projects such as Kim Asendorf’s Sabotage. Then some artists are inspired by glitch art but move beyond it, as is the case of Yoshi Sodeoka, whose work connects noise music and glitch into an audiovisual language of his own.

Yoshi Sodeoka, Synthetic Liquid 7, 2022.

Wow, you went full art historian mode there.

I just scratched the surface.

Understood, but now that we have increasingly better high resolution screens,  hyperrealistic 3D simulations, and immersive virtual reality devices, what’s the point of glitch? Isn’t it a bit nostalgic and passé?

I would argue quite the opposite, that as the means to create a convincing virtual reality around us are getting better, we need to counter this sleek, fake hyper-reality with a bit of glitch. Besides, glitch is fun.

Yes, I have to say that once you “get it,” it’s quite fun.

Told ya.

[1] Rosa Menkman. The Glitch Moment(um). Amsterdam: Institute of Network Cultures, 2011, p.29-30, 33.

What is AI Art? A Conversation with ChatGPT

Pau Waelder

Image generated with Open AI’s DALL-E 2

ChatGPT is a machine learning model developed by OpenAI which was recently opened to the public as a research preview, allowing users to test it freely. Similarly to how DALL-E 2 draw widespread attention for its ability to create impressively realistic or stylized images based on text prompts, ChatGPT is now receiving similar responses, since it is capable of producing reasoned explanations and provide answers to follow-up questions, apparently grasping the context of what is being asked. As explained by OpenAI, the model was trained using supervised learning, based on conversations written by humans, in which a question and an answer were provided. Then a reward model was trained using several answers to the same question and having a human labeler rank them from best to worst. A reinforcement learning algorithm was used to optimize the model.  

ChatGPT is able to explain concepts and provide answers considering different aspects of an issue, often maintaining a neutral attitude that seeks to balance out opposing points of view. It usually concludes with a short paragraph that summarizes its previous statements. Seeking to test it, I asked a series of questions regarding art created with Artificial Intelligence programs. It seemed fitting to have an AI system explain AI art to humans. The result is the interview below.

Please note: ChatGPT provides long answers, sometimes using repetitive formulas. I have decided not to make any edits to the text in order to remain true to its outputs, and so to avoid forcing readers to scroll through a very long article I’ve included an index below. Feel free to click on the questions you find most interesting. They are grouped into themes to make browsing easier.

Definitions and history

Creativity

Production

Environmental impact

Aesthetics

Market

Longevity

What is AI art?

AI art is artwork created by artificial intelligence. This can take a variety of forms, but typically involves using algorithms and machine learning techniques to generate visual artwork, music, or other forms of creative expression. AI art often explores the intersection of art and technology, and can involve the use of neural networks and other forms of AI to create unique and unpredictable artistic experiences. Some AI art is created with the goal of imitating human creativity, while other AI art is intended to push the boundaries of what is possible with technology and create entirely new forms of art.

AI art is a type of art that is created using artificial intelligence algorithms and technology. AI art can take many forms, including digital art, paintings, sculptures, and other types of visual art. The algorithms and technology used in AI art can be trained to create original works of art or can be used to enhance or manipulate existing artwork. AI art can be created by both human artists and AI algorithms working together, or it can be created entirely by AI.

[ ↑ Back to top ]

Is AI art a subset of algorithmic art?

AI art can be considered a subset of algorithmic art. Algorithmic art is a term that refers to art that is created using algorithms and other computational processes. AI art is a specific type of algorithmic art that is created using AI technology and algorithms.

In this sense, AI art can be considered a subset of algorithmic art, because it is a specific type of art that is created using algorithms and computational processes. However, it is worth noting that not all algorithmic art is AI art, and there are many other types of algorithmic art that do not use AI technology.

Overall, while AI art is a type of algorithmic art, it is not the only type of algorithmic art, and there are many other types of algorithmic art that do not use AI technology.

[ ↑ Back to top ]

Not all algorithmic art is AI art, and there are many other types of algorithmic art that do not use AI technology.

What role does AI art play in the history of contemporary art?

AI art is a relatively new development in the history of modern and contemporary art, and its role is still evolving. However, AI art has already had a significant impact on the art world, and it has introduced new ideas and possibilities for artists and art enthusiasts.

One of the key ways in which AI art has impacted the art world is by challenging traditional notions of what constitutes art and who can create it. Because AI algorithms and technology can be used to create art without the need for human intervention, AI art has called into question the idea that art must be created by human artists in order to be considered “real” art.

Additionally, AI art has also introduced new possibilities for collaboration between human artists and AI technology. Many artists are now experimenting with using AI algorithms and technology as part of their creative process, and this has led to the emergence of new styles and genres of art that are not possible without the use of AI technology.

Overall, while AI art is a relatively new development in the history of modern and contemporary art, it has already had a significant impact on the art world, and it is likely to continue to play an important role in the future.

[ ↑ Back to top ]

How did AI art come about?

AI art has its roots in the field of artificial intelligence (AI), which is the study of how to create computer systems that can simulate human intelligence. The development of AI technology has been ongoing for several decades, and it has led to the creation of many different AI algorithms and technologies that can be used to solve complex problems and to automate various tasks.

One of the areas in which AI technology has been applied is in the creation of art. By training AI algorithms on large datasets of artwork, it is possible to create AI programs that can generate novel and original artworks based on the information they have learned.

The first known examples of AI art date back to the 1960s, when researchers began to experiment with using AI algorithms to create visual art. Since then, the field of AI art has continued to evolve and grow, and there are now many different AI algorithms and technologies that can be used to create a wide range of different types of art.

Overall, AI art has come about as a result of the development of AI technology and algorithms, and the application of these technologies to the field of art.

[ ↑ Back to top ]

Which was the first AI artwork ever created?

The first known examples of AI art date back to the 1960s, when researchers began to experiment with using AI algorithms to create visual art. One of the first known AI artworks was created by the computer scientist Harold Cohen in the early 1960s. Cohen developed a program called AARON, which was able to create abstract drawings based on a set of rules and instructions that Cohen provided.

Another early example of AI art was created by the artist Harold Cohen in the early 1970s. Cohen developed a program called AARON, which was able to create abstract drawings based on a set of rules and instructions that Cohen provided.

Overall, while the exact details of the first AI artwork are not well-documented, it is likely that one of the first AI artworks was created by Harold Cohen in the early 1960s.

[ ↑ Back to top ]

Can you name the main artists who have created art with AI?

There are many artists who have created art using AI technology and algorithms. Some of the most notable and well-known artists who have created AI art include:

  • Mario Klingemann: Klingemann is a German artist who creates AI art using a variety of different algorithms and technologies. His work often explores the intersection of art and technology, and he has exhibited his AI art in galleries and museums around the world.
  • Trevor Paglen: Paglen is an American artist who creates AI art that explores the relationship between technology, surveillance, and power. His AI artworks often take the form of large-scale installations and projections, and he has exhibited his work in galleries and museums around the world.
  • Anna Ridler: Ridler is a British artist who creates AI art using deep learning algorithms and machine learning techniques. Her AI art often explores themes of identity and the human condition, and she has exhibited her work in galleries and museums around the world.
  • Sougwen Chung: Chung is a Canadian artist who creates AI art that explores the relationship between humans and machines. Her work often involves collaborative projects between human artists and AI algorithms, and she has exhibited her AI art in galleries and museums around the world.

Overall, there are many artists who have created AI art, and their work covers a wide range of styles, techniques, and themes. These artists are at the forefront of exploring the potential of AI technology and algorithms to create art, and their work has been exhibited and recognized in galleries and museums around the world.

[ ↑ Back to top ]

Who creates the art, the artist or the machine?

Both the artist and the machine can create AI art. Some AI art is created entirely by AI algorithms, without any human input. In these cases, the algorithms themselves are the “artists” creating the artwork. Other AI art is created by human artists using AI technology and algorithms to assist them in the creative process. In these cases, the human artist is the creator of the artwork, but the AI technology and algorithms are an important part of the creative process. In some cases, AI art may be created by a collaboration between a human artist and AI algorithms, with both contributing to the final artwork.

[ ↑ Back to top ]

How can an algorithm be an artist? Isn’t creativity exclusively human?

The question of whether creativity is exclusively human is a complex one, and there is no definitive answer. Some people believe that creativity is a uniquely human trait, while others believe that it is possible for machines and algorithms to be creative.

One way to approach this question is to consider what we mean by creativity. Creativity can be defined as the ability to produce something new and original. While this definition may seem to exclude machines and algorithms, it is important to remember that these tools can be used in ways that were not possible before, and that they can help us to explore new ideas and create new and original works of art.

The algorithms themselves can be seen as creative agents, capable of producing new and original works of art.

In the case of AI art, the algorithms and technology used can be trained on vast amounts of data and can be designed to explore a wide range of possibilities. This can allow the algorithms to generate artwork that is novel and original, even if it is created entirely by a machine. In this sense, the algorithms themselves can be seen as creative agents, capable of producing new and original works of art.

Ultimately, whether we consider AI art to be the work of a human artist, the machine, or a collaboration between the two, will depend on our individual perspectives and values. What is important is that AI art allows us to explore new possibilities and to create art in ways that were not previously possible.

[ ↑ Back to top ]

Why would an AI program create art for humans? Wouldn’t it be more interesting to create art that only another AI can understand?

AI programs can create art for humans or for other AI programs, depending on the specific goals and objectives of the program and the designers who created it. In some cases, an AI program may be designed to create art that only another AI program can understand, for example, as part of a research project or to explore the capabilities of AI algorithms.

However, in many cases, AI programs are designed to create art that can be understood and appreciated by humans. This is because art is often a way for people to express themselves and to communicate with others, and creating art that can be understood and appreciated by humans is an important part of this process.

Additionally, creating art that can be understood and appreciated by humans can also help to demonstrate the capabilities of AI technology and algorithms. By creating art that people can understand and appreciate, AI programs can show the potential of AI technology to generate novel and interesting ideas, and to explore new creative possibilities.

Overall, whether an AI program creates art for humans or for other AI programs will depend on the specific goals and objectives of the program and the designers who created it.

[ ↑ Back to top ]

Who should sign an artwork created with AI?

The question of who should sign an artwork created with AI is a complex one, and there is no definitive answer. The specific approach that an artist takes when signing AI-generated art will depend on the individual artist and their goals and values.

In some cases, an artist may choose to sign an artwork created with AI themselves, as the primary creator of the artwork. This could be because the artist believes that they were the primary driving force behind the creation of the artwork, and that the AI program was only a tool that was used to assist in the creative process.

On the other hand, some artists may choose to sign the artwork with the name of the AI program that was used to create the art. This could be seen as a way of recognizing the contributions of the AI program to the creation of the artwork, and it could be an interesting and unique way to explore the relationship between human artists and AI technology.

In some cases, an artist may choose to sign an artwork created with AI using both their own name and the name of the AI program. This could be seen as a way of recognizing the contributions of both the human artist and the AI program to the creation of the artwork.

Ultimately, the approach that an artist takes when signing AI-generated art will depend on their individual goals and values. Some artists may choose to sign the artwork themselves, while others may choose to sign it with the name of the AI program, and still others may choose a combination of both. What is important is that the artist is transparent about their approach and that they are fair and reasonable in their dealings with the AI program and any other parties involved.

[ ↑ Back to top ]

If AI programs are written by humans, isn’t AI art entirely created by humans, in the end?

It is true that AI programs are written by humans, and that the technology and algorithms used in AI art are created by humans. However, it is important to remember that AI programs are not just simple, pre-programmed tools that are used by human artists. Instead, AI programs are capable of learning from data and of making decisions and predictions based on that learning.

In the case of AI art, this means that the AI algorithms and technology are not just blindly executing instructions that have been written by humans. Instead, the AI algorithms are able to generate novel and original artwork based on the information they have learned from the training data.

This means that while AI art is not created entirely by humans, it is also not created entirely by AI algorithms. Instead, it is the result of a collaboration between human creators and AI technology, with both playing important roles in the creation of the final artwork.

Overall, while AI art is not created entirely by humans, it is also not created entirely by AI algorithms. Instead, it is the result of a collaboration between human creators and AI technology.

[ ↑ Back to top ]

Which tools are there to create AI art?

There are many different tools and technologies that can be used to create AI art. Some of the most commonly used tools and technologies for creating AI art include:

  • Machine learning algorithms, such as neural networks, that can be trained on data to generate art
  • Generative adversarial networks (GANs), which can be used to create realistic images and other types of art
  • Deep learning algorithms, which can be used to create detailed and complex artworks
  • Computer vision algorithms, which can be used to analyze and interpret visual information
  • Natural language processing algorithms, which can be used to generate text-based art, such as poetry or fiction

These are just a few examples of the tools and technologies that can be used to create AI art, and there are many other options available. The specific tools and technologies used to create AI art will depend on the individual artist and their goals and objectives.

[ ↑ Back to top ]

Which are the most commonly used AI programs?

There are many different AI programs and technologies that are commonly used for creating AI art. Some of the most commonly used AI programs include:

  • GANs (Generative Adversarial Networks): GANs are a type of AI algorithm that is often used for creating AI art. GANs consist of two neural networks – a generator network and a discriminator network – that work together to generate novel and original artworks based on a dataset of existing artwork.
  • Style transfer algorithms: Style transfer algorithms are a type of AI algorithm that is used to apply the style of one image to another image. This can be used to create AI art by applying the style of a famous painting or artist to a new image, creating a novel and original artwork that combines the style of the original with the content of the new image.
  • Evolutionary algorithms: Evolutionary algorithms are a type of AI algorithm that is inspired by the principles of natural selection and evolution. These algorithms can be used to create AI art by starting with a population of random artworks, and then using a set of rules and criteria to select the “fittest” artworks from each generation, allowing the art to evolve over time.
  • Deep learning algorithms: Deep learning algorithms are a type of AI algorithm that is based on the structure and function of the human brain. These algorithms can be used to create AI art by training the algorithm on a dataset of existing artworks, and then using the trained algorithm to generate novel and original art.

Overall, there are many different AI programs and technologies that are commonly used for creating AI art, and the specific program that is used will depend on the goals and objectives of the artist.

[ ↑ Back to top ]

Which programming languages are used to create AI art?

There are many different programming languages that can be used to create AI art. Some of the most commonly used programming languages for creating AI art include:

  • Python: Python is a popular and versatile programming language that is often used for AI and machine learning applications. It has a large and active community of developers and users, and it offers a wide range of libraries and tools for working with AI algorithms and technology.
  • R: R is a programming language that is commonly used for statistical computing and data analysis. It has a strong focus on data visualization and is often used for creating AI art that involves the generation of complex images and other visual elements.
  • JavaScript: JavaScript is a popular programming language that is often used for web development. It is a versatile language that can be used for a wide range of different AI and machine learning applications, including the creation of AI art.
  • Java: Java is a popular and widely used programming language that is often used for developing large-scale, enterprise-level applications. It is a versatile language that can be used for creating AI art, and it has a large and active community of developers and users.

These are just a few examples of the programming languages that can be used to create AI art, and there are many other options available. The specific programming language that is used to create AI art will depend on the individual artist and their goals and objectives.

[ ↑ Back to top ]

Does an artist need to code to create this type of art?

An artist does not necessarily need to know how to code in order to create AI art. While knowledge of coding can be helpful for creating AI art, it is not a requirement.

There are many different tools and technologies available that can be used to create AI art without needing to know how to code. For example, there are several user-friendly AI platforms and tools that can be used to create AI art without needing to know how to code. These tools often provide a simple, intuitive interface that allows users to create AI art by dragging and dropping different elements and using pre-defined algorithms and settings.

There are many artists who collaborate with coders and other technical experts to create AI art.

Additionally, there are many artists who collaborate with coders and other technical experts to create AI art. In these cases, the artist can focus on the creative aspects of the art, while the coders and technical experts handle the technical aspects of creating the AI art.

Overall, while knowledge of coding can be helpful for creating AI art, it is not a requirement, and there are many different tools and technologies available that can be used to create AI art without needing to know how to code.

[ ↑ Back to top ]

Do you need a very powerful computer?

The specific hardware requirements for creating AI art will depend on the specific AI algorithms and technologies that are being used, as well as the complexity and scale of the artworks being created. In general, however, creating AI art often requires a significant amount of computing power, and it is not something that can be done on a standard desktop or laptop computer.

In many cases, creating AI art will require the use of a high-performance computer with a powerful processor and a large amount of RAM. This type of computer will be able to handle the complex computations and data processing that is required for creating AI art, and it will be able to generate high-quality artworks in a reasonable amount of time.

Additionally, some AI algorithms and technologies may require the use of specialized hardware, such as graphics processing units (GPUs), in order to run efficiently and to generate high-quality artworks. In these cases, a computer with one or more powerful GPUs will be required in order to create AI art.

Overall, while the specific hardware requirements for creating AI art will vary depending on the algorithms and technologies being used, it is generally a resource-intensive task that requires a powerful computer with a large amount of computing power.

[ ↑ Back to top ]

Does this mean, then, that only a handful of artists can create AI art?

The ability to create AI art is not limited to a small group of artists. While creating AI art does require a certain level of technical expertise and access to the necessary tools and technologies, these are not necessarily difficult or expensive to obtain.

There are many different tools and technologies available that can be used to create AI art, and many of these are freely available or can be purchased at a low cost. Additionally, there are many online resources and tutorials that can help artists to learn the technical skills and knowledge that are needed to create AI art.

Anyone who is interested in creating AI art can learn the necessary skills and obtain the necessary tools and technologies

As a result, while creating AI art does require some technical expertise and access to the necessary tools and technologies, it is not something that is limited to a small group of artists. Anyone who is interested in creating AI art can learn the necessary skills and obtain the necessary tools and technologies, and there are many different resources available to help them do so.

[ ↑ Back to top ]

Will AI make artists lazy?

The use of AI technology in the creation of art will not necessarily make artists lazy. In fact, AI technology can be a powerful tool that can assist artists in the creative process and help them to explore new ideas and possibilities.

However, it is important to remember that AI technology is just a tool, and it cannot replace the creativity and vision of human artists. While AI algorithms and technology can be used to generate suggestions or ideas for artwork, it is ultimately up to the human artist to decide which ideas to pursue and how to incorporate them into their work.

Additionally, the use of AI technology in the creation of art can also help to inspire and challenge artists to think in new ways and to push the boundaries of what is possible. In this way, the use of AI technology can actually encourage artists to be more creative and innovative, rather than making them lazy.

Overall, while the use of AI technology in the creation of art can be a powerful tool, it will not necessarily make artists lazy. Instead, it can assist artists in the creative process and help them to explore new ideas and possibilities.

[ ↑ Back to top ]

Can it lead to an overproduction of art?

The use of AI programs in the creation of art could potentially lead to an overproduction of art. Because AI algorithms and technology can be used to generate art quickly and efficiently, it is possible that the use of AI could result in a large number of artworks being produced in a short period of time.

However, it is important to remember that the use of AI in the creation of art is not a new development, and it has been happening for several years. As a result, the art market has already had time to adjust to the use of AI technology, and the potential for overproduction of art is not as great as it may seem.

Additionally, it is worth considering that the quality and value of AI-generated art will vary, and not all AI-generated art will be of equal quality or worth. This means that while there may be a large number of AI-generated artworks produced, not all of them will be considered valuable or worth collecting.

Overall, while the use of AI programs in the creation of art could potentially lead to an overproduction of art, the potential for overproduction is not as great as it may seem, and the quality and value of AI-generated art will vary.

[ ↑ Back to top ]

What is the environmental impact of all of this?

The environmental impact of creating AI art will depend on a variety of factors, including the specific tools and technologies that are being used, the amount of computing power that is required, and the conditions under which the art is created.

In general, however, creating AI art can be a resource-intensive process that requires a significant amount of computing power. This can have an impact on the environment, both in terms of the energy that is consumed and the potential for electronic waste.

Additionally, some AI algorithms and technologies may require the use of specialized hardware, such as graphics processing units (GPUs), which can be expensive to purchase and difficult to dispose of in an environmentally-friendly manner.

Overall, the environmental impact of creating AI art will depend on the specific tools and technologies that are being used, and it is important for artists to consider this impact when creating AI art. There are many different ways to reduce the environmental impact of creating AI art, such as using energy-efficient hardware and recycling or repurposing old or obsolete technology.

[ ↑ Back to top ]

And if in the future many more artists can use AI programs and these are resource-intensive, does this mean that AI art can be bad for the planet?

If the use of AI programs for creating art becomes more widespread in the future, it is possible that it could have a negative impact on the environment. As more and more artists use AI algorithms and technology to create art, the demand for computing power and specialized hardware will increase, and this could lead to an increase in energy consumption and electronic waste.

However, it is important to note that the potential negative impact of creating AI art on the environment is not unique to AI art, and it is something that is also a concern for other forms of art making that are resource-intensive. Additionally, it is also possible that advancements in AI technology and algorithms could lead to more efficient and environmentally-friendly ways of creating AI art, which could help to mitigate the potential negative impact on the environment.

Overall, while the use of AI programs for creating art could potentially have a negative impact on the environment, this is not unique to AI art, and it is something that can be addressed through the use of efficient and environmentally-friendly technology and practices.

[ ↑ Back to top ]

Do people really like AI art, or are they in it just for the novelty?

It is difficult to say whether people truly like AI art, or if they are simply interested in its novelty. There are certainly many people who are intrigued by the idea of AI art, and they may be drawn to these artworks because they are something new and different.

However, there are also many people who genuinely appreciate the aesthetic qualities of AI art, and who enjoy the unique perspective and ideas that these artworks can offer. Some people may find AI art to be thought-provoking and engaging, and they may enjoy the challenge of trying to understand and interpret these artworks.

Overall, it is likely that people’s reactions to AI art will vary, and there will be some people who are interested in it simply because it is novel, while others will genuinely appreciate and enjoy the art itself.

[ ↑ Back to top ]

Why do most AI generated artworks produce nightmarish images?

Most AI-generated artworks do not produce nightmarish images. In fact, many AI-generated artworks are quite beautiful and interesting. However, it is true that some AI-generated artworks can produce images that some people might find disturbing or unsettling.

There are a few reasons why this may be the case. One reason is that AI algorithms are designed to explore a wide range of possibilities, and this can sometimes result in the generation of images that are unusual or unexpected. Another reason is that the training data used to teach the AI algorithm may contain images that some people find disturbing, and this can influence the types of images the algorithm generates.

It is also worth noting that what one person finds disturbing or nightmarish may not be the same for another person. What is considered “nightmarish” is a matter of personal opinion, and different people may have different reactions to the same image.

Overall, while some AI-generated artworks may produce images that some people find disturbing, this is not the case for all AI-generated art, and many AI-generated artworks are beautiful and interesting.

[ ↑ Back to top ]

Why does GAN generated art look like bad painting?

Not all GAN-generated art looks like bad painting. GANs, or Generative Adversarial Networks, are a type of AI algorithm that can be used to generate art, and the specific appearance of the art generated by a GAN will depend on the training data and other factors.

It is true that some GAN-generated art may appear to be low-quality or “bad” painting. This can happen for a few reasons. One reason is that the training data used to teach the GAN may not be of high quality, which can result in the GAN generating art that is less detailed or less realistic than it could be. Another reason is that the GAN may not be trained for a long enough time, or it may not be trained on a large enough dataset, which can also result in lower-quality art.

It is also worth noting that what one person considers to be “bad” painting may not be the same for another person. Beauty and quality are subjective, and different people may have different opinions about the same artwork.

Overall, while some GAN-generated art may appear to be low-quality or “bad” painting, this is not the case for all GAN-generated art, and the specific appearance of GAN-generated art will depend on a variety of factors.

[ ↑ Back to top ]

Should we reconsider our notions of art when exploring art made with AI?

The use of AI technology and algorithms to create art can certainly challenge our traditional notions of art and the creative process. Because AI algorithms and technology can be used to create art without the need for human intervention, AI art calls into question the idea that art must be created by human artists in order to be considered “real” art.

Additionally, AI art can also challenge our ideas about what makes art valuable and interesting. Because AI algorithms are able to generate artworks that are highly detailed and complex, they can create art that is aesthetically pleasing and engaging. This can challenge the idea that art must be created by human artists in order to have value or meaning.

Overall, exploring AI art can certainly lead us to reconsider our notions of art and the creative process. By challenging our traditional ideas about what constitutes art and what makes art valuable, AI art can help us to see art in new and interesting ways.

[ ↑ Back to top ]

Is AI art a good investment?

The potential for AI art to be a good investment will depend on a variety of factors, including the specific artworks being considered, the reputation and track record of the artist, and the overall state of the art market.

In general, however, AI art is still a relatively new and untested area of the art market, and there is a lot of uncertainty around the value and potential appreciation of AI artworks. Many buyers and collectors may be hesitant to invest in AI art due to its novelty and the lack of a track record for these types of artworks.

Additionally, the market for AI art is still relatively small, and there may not be a large number of buyers and collectors who are interested in these artworks. This could limit the potential for AI art to appreciate in value over time, and it could make it difficult for investors to find buyers for their AI artworks.

Overall, while there may be some potential for AI art to be a good investment, there are also many factors that could limit its value and potential appreciation. It is important for investors to carefully consider these factors before deciding to invest in AI art.

[ ↑ Back to top ]

When selling art created with AI, should the artist split their benefits with the AI program?

Whether or not an artist should split their benefits with an AI program when selling art created with AI will depend on the specific circumstances and the agreement between the artist and the AI program. In some cases, the artist and the AI program may have agreed to share the benefits from the sale of the art, in which case the artist would need to split their benefits with the AI program.

However, in other cases, the artist and the AI program may not have agreed to share the benefits from the sale of the art. In these cases, the artist would not be obligated to split their benefits with the AI program, and they would be entitled to keep all of the proceeds from the sale of the art.

Ultimately, whether or not an artist should split their benefits with an AI program when selling art created with AI will depend on the specific circumstances and the agreement between the artist and the AI program. It is always a good idea for the artist and the AI program to carefully consider and agree on these issues before starting to create art together.

[ ↑ Back to top ]

Will AI-generated art be preserved in the same way as paintings and sculptures? Will it last for centuries?

It is not clear whether AI-generated art will be preserved in the same way as paintings and sculptures. The preservation of art is a complex process that involves a variety of factors, including the materials used to create the art, the conditions in which the art is stored, and the care and attention given to the art over time.

In the case of AI-generated art, the specific approach to preservation will depend on the materials and technology used to create the art. For example, some AI-generated art may be created using digital media, such as video or audio, which may be more susceptible to degradation or loss over time. In these cases, the art may require special preservation techniques in order to ensure that it is preserved for future generations.

On the other hand, some AI-generated art may be created using physical materials, such as paint or clay, which may be more durable and easier to preserve over time. In these cases, the art may be more likely to last for centuries, just like traditional paintings and sculptures.

Overall, the preservation of AI-generated art will depend on the materials and technology used to create the art, as well as the care and attention given to the art over time. It is not clear whether AI-generated art will be preserved in the same way as traditional paintings and sculptures, and it will ultimately depend on the specific artworks and the circumstances in which they are created.

[ ↑ Back to top ]

Will it replace other forms of art making?

It is unlikely that AI-generated art will replace other forms of art making. While AI technology and algorithms can be used to create interesting and novel artworks, they are not a replacement for the creativity and vision of human artists.

Additionally, there are many different forms of art making, and the use of AI technology and algorithms is just one of many different approaches that artists can take. As a result, it is unlikely that AI-generated art will replace other forms of art making, and it will likely continue to exist alongside other forms of art making.

Overall, while AI technology and algorithms can be used to create interesting and novel artworks, it is unlikely that AI-generated art will replace other forms of art making. Instead, it is likely that AI-generated art will continue to exist alongside other forms of art making, as one of many different approaches that artists can take.

[ ↑ Back to top ]

Do you think that AI art will get better with time?

It is likely that AI art will get better with time, as AI technology and algorithms continue to advance and improve. As AI algorithms become more sophisticated and are able to learn from larger and more diverse datasets, they will be able to generate more detailed and realistic artwork.

Additionally, as more and more artists begin to experiment with AI technology and algorithms in their creative process, they will develop new techniques and approaches that can help to improve the quality of AI-generated art. This could lead to the emergence of new styles and genres of AI art, and it could help to push the boundaries of what is possible with AI technology.

Overall, it is likely that AI art will continue to improve and evolve over time, as AI technology and algorithms advance and as more artists begin to experiment with AI in their creative process.

[ ↑ Back to top ]

Over its history, AI technology has frequently promised more than it could deliver. Is this the case with AI art right now?

AI technology has indeed often promised more than it could deliver in the past. However, this is not necessarily the case with AI art. While there are limitations to what AI technology and algorithms can currently do in terms of creating art, there are also many interesting and impressive examples of AI art that demonstrate the potential of this technology.

One of the key limitations of AI art is that it is currently unable to create art that is truly original and creative in the same way that human artists can. AI algorithms and technology are still limited in their ability to understand and generate novel ideas, and most AI-generated art is based on existing data and patterns.

However, despite this limitation, there are many examples of AI art that are impressive and thought-provoking. Some AI algorithms and technologies are able to create art that is highly detailed and complex, and there are many interesting and novel ways in which AI technology can be used to create art.

Overall, while there are limitations to what AI technology and algorithms can currently do in terms of creating art, there are also many interesting and impressive examples of AI art that demonstrate the potential of this technology.

[ ↑ Back to top ]

Tamiko Thiel: when art augments reality

Pau Waelder

Tamiko Thiel is a pioneering visual artist exploring the interplay of place, space, the body and cultural identity in works encompassing an artificial intelligence (AI) supercomputer, objects, installations, digital prints in 2D and 3D, videos, interactive 3d virtual worlds (VR), augmented reality (AR) and artificial intelligence art. In this conversation, that took place on the occasion of the launch of her solo artcast Invisible Nature curated by DAM Projects, she discusses the evolution of technology over the last three decades, her early AR artworks and her commitment to create art that invites reflection.

Your work is characterized by the use of Virtual Reality and Augmented Reality technologies, with pioneering artistic projects. Which technical challenges have you met over the last decades in the creation of these projects?

My first exposure to real time computer graphics was at MIT when I was a graduate student in 1982. At that point,  writing everything from scratch, you had to program for a semester in order to get a cube that would rotate in three dimensions. Coming from an artistic and design background, I felt that this is not really where I want to create art right now, I’ll have to wait. And then about 10 years later, in 1992, Silicon Graphics came out with OpenGL, an open standard that made it possible to do real time interactive computer graphics on PCs. Then in 1994, I started to work with a company called Worlds Incorporated, which was taking this new potential for doing interactive 3D computer graphics on PCs connected to the Internet. At that time I worked with Steven Spielberg on the Starbright World Project, the first 3d online Metaverse for ill children, a virtual world where they could momentarily escape the space of the hospital. This first Metaverse was running on high end PCs, with fast connections provided by various high tech companies, but it was still unaffordable for people at home. The project ran from 1994 to 1997, and at that time the technology was still unstable.

“It takes maybe 10 to 15 or 20 years to get there instead of the five years that all the evangelists predict.”

So you must jump from that to 10 years later, when Second Life came about and this time people had more powerful graphic cards and ADSL connections at home. Second Life was able to create a much more developed virtual world, which seemed like the next phase of the Internet and all the corporations wanted to move there. Then around 2007-2008, probably due to the financial crisis, but also the rise of Facebook, which allowed people to share photographs on a common platform, the excitement around Second Life fizzled. And then if we jump another 15 years more, we find ourselves with still bigger processing power and faster connections. Now it is much easier to create virtual worlds than it was 25 years ago, partly because it is easier to create 3D objects, or you can buy them online, and also because of the advancements in hardware and software.

So, as you can see, big steps come on later than you think. It takes maybe 10 to 15 or 20 years to get there instead of the five years that all the evangelists predict. People talked about virtual reality at that time in the 90s as being a failure, just as they talked about AI being a failure in the 80s and 90s. And what they don’t realize is that technological change takes longer than you’d want it to. So it’s wrong to call it a failure. It’s more like: “Okay, we have to keep on working on this.” And if you wait long enough, 20 years or so, then you’ll get it.

Video by Tamiko Thiel, Rewilding the Smithsonian, 2021.  Created with the ReWildAR AR app (2021, with /p). Commissioned by curator Ashley Molese for the 175th anniversary of the Smithsonian Institution, in the Arts and Industries Building.

Interactive 3D and VR artworks such as Beyond Manzanar and Virtuelle Mauer have a strong narrative component as they explore historic and political issues. What is the role of the user in constructing these narratives?

Basically, what I tend to do is look for key moments that I think can be expressed and experienced and communicated better in virtual reality than in other media. In Beyond Manzanar, for me that was the moment where you’re sitting in a beautiful Paradise Garden, and  you see the mountains covered in snow around you. This is an image from the book Farewell to Manzanar by Jeanne Wakatsuki Houston: the author tells that when she was an eight-year-old and she was imprisoned in the camp, she would pick a viewpoint where she couldn’t see any guard towers, any barracks, nor barbed wire fence. And she tried not to move for the longest time, because as long as she didn’t move, she could preserve the illusion she was in paradise of her own free will. As soon as she moved, she saw that she was indeed in prison, she fell out of paradise back into prison. And so this moment occurs in Beyond Manzanar, where you enter a garden which is framed by the beautiful mountains. But if you go too deeply into the garden, then boom! – the garden disappears, and you’re back in the prison camp. 

Beyond Manzanar (2000). An interactive virtual reality large projection installation by Tamiko Thiel and Zara Houshmand.

My second piece, The Travels of Mariko Horo, has a much more complicated structure with several heavens imagined by a time traveling 12th century Japanese female artist inventing the West in her imagination. In this work there is this moment when you enter the different churches, which are in fact liminal spaces between the prosaic everyday life and the world of the supernatural. When you cross that threshold, Mariko Horo takes you to heaven or takes you to hell. But it is always by your own free will, you’re always making the decision and making the motions that all of a sudden present you with the consequences of your decisions. 

The Travels of Mariko Horo (2006). An interactive 3D virtual reality installation
By Tamiko Thiel, with original music by Ping Jin.

Finally, in Virtuelle Mauer/ReConstructing the Wall, I introduced some characters that take you in a time travel through the history of the Berlin Wall. But if you cross over the invisible boundaries of the former Death Strip,, then you fall back into the 80s, the wall appears behind you. So in all three pieces, it’s really about letting you feel like you have the freedom to go anywhere you want and do anything you want to do. But then you must face the consequences of these actions, which might take you to Paradise or they might take you to prison. But you always feel like it was your decision to go there, or to examine this, and therefore you’re sort of complicit with whatever happens to you.

Video by Tamiko Thiel, Atmos Sphaerae, 2022. Created with the Atmos Sphaerae VR artwork, 2021.

Creating artworks in Augmented Reality offers the possibility of intervening in institutional art spaces uninvited, as you did at MoMA, the Venice Biennale, or TATE Modern, or within a curated exhibition, as is the case with Unexpected Growth, which was shown at the Whitney Museum of American Art. Can you tell us about the creative process in both cases and your experience with “guerrilla” interventions versus curated exhibitions using the same technology?

Let’s start with We AR in MoMA, an augmented reality project created by Sander Veenhof and Mark Skwarek that took place at the Museum of Modern Art in New York on October 9th, 2010. The iPhone had been around since 2007, as well as other smartphone models, and in the course of 2009 both Mark and Sander had been playing around with the technology and developing AR artworks on mobiles in public spaces. And then they realized they could also geolocate the artworks to have them appear in certain spaces, so they came up with this idea of doing the spectacular intervention at MoMA. I knew Mark from the art circles before we had both shown in the 2009 Boston CyberArts Festival, so he dropped me and many of his artist friends an email saying: “Hey, we’re able to do this now. Send me some content and I’ll put it up and we’ll do a flashmob at MoMA.” They were not asking permission from MoMA. They didn’t know about it, and they couldn’t stop us. At that time, people didn’t realize that location based AR could be used anywhere. But then it turned out that they did find out about it beforehand, because Mark and Sander were doing the intervention as part of a citywide public art festival of psychogeography, so it was publicly announced by the festival all on Twitter. MoMA actually posted a link to the festival and said: “Hey, looks like we’re going to be invaded by AR,” which was very forward thinking and embracing this new development in technology. So, that was incredibly good publicity. It was a really exciting moment, when we realized that there were these possibilities that the new technology was bringing about. I would say this was a path breaking exhibit in the history of media. 

After this intervention at MoMA, the artists who took part in it created the group Manifest.AR. We were thinking about where to do the next incursion, and since I live in Munich, which is a six and a half hour beautiful train ride to Venice, I suggested we go to the Venice Biennale in 2011. It was a group of about eight of us. We created virtual pavilions that were located inside the Giardini and at Piazza San Marco, so that people who didn’t want to spend money to enter the Giardini could also experience the artworks in a public space, because the Giardini, with its walls around it is a classically closed curatorial space. The point was that having your work shown at the MoMA or the Biennale is a sign of achievement, of having been able to enter these closed curatorial spaces, but now with AR interventions that was not true anymore, anybody can place their artwork wherever they want. But then people’s reaction was: “Oh, wow, you’re showing in the Venice Bienniale, you’ve made it!” Then we told them we hadn’t been curated and that we were doing this of our own accord, but people would respond: “Oh, that’s even better.” So we thought we were doing this sort of Duchampian breakdown of all sorts of structures that define prominence in the art world. Duchamp exhibited his famous urinal not to say that an artwork becomes an artwork when an artist says it’s an artwork and places it in an art context, but to state that this whole thing is ridiculous. 

“The point of putting our artworks at the MoMA or the Venice Biennale was that with Augmented Reality anybody can place their artwork wherever they want.”

These interventions gave us a feeling of exhilaration that we could hold our own exhibits anywhere, even though no one in the art world was interested in media art at that moment. And we could also play off site. Because AR is a site-specific medium, you’re always dealing with the site. And that opened up whole new possibilities. Interestingly, shortly after that, George Fifield, the Boston Cyberarts director, arranged our first invitational show at the ICA Boston. This was in April of 2011. The ICA curators didn’t understand how the technology works. They said: “Okay, you can do it on the first floor, but not on the second floor. You can do it in the lobby and outside, but you can’t do it inside of the galleries.” And we had to tell them it doesn’t work that way. The artworks are triggered by a GPS location which has a radius of a mile or so.

As for showing Unexpected Growth at the Whitney Museum of American Art in New York, it was thanks to Christiane Paul, the adjunct curator of media art at the museum. I have known her for quite a while, I think since about 2002, and she has curated me into many of her shows over the years in different venues, but this was the first time at the Whitney. She had of course done the visionary work of creating Artport, a space for net art supported by the museum, but she still hadn’t placed an AR artwork inside the museum. Then in 2014 she commissioned an AR intervention by Will PappenheimerProxy, 5-WM2A, at the Whitney’s final closing gala for the old Breuer Building. So when she contacted me in 2018 to create an artwork to show at the Whitney, she had already gone through the process of introducing this technology in the museum. She invited me to create an artwork for the terrace, which is 20 by 10 meters in size. Since this was a big show, I needed to make sure that the piece would work properly, so I contacted the people at Layar, the AR app we had used in all our previous interventions, but by then they told me they would shut down their servers, so I had to find a solution. My husband Peter Graf, who is a software developer, told me he could write an app for me. We worked side by side on this project, so I realized he should co-author it with me and he came up with the artist name /p, so now the artwork is in the Whitney collection credited to myself and /p in collaboration. Now the artwork is not officially on view at the museum, but if you download our app and go to the terrace you can still experience it.

Video by Tamiko Thiel, Unexpected Growth (Whitney Museum Walk1), 2018. Created with the Unexpected Growth AR app (2018, with /p), commissioned by and in the collection of the Whitney Museum of American Art.

There is also the fact that the artworks are invisible, so how did you communicate their existence and solve the technical problems associated with having the proper device, software, and connectivity?

At the Venice Biennale intervention, Sander got in touch with Simona Lodi, director of the Share Festival Turin , and the artist group Les Liens Invisibles, who were together mounting another AR intervention The Invisible Pavilion. We created a common postcard with QR codes to download the app. We also invited people to come to Piazza San Marco and the Giardini on certain days and times and help them experience the artworks. Collaborating with the team from the Share Festival was a huge help, because those of us from outside of Italy had terrible connection issues, and also it was the first Venice Biennale when hordes of people were walking around with their cellphones, overloading the networks. The Vodafone network actually broke down in the Venice area. Gionatan Quintini of Les Liens Invisible loaned me his smartphone to show my work, and this is an example of the kind of collaborative atmosphere that you get in the media art world and that is not that easy to find in the contemporary art world. By connecting our networks with those of Share, we got a lot of publicity for both the interventions in MoMA and in the Venice Biennial, and that put AR in this early time into the media art history books, and therefore into the art canon.

Video by Tamiko Thiel, Sponge Space Trash Takeover (Walk1), 2020. Created with the VR space “Sponge Space Trash Takeover” courtesy of Cyan Planet and xR Hub Bavaria.

The artworks in your latest artcast titled Tamiko Thiel: Invisible Nature all deal with different aspects of our intervention of the natural environment. What has been your experience addressing this subject in terms of the balance between the artistic expression and the message you want to convey? 

Perhaps because I started out as a product designer, with the Connection Machine being what I consider my first artwork, I am always thinking of my audience and how to communicate with them. When I approach political or social issues, such as climate related problems, I know that the really shocking photographs (for instance, a dead bird whose stomach is full of plastic) give you an immediate emotional jolt, and make you realize that this is a serious problem. But I personally cannot look at those images day after day, time and time again. So, balancing my work as an artist with my desire to communicate, sometimes I think that I should be a journalist, so I could write articles that can go into the details in much more depth. But how often do you reread the same article? So I think that what is truly the value of an artist making work about a subject such as these is that the art work can be exhibited time and time again, in different places around the world. And people might see it again, they may be willing to look at it time and time again, but not if it is something horrible and shocking. I’m traumatized enough by what’s happening in the world, so I’d rather create something that is not traumatizing for people, but at the same time it makes you think.

“What is truly the value of an artist making work about a subject such as these is that the art work can be exhibited time and time again, in different places around the world.”

For instance, Unexpected Growth shows a very colorful, bright coral reef on the terrace of the Whitney. And when you look at it more closely, you realize this beautiful coral reef is made out of virtual plastic garbage. So people are confronted with something that is really beautiful, but after a while they realize that they are surrounded by garbage. So my strategy is to seduce people with a strong visual composition that is captivating. And then, when I’ve got their attention, I let them figure out that there is actually something else going on here, if you actually spend the time to look at it.

Video by Tamiko Thiel, Evolution of Fish – Anthropocene Daze #1, 2019. Created with the AR app Evolution of Fish (2019, with /p).