Patrick Tresset: “I thought that I could put back emotions using computers”


Pau Waelder

Patrick Tresset is an artist who explores a form of mediated creation in which his drawing style is transferred to a set of robotic drawing machines or applied to video footage, creating artworks that are curiously algorithmic and spontaneous at the same time. He is also the co-founder of alterHEN, an eco-friendly NFT platform and artist community whose artists have participated in a previous artcast on Niio. Tresset has also presented his series Human Study in a recently launched solo artcast.

I had the chance to interview him in his studio in Brussels on the occasion of my visit to Art Brussels, to discuss his work and the series that originated from an exhibition in Hong Kong that he had to orchestrate remotely during lockdown.

After working as a painter for fifteen years, you decided to study arts and computational technologies. What drove you to become interested in computer science and programming?

Well, actually, I was already interested in computing, because my dad gave me a computer when I was nine years old, and as a kid I managed to do some little things and got fascinated by it. I particularly remember the possibility of creating little worlds that would be autonomous. I studied computing, but back then it was business computing. And after that, I decided to become a painter and move to London… I think I was a painter for thirteen years. In the meantime, computing evolved a lot, so I always kept my eye on it, and after some time I got back into it. Computing was not new to me. And I had this intuition that I could do something with it, because I knew I could program. I could imagine things.

As a painter, I had a creative block. It just didn’t make sense to continue painting. I had also lost my spontaneity: everything I did in painting looked stiff and unemotional. I couldn’t do emotion. Strangely enough, I thought that I could put back emotions using computers. I was always into doing very spontaneous drawings, so as soon as I got back into programming I worked on drawing faces, from the beginning, and then there was the internet. Thanks to what I found online, I kept learning, and I came across the Algorists: Roman Verostko, Cohen… well, Cohen is not part of the Algorists, so Verostko, essentially. I saw they were using pen plotters, so I bought myself old pen plotters on eBay and started to do drawings like that. I worked on those on my own for two or three years, using scientific libraries and other resources. But I felt that I was stuck, and I knew that I needed to go further to achieve what I was looking for.

You have mentioned that you transfer your drawing style to the robots. Can you elaborate on this mediated process?

When I was doing my Masters studies, I was working on simulated drawings, and it was only during my doctoral studies (I started a PhD that I never finished) that I did proper research. It’s a risky thing in computing, but mainly I was learning about drawing, psychology, perception, motor control, and things like that. I really researched a lot, and all of that influenced the program. At that time I also understood that a drawing system needed to be embodied, particularly since I was interested in gestural drawing. So the way I did it was to simulate different processes that interact, with parts dedicated to low-level perception, then higher-level motor control, and strategy.

The style of the drawing has never been forced. The style is a consequence of the characteristics of the robot. If you just change a little parameter on it, or on the camera, or the speed of the app, that will be enough to give the resulting drawing a different style. So it’s really an interaction between the body, the character and the characteristics of the robot. My input is in there in that the technique they have is a technique I used when I was trying to draw. There is detachment in a certain way, but it’s not so detached, because I am in the system: I programmed everything myself.

So there is this weird thing with control, because in the beginning I have control, but then when the robots start, I don’t have any control. And that leads to an interesting form of spontaneity. For me it’s always fresh, but the problem is that, because it uses humans, not everybody is a performer. A lot of people do it for the portrait, and then during the process they notice that it is not just a machine making their portrait. Here I feel there is the usual tension between entertainment and art. That does not happen with the still life drawings, because the whole system is encapsulated in itself. It’s a different type of storytelling.

For about a year now, you have been creating a new type of artwork by applying the drawing program to video footage. What led you to use this technique, particularly since you just mentioned the embodied creation of the drawings?

It all came about because of NFTs. I needed something digital to sell, to mint. And it started like that. I did some experiments with video a few years back, so I already had some ideas, but it really came to be through NFTs. I wrote a program, a big interface over the one I use for the robots, that enables me to play with it and create these animations. It was by necessity. But in the end, I explore the same themes, only that now I know better what I’m exploring.

Let’s talk about the exhibition Human Study you had in Hong Kong, back in 2020. I find it interesting how it was developed under lockdown, and how the animations that you have now presented on Niio reflect that particular atmosphere.

Yes, it was a very interesting process. The exhibition was originally planned to take place during Art Basel Hong Kong, but obviously that didn’t happen because of COVID. They moved it to November, but they still didn’t get the authorization to open the theater. So it was decided to carry out the exhibition without an audience, using actors or anyone who was around, so sometimes it was the technical staff and not actors. To me it was particularly interesting because I helped select the actresses and the actors, so it became something like a piece of theater. I had created a generative system to edit the video feed from the cameras, so while I was doing everything from thousands of kilometers away, I became the director of a performance.

