Jo Ho is half of Victor Frankenstein. Her machine is the other half. More specifically, the tech she uses—machine learning models and the software that rapidly crunches through and reinterprets her input data—to make her grotesque (her word) art. The art itself could, in a reductive analogy, be called the monster. But Ho cautions against fearing it, or its creator.
The advance of technology sometimes seems like an onslaught. The cogs turning behind concepts like AI, VR, and NFTs alike are not well understood by the general public, but Ho believes there’s something sublime in that mystery. And don’t worry; she doesn’t fully understand it, either. Her art is mind-blowing and head-scratching in turn, whether she’s imploding a pixelated landscape, building an alien world, or deep-frying her images into the realm of the uncanny valley.
But where Frankenstein was an anatomist, Ho is an architect by trade. Having gone to school to learn how to design buildings, she worked in the industry for eight years before tiring of the bounds of reality. It was like David Bowie said, “If you feel safe in the area you’re working in, you’re not working in the right area. Always go a little further into the water than you feel you’re capable of being in.” So, like Bowie, she went to Berlin. There, she started making art, and she hasn’t stopped since.
It wasn’t until the circuit breaker period, however, that she learned to collaborate with the machines she now works with. Sometimes that means messing around with coding apps, or projection mapping her ideas into reality. Sometimes she tinkers around with node-based coding, which obliterates the need to write lines of code in favour of a visual approach, where she connects boxes on her screen to create a flowchart that tells a machine what to do. That’s what she’s been doing for the past year, punctuated intermittently by bouts of video gaming—Dota 2 or Cyberpunk 2077, right now—for a much-needed break, or inspiration as to how to imagine and reimagine the future.
Last year, Ho was able to exhibit her work in two or three virtual shows. She’s looking ahead to the future, though. A new form of intangible art she’s been working with recently involves filling a space with thick swathes of fog, which she then penetrates with sweeping beams of light and lasers. The morning of her interview with Vogue Singapore, she woke up to an email that her project proposal—creating this kind of light-through-the-fog art—had been shortlisted for an international, experimental exhibition called Beyond Quantum Music. If accepted, her art will travel to Linz, Belgrade, and Hanover. So Ho was in a very good mood as she sat down with us to discuss chaos, digital spaces, and virtual impossibilities in her art.
Can you explain what you created for Vogue Singapore?
For a lot of my machine learning projects, like this one, I use this software called Runway ML. It’s like AI for people who can’t code. It’s incredibly accessible for people who come from a visual arts background. I’m not a coder; I wouldn’t be able to do this on my own.
So my data set is between 500 and a thousand versions of an image of my friend Khad, which I put through a bunch of different filters on Instagram. Khad’s face is like the “control” variable, which the machine then plays around with. [Scrolls] Some of these are hilarious.
So then I put it through a machine learning model on Runway ML. And what happens through this process is it learns the patterns of the images, through the framework of this network called StyleGAN [a generative adversarial network that learns to synthesise new images and vary their features, often used on human faces].
And from there, the model reproduces what it’s learned about the image. It looks at each image and compares what’s the same and what’s different between each. And then it plays around with those elements, to create an infinite number of new images—each one with a new “filter.” Some of it is so creative. [Laughs] Like, it’s really creepy.
Some of the photos are pretty macabre.
It’s creepy because we’re very sensitive to faces, as humans. And if the face changes, even a little bit, we pick up on it—something’s wrong. It’s this uncanny effect.
But it’s fascinating to see what the machine picks up on. Like, this one: a lot of Instagram filters have [animal] ears, right? So it picked that up and produced this weird little ear thing.
“I quite like the aspect of chaos in art, though. I have to; I can’t control what the machine is putting out. It teaches itself”
It’s interesting how you talk about the machine as if it were a collaborator.
Oh, yeah. When it’s a project like this, I call what I do “man-machine collaborations.” Because I’m always questioning, with the rise of artificial intelligence, what it means to create. Do you know what I mean? Like, when does the machine-as-tool become the machine-as-creator? There are questions of creativity and agency that I’m really interested in.
Speaking of which, I wanted to ask how much control you feel like you have over a project like this, where the AI takes the data you give it and learns, and then basically runs wild.
Some of the filters are incredibly chaotic. I quite like the aspect of chaos in art, though. I have to; I can’t control what the machine is putting out. It teaches itself. I can only control the raw data that goes in. It’s not like architecture. I left architecture because I was ready to practise space-making in a different way. A more digital way: projection mapping, AR, even VR.
How did you learn to use those kinds of tools in your art?
I moved to Berlin in 2018. I did this amazing two-year programme, an MA in Media Spaces at the University of Europe for Applied Sciences. It’s not very big, and the programme is quite young. 10 or 12 people in my class. But I think within the span of a year and a half, I learned 10 completely new programmes or something crazy like that. It was really eye-opening. I was so inspired, and it unlocked so many possibilities for me to play with our perception of space.
You were living in Philadelphia before that. To go from Singapore to Philly to Berlin—those are big changes.
I know. But I’m very mobile, you know. I like to move around and I like to learn new things, to be in new environments. Like I said, a little bit of chaos is good. When you remove yourself from what you’re comfortable with, and then you place yourself somewhere else, somewhere totally foreign.
That’s when you’re in the right place to do some really exciting work, right?
Since I came back to Singapore last year, after 20 years of not living here, I’ve been exploring some cool stuff.
Like how I want art to make you feel. People always ask me, “What’s your message?” I think an interviewer asked me that recently. And I was like, “Honestly? I just want you there, experiencing it.” It’s more of an immersive, abstract experience than an explicit message. I want my work to evoke a feeling or an atmosphere. Or be a space where people can just sit and watch—meditate, contemplate, whatever. It’s nice when people are entranced by the visuals. I like to make things that are really, as I always say, “visually tasty.”
Like the work I did at the National Gallery in January. The massive installation projected up onto the facade of the building, which I called (RE)ROOTING. I did the visuals, and my friend W. Y. Huang did the sound.
And I was there contemplating it along with everybody else. I was there all the time because I couldn’t get used to the scale of it. It was so large. And it was up for ten days. It was about memory—the memory you have of a place, recalling it, and then reforming it when you visit that place again. It had a lot to do with my returning to Singapore.
“It’s more of an immersive, abstract experience than an explicit message. I want my work to evoke a feeling or an atmosphere”
So you flew back to Singapore at the onset of the pandemic.
So quickly that I left everything in Berlin. During circuit breaker, I had nothing. I didn’t have my PC. I didn’t have a projector. I didn’t have anything to play around with. Because a lot of my work was quite physical, right? Large-scale installations. And the stuff I was doing digitally was very heavy, computationally speaking. And I didn’t have my PC, just my little Mac laptop that couldn’t even render a five-minute video in After Effects.
And it was at that point where I started learning how to do machine learning. In school, it had been such a hot topic. Everyone was like: “Oh, machine learning this, machine learning that.” And at the time, I was like, “What’s the point? Why are you guys messing around with this?” No one could answer me. I wasn’t convinced by making art with machines. But I got curious, I guess. So I tried it out for the first time here and I was kind of blown away by the results. I’m quite a composed person, so a little chaos really balances me out.
And unreality, to balance out your reality?
Oh, for sure. I like being presented with a wall, or a facade, or a page, and then really exploring the boundaries of the space. Breaking it apart, extending the space beyond itself, making the surface permeable. Transforming it visually.
All stuff that’s not physically possible in architecture.
No, it’s quite magical. I like the magic of it. I still love architecture, but what I’m doing here, it’s literally creating new, multilayered realities.
How was it, speaking of which, to go from being an architect to an artist?
My transition from designer to artist is still happening. I have to remind myself to actively dismantle the “design mindset” because I’m always designing. And when I design, I’m thinking about designing for other people. In architecture or design, it’s always: what does your client want? Will it serve a function?
So I’m trying to shift my attention to things that I want to do. The questions that I want to answer, in terms of how I view the world. This is quite an exciting time for me. I’m sort of flailing about, but—honestly—I’m excited about it. I like the mystery. I think it’s healthy not to know everything.
That must come in handy when you’re troubleshooting tech issues with the machine.
I’m very good at troubleshooting. But when my tech fails, that’s when I really, really feel the tangibility of the computer system that I’m using. Like, for example, if my computer crashes a lot, then what I’m dealing with—which seemed intangible because it’s, what, binary code and pixels on the screen and essentially light—becomes tangible. I feel the weight of my PC. And I’m, like, “Wow.” It seems like it’s magic, but it’s not.
Tech doesn’t have all of the answers.
And that’s also why I think it’s important for me to not just show what the machine has made. It’s important for me to intervene or collaborate with the machine. Like, this question of AI and tech taking over, that it’s evil, that it’s going to take your job and your wife and your kids. My position is that AI is a child, learning. So who’s teaching it? When I train my models, I know what my intentions are. So it’s not inherently a “bad” process. It just depends on the human behind the tech.
So in terms of intentions, what are you working through with this art you’ve created for us?
It goes back to this idea of what makes up the context of an image, and what that context makes you feel, and why it makes you feel that way. You look at an image and it brings up a certain emotion, and you’re not sure where that feeling came from. Because you don’t understand your own mind, not fully. Just like how we—or I, at least—don’t fully understand the machine.
I’m still working through how I feel about machine learning. At the heart of it, I’m more concerned about the human experience of forming memories and forming patterns. [Pause] But I explore that through the use of a machine.
And does that make the art less human?
I think it emphasises the humanness.
I’m not quite sure yet. Can I say that? [Laughs] It’s just a feeling. I mean, yeah, these are machine-learned images, but I’m the one collaging them together at the end of the process. That, to me, brings in more of a human touch. I’m not really sure what that means. That’s why I’m still working with machine learning, though. Because I haven’t found the answer to what I’m looking for. Not yet.