Argentinian filmmaker Eduardo Williams’ new film The Human Surge 3, which premiered this summer in competition at the Locarno Film Festival, is his first feature since The Human Surge. But there’s no missing second part, nor is the film a sequel, at least in the sense that it neither relies on having seen the first film nor continues its plot. What the two films share is an international set of performers and shooting locations, though the characters in the new film move much more freely between settings. The film also shares a shooting style with Williams’ 2019 short film Parsi, which was recently available to stream on Le Cinema Club. Both films were shot with 360° cameras and then framed in virtual reality by Williams after he had cut the footage together. The result is one of the most energizing two hours of the year, featuring some of the most novel images in recent memory. We spoke to Williams during the New York Film Festival, where The Human Surge 3 had its U.S. premiere.
When you made the first film, The Human Surge, did you start out thinking about that as a project that might continue over multiple films? Or was there something afterward that made it feel like it was a project you wanted to continue?
I mean, when I did The Human Surge, I didn’t think about having another film with the same title. So, it wasn’t a project in that sense. But at the same time, I see all my films as quite continuous, you know, and as the same project; they are very connected. So for me, it’s not a big surprise. But then, when I was thinking about what film to do after, I had the idea of naming it The Human Surge 3 because I realized that many of the ideas were very continuous. I also thought it was interesting to go a bit against the notion of a single human surge; [if you have a first human surge and a third human surge,] there seem to be many possible human surges, let’s say.
What were the specific ideas carrying over from the first film that led you to want to use the name again?
I think a very simple and basic thing was the idea of making a film in several countries, on different continents. So this idea of connecting countries and people that I don’t see connecting so often, that was the basic thing. And then probably similar things that I was thinking about naturally, about my life and about many people’s lives, like moving, transporting, using technology, how that changes our way of seeing everything; our relation with nature, and with everything that’s nonhuman, and other things like friendship and work. So I think in this one, I wanted to say, “okay, we have the same worries about the working world,” but also I wanted to go a bit further into the fantasy — or at least try to. So that was what I saw as the continuation and evolution of the other one.
I come from a math background, and when we’re mapping three-dimensional space in terms of rotation [which is to say, in the context of film, panning], we can do it as a sphere or as a cylinder. The cylinder is nice in some ways because it’s closer to how we map two-dimensional space rotationally; we’re just adding in the up-and-down movement, but you have the rough edges in the up and down directions. The sphere solves that problem in that it’s smooth all over, but it can be harder to conceptualize [or, as with the three-dimensional space of this film, captured by the 360° camera and then represented digitally, to program]. There are moments in this film where you do include the area directly above and below the camera, which I’d imagine the camera is not really designed to capture, so I wonder whether, when you were doing the framing in VR, you felt more like you were looking around a sphere or a cylinder.
I think in my experience I felt more in a sphere than in a cylinder, though it’s true that when I take the sphere to the screen, maybe it turns a bit more into an open cylinder. And then a sphere, I don’t know. But it’s true that what the camera creates is a sphere. So we see the stitches and lines when we look in front, but when we look up and down we see like a flower or a star, you know, where the eight cameras get together. So yeah, it was less smooth. But as you see in the film, I really liked that. So I tried to look there on purpose [at times,] and also I was thinking about what types of images to put there.
We have a very important moment in the film where the camera spins. So there was also a circular energy that came to me very intuitively, like this desire of looking around that came because I could do it with this camera. And then thinking about it, I think it’s an important thing to have as the film advances; we start looking around more. And then in the spinning thing, we try to leave or break the image, let’s say, through [stitching]; I really liked this star or flower that was formed [through the stitching], although I think I could have tried to make the stitching there better. I don’t know if I could have made it totally invisible, but I didn’t even try, because I really liked seeing the seams in there.
And then just before this, we have a very short segment of someone sleeping, and I put the camera on top of the person so the face would be at the stitching point below, where the eight cameras come together. And I realized when I saw this on my iPad, which shows the direct streaming image, that we saw three faces. And when I stitched it afterwards, on the computer, this was lost. So I kept this preview image, which is the worst quality that you can get, because I really liked the three-faced living person that I would have lost if I had made a better stitching. And at another moment, near the beginning, we have one face that is right in the stitching of two cameras, which is not below the camera. And I knew that if you put the face very close to the camera, the camera couldn’t stitch very well. So we lost part of the face. And I really like this feeling of people blending into the digital image. So I used this, and in the end I did it again, but artificially: I generated the same deformation in the face of another actress. And in that moment I made the other actors react to it, because I wanted to have this feeling of, okay, at the beginning we think this is just a glitch in the camera. And then we realize the actors, or the people we see, can see it. You know, we thought they couldn’t, but now we see they can.
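[An illustrative aside, not anything from Williams’ actual pipeline: the sphere-versus-cylinder distinction from the question above can be made concrete by comparing two ways of parametrizing viewing directions.]

\[
\text{cylinder:}\quad (\theta, h) \mapsto (\cos\theta,\ \sin\theta,\ h), \qquad \theta \in [0, 2\pi),\ h \in [-1, 1]
\]
\[
\text{sphere:}\quad (\theta, \varphi) \mapsto (\cos\varphi\cos\theta,\ \cos\varphi\sin\theta,\ \sin\varphi), \qquad \theta \in [0, 2\pi),\ \varphi \in [-\tfrac{\pi}{2}, \tfrac{\pi}{2}]
\]

The cylindrical map simply stops at its top and bottom edges and never covers the directions straight up or straight down, while the spherical map covers every direction but degenerates at the poles: at \(\varphi = \pm\pi/2\), every value of \(\theta\) lands on the same point, which is where the eight lenses meet and the “star” of seams Williams describes appears.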
I have a couple of questions about the framing of the film and the post-production in VR. First, specifically in the scene you were talking about with the spinning, did you have to stand in the middle of the room and spin in a circle? Or is there just a button that you press and the image spins? And then more generally, how much time did having to do the framing, as well as the editing, during post-production add to the process?
The spinning I made on the computer first, while I edited the two hours [of the film]. But it’s super fast; I couldn’t have spun that fast myself. So I added movement: I had the spinning already happening in the VR, and I added movement to that, slower movement, but still. I wanted to try to connect this computer movement to a more organic one. It’s a bit subtle, but I think it added something.
And then how much time… I don’t know exactly how much time, because really, I was doing everything at the same time. So while I was editing the sound, I was still framing. Also, after each shoot, I edited without the VR first, because I couldn’t solve the problem of how to record my movement until the end. I mean, for Parsi [Williams’ previous film shot with a 360° camera] I could do it myself, but that was with a GoPro, so the image was easier for me [to work with] on my normal computer, and I just screen-captured the computer. But here I didn’t want to lose so much quality, so it was difficult to handle it on my own. So afterwards, we had a fund and we went to India to a post-production company called Media.Monks. And they made a system in Unreal which recorded my movement and gave me keyframes that I could apply to the good-quality image (because in the headset, you really cannot see these images in good quality). So I don’t know how much time it added; I would say maybe a month. I mean, I went for a week to India. And there they gave me the system in Unreal on a computer, and I could travel with that while I was doing the other post-production parts. It was very intense, working all day every day for almost a month: waking up, [working on the] computer until I fell asleep, and then again. It was also intense because I always try to do post-production a bit fast. I like to feel a bit of pressure about time.
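[For the curious, a minimal sketch of what framing a flat film out of 360° footage in post-production can look like. This is an assumption about the general technique, not the Media.Monks/Unreal system Williams describes; the function and parameter names are hypothetical, and it uses NumPy and OpenCV to reproject one equirectangular frame for a single recorded yaw/pitch keyframe.]

# A minimal sketch (an assumption about the general technique, not Williams' actual
# pipeline): reproject one equirectangular 360° frame into a flat, rectilinear view
# for a recorded yaw/pitch keyframe.
import cv2
import numpy as np

def reframe(equirect, yaw_deg, pitch_deg, fov_deg=75.0, out_w=1920, out_h=1080):
    """Render the flat image a viewer would see looking (yaw, pitch) into the sphere."""
    h, w = equirect.shape[:2]
    focal = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2.0)  # pinhole focal length in pixels

    # A ray direction for every output pixel, in the virtual camera's frame
    # (x right, y down, z forward).
    xs, ys = np.meshgrid(np.arange(out_w), np.arange(out_h))
    rays = np.stack([xs - out_w / 2.0, ys - out_h / 2.0, np.full(xs.shape, focal)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)

    # Rotate the rays by the recorded viewing direction: pitch about x, then yaw about y.
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    rot_pitch = np.array([[1.0, 0.0, 0.0],
                          [0.0, np.cos(pitch), -np.sin(pitch)],
                          [0.0, np.sin(pitch), np.cos(pitch)]])
    rot_yaw = np.array([[np.cos(yaw), 0.0, np.sin(yaw)],
                        [0.0, 1.0, 0.0],
                        [-np.sin(yaw), 0.0, np.cos(yaw)]])
    rays = rays @ (rot_yaw @ rot_pitch).T

    # Convert each ray to longitude/latitude and look up the matching
    # equirectangular pixel (seam wrap-around is ignored in this sketch).
    lon = np.arctan2(rays[..., 0], rays[..., 2])          # [-pi, pi], 0 = straight ahead
    lat = np.arcsin(np.clip(rays[..., 1], -1.0, 1.0))     # [-pi/2, pi/2], negative = up
    map_x = ((lon / np.pi + 1.0) * 0.5 * (w - 1)).astype(np.float32)
    map_y = ((lat / (np.pi / 2) + 1.0) * 0.5 * (h - 1)).astype(np.float32)
    return cv2.remap(equirect, map_x, map_y, cv2.INTER_LINEAR)

# e.g. view = reframe(cv2.imread("equirect_frame.png"), yaw_deg=40, pitch_deg=-10)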
Something that I noticed watching both The Human Surge 3 and Parsi is this: when I’m watching a movie with subtitles, whether it’s a movie in English where I just happen to have them on or a movie not in English where I need them to understand the dialogue, and there’s a lot of overlapping dialogue (like in your films), I’ll often see the subtitles pick out pieces of that dialogue so that it’s a reasonable amount to read. And I really like that the subtitles to your films embrace the chaos of the overlapping dialogue, such that all of the dialogue being spoken in the scene is on the screen for you to see, even if it’s more than most people could read. Do you take a really active role in doing the subtitling?
Yes, absolutely. For me, these moments are super important. As we know, translations can be very different. I mean, even if it’s the “correct” translation, you always have options. And sometimes for subtitles, you need to be more concrete [about your decisions]; maybe you need to be shorter if you can while maintaining the same meaning. So it’s important for me to choose in which way [the dialogue is translated], at least for the English subtitles; for the other languages that I don’t know, I cannot. It was a very special thing to think about. At the beginning, I translated everything. But even though I liked the overlap, there was too much. I mean, one of the reasons for doing that was that I thought if the spectators feel it’s too much to read, then maybe I would invite them to stop reading for a while. I did the same thing in Parsi, in the moments where we have the poem and the dialogues of the people we see, but you physically cannot read everything. So I was trying to say this here as well. Of course, some people still read all of it; it depends on your personality, how much you can not care about it or whether you feel you really need to read. But I felt I had to reduce it a little bit, because it was maybe too much. I mean, I tried for it to be too much, but not so much, I don’t know [*laughs*]. So if you understand one of the languages, you can get a bit more, but it doesn’t change radically. I still tried to keep, like, the general meaning of the conversations they were having. I just maybe took one or two phrases out to make it a bit less crazy.
Yeah, and I think it works really well at emulating what the experience would be like if you understood one or more of the languages being spoken, and while at a certain point you pick up some of the meaning, the actual sound of the combined voices becomes just as important.
Yeah, for me too.
I think with emerging technologies like VR and 360° cameras, we’ve seen a lot of technical demos, but there haven’t been a lot of people using them with the express goal of making cinema or art. Are there more ideas with these technologies you’d be interested in continuing to explore, or other new technologies you’re interested in employing?
I mean, as you say, the most important thing for me about using these technologies was how I used them and why. And there were a lot of reasons with this film, but one main reason relates very much to a basic question we have in cinema, which is how to frame — how we decide the frame of — a film. So in this case, I was moving the moment of the decision from the shooting to post-production, and also changing the state of mind [I was in]; you’re not in the same state of mind during shooting as in post-production, sitting alone in a room with time dedicated only to watching the images. So for me that’s the important thing: it’s not about technology so much as how you use tools to think in a different way.
But going forward, I’m not really sure. I have an idea — a very loose idea, I’m not exactly sure what I want to do next — but it doesn’t relate so much to a new technology as to a type of lens I’d like to use. In my mind, my ideal would be going back and forth, maybe using film and thinking about whether I’d use it in a different way now that I have all this experience working in virtual reality. In The Human Surge I used 16mm film as well as a small and a big video camera, and for me the idea was: how would I use film with the mind of someone who watches YouTube videos as well as conventional films? In my mind that’s the question: how do we use both old and new technologies in the present?
In my mind I won’t use VR and a 360° camera next, although maybe something will happen and I’ll change my mind. With Parsi I didn’t know I wanted to use VR; I just knew I needed a small, cheap camera, and I started googling which small, cheap cameras I could get. Then I saw the GoPro 360, and I wanted to use it because it was a good way of being able to give the camera to the actors without them having to think about framing. So the next time I’m working on a new project, I’ll probably also look on the Internet to see what I might be able to use.
Have you gotten any opportunity since you started working with the 360° cameras to do more traditional work and see how your approach to that has changed?
Not really. I made one installation, called A Very Long .GIF, for a museum, but that’s with a pill camera, which isn’t traditional at all, though it’s also with a tele lens, which is more traditional — I like that combination of looking inside our bodies and looking very far away within the same video. I’m not sure if it’ll really change or not, but it’s a curiosity I have.