Write the Software and Let the World Have It
Forty Years of Internet Performance
Miller Puckette is the author of both Max and Pure Data (“Pd”), visual programming languages for artists which, over the past three decades, have helped usher in a new generation of artist-technologists. Onyx Ashanti is a “cyborg musician,” maker, open-source advocate, and the creator of Beatjazz who uses Pure Data as one of the core technologies in their creative practice. The following conversation between Puckette and Ashanti covers a wide range of topics, including the early days of internet music performance, the changing landscape of multimedia technology in the arts, and how technologically mediated performance practices have taken on new relevance in the post-COVID era.
Early Internet Performance
Miller Puckette: The first experience I had with internet performance was in ’97, working with Rand Steiger and Vibeke Sorensen, trying to do a network—or over-the-internet—performance between Thessaloniki, Greece and San Diego, California, during an international computer music conference. The three of us and musicians George Lewis and Steven Schick flew to Greece and the connection didn’t work. Thessaloniki was a wonderful time but there was no internet performance. We went back about a year later and got one to work, but it was so much trouble at that time to do anything that had more than one site in it—it would have been much easier to climb into a plane and do a performance in person than it would have been to do it on a screen.
That’s been true until very recently. And, in fact, the only reason it isn’t true now is because you can’t hop on planes right now. All of a sudden a lot of other people and I have taken an interest in internet performance, or internet art in general, which we hadn’t for a while. I had my fill in ’98 and didn’t want to go back, and I’m discovering now it’s not as awful as it was then! I’m pushing against the boundaries of what I can do over the network all the time, but on the other hand I’m doing things now that I wouldn’t be able to do otherwise. I’m accepting the limitations and trying to learn how to work with them.
Onyx Ashanti: Even right now this format has kind of been laid over the top of these global networks. Everything about the experience of using a computer is still flat, everything uses these windows, but then we also have high-speed processes that allow for these windows to actually be functional. I have been seeing some jazz musicians online using JackTrip, the free program for sending and receiving high-quality audio over the internet—they’ve been using it because they need to jam now and they can’t. Before, that was just one nerdy thing too far for some jazz musicians, but I’ve been seeing a lot more interest in low-latency network protocols—ways of sending and receiving data over the internet with the smallest possible delay—now.
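The core trade-off behind low-latency tools like JackTrip can be sketched in a few lines: instead of buffering seconds of compressed audio the way a streaming service does, the sender ships tiny blocks of raw samples over an unreliable channel, accepting occasional loss in exchange for minimal delay. This is an illustrative sketch, not JackTrip's actual wire format; all names and sizes here are assumptions.

```python
import struct

# Small blocks keep latency low: one block of audio is sent as soon as
# it is captured, rather than waiting to fill a large buffer.
BLOCK = 64    # samples per packet; smaller blocks = lower latency
RATE = 48000  # sample rate in Hz

def pack_block(seq, samples):
    """Prefix a sequence number so the receiver can detect lost or reordered packets."""
    assert len(samples) == BLOCK
    return struct.pack("!I", seq) + struct.pack(f"!{BLOCK}h", *samples)

def unpack_block(packet):
    (seq,) = struct.unpack_from("!I", packet)
    samples = struct.unpack_from(f"!{BLOCK}h", packet, 4)
    return seq, list(samples)

# One 64-sample block at 48 kHz covers 64/48000 s ≈ 1.3 ms of audio,
# so even a jitter buffer several blocks deep stays under 10 ms of delay.
block_duration_ms = 1000 * BLOCK / RATE

if __name__ == "__main__":
    seq, samples = unpack_block(pack_block(7, [0] * BLOCK))
    print(seq, len(samples), round(block_duration_ms, 2))
```

In a real system these packets would travel over UDP, since TCP's retransmissions would reintroduce exactly the delay this design tries to avoid.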
Audiences in Online Performance Spaces
Miller: I don’t know how to connect with an audience effectively over a network. Of course you can always be a TV station and just send the signal out and whoever’s checking into it will check into it, and you’re not going to know who or where they are. But you don’t want the TV feel, you want the audience feel, whatever that is.
I’m hoping that while we are deprived of the normal ways of functioning, we can pick up a whole bunch of new skills, which is of course what we’re being forced to do. These new skills will be added on top of all the things we knew how to do beforehand, and I’m looking forward to the layering that’s going to happen when combined with real live audiences. I haven’t thought through how a performance might fully take advantage of that, but we’re going to have a lot of chops coming out of this pandemic that we ought to use for something.
Onyx: This is interesting. For instance, if there are four participants in a Zoom room, the only way you know that is if you look at the little logo at the bottom, which says four participants. I’ve been in one of these where you see a couple of hundred names, which is very disconcerting... Should there be a little bit of live audience noise for every single participant? Because in some real-life places there are mosh pits, in some places there are 3D projections. And maybe if it’s a good party, there’s both.
I’ve spent a lot more time thinking about how worthless an American passport is at this exact moment. It’s been the thing that’s allowed me to go and do shit everywhere for thirty years and now it’s completely worthless. I could say I’m trapped in the United States. Looking ahead, there may be no clubs open, there may be no dance parties. How do people get together musically and responsibly? The idea of an audience has shifted. This window is the whole world now.
Zoom as a Performance Medium
Miller: You gave me a horrible idea I want to throw out: What if you set up in an art gallery and just had a whole bunch of laptops, or something like that, and mount them on the walls or on desks. They would be your audience and your job would be to go in there and play saxophone or whatever it is, a real instrument. The laptops would all act like audience members, so they would give you the feedback. And they would actually be trying to act like an audience, they’d be trying to figure out when the music was over so they could start clapping. Wouldn’t that be horrible?
Onyx: That would be probably perfect, actually. We’d get all these old laptops that nobody ever uses anymore and maybe run OpenCV, the open-source computer vision and machine learning library, on them. I once saw a clip of a football game and they had cardboard cutouts.
Miller: On the seats?
Onyx: Yeah. Online, our audiences could probably be way more interesting than cardboard cutouts.
Improvising in Parallel
Miller: I wonder if there’s maybe an artistic use for chat boxes as well. Not among audience members, but among players. When you’re in a group improvising, if there are more than two people, suddenly something wonderful starts to happen but then the thing veers off in a different direction and you can’t explore it, right? It’s part of the game, so you learn how to drop things, even if they sounded like they were going to be wonderful. But if there was some kind of chat box-y way to put in sideband ideas parallel to the main group conversation, you could actually have a more layered musical conversation. You’d need a different technology from a chat box to do it, you’d need the musicians to be able to cut their own instrument out of the audio feed and stick in the chat box.
You might end up with interesting music coming out of that—there wouldn’t be any one music, because what any one person was doing in any one place would be interacting in real time with some of their peers, but would also be interacting with things that others of their peers had put down in the past, which they would want to go back and look at and play over.
Onyx: That would be like a participatory livestream, like this Zoom call: something happens and you can go back to the beginning and add your piece to it.
Miller: Meanwhile, you can see other people doing other things at the same time. And then of course you choose whether you want to listen in on what they’re doing or not. You have to be able to quickly listen in and consider, “Am I going to interact with that particular musical stream or am I just going to stick with this thing that happened a minute ago?”
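The "sideband" structure Puckette describes could be modeled as a set of named parallel channels that any player can post fragments to and any peer can listen in on later. The sketch below is a hypothetical toy, not an existing system; a real implementation would carry audio streams rather than strings, and every name in it is an assumption.

```python
from collections import defaultdict

class SidebandBoard:
    """Toy model of parallel musical conversations alongside a live jam."""

    def __init__(self):
        # channel name -> list of (timestamp, fragment) pairs
        self.channels = defaultdict(list)

    def post(self, channel, fragment, t):
        """A player drops an idea into a side channel without interrupting the main jam."""
        self.channels[channel].append((t, fragment))

    def listen_in(self, channel, since=0.0):
        """Peek at what another thread of the conversation has done since a given time."""
        return [frag for ts, frag in self.channels[channel] if ts >= since]

board = SidebandBoard()
board.post("modal-idea", "Dm riff", t=1.0)
board.post("modal-idea", "Dm riff inverted", t=2.0)
board.post("groove", "half-time feel", t=1.5)
print(board.listen_in("modal-idea", since=1.5))  # only the later fragment
```

The key design point matching the conversation: channels persist, so a player can interact in real time with some peers while also replaying and layering over what others "put down in the past."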
Onyx: That could be actionable now. I could see these parallel musical conversations being a shift that we don’t come back from. When stages happen again, there’ll be a lot of emphasis on recreating the things that will probably pop out in the space rather than us going back to normal. Maybe.
The History of Pd (Pure Data)
Onyx: You’ve been developing Pure Data for thirty years now, right?
Miller: Yeah. I started working on real-time music systems—software that immediately responds to musical events like notes played—in ’81. It’s pushing forty years, but it’s only been about thirty since I’ve had anything useful to show for it.
Onyx: The improvement of real-time music systems is exponential.
Miller: It’s like a mudslide, it gets bigger, bigger, and maybe a little bit faster, but mostly just more massive.
Onyx: What was it like to be working on real-time software at a time when such a thing was, I would imagine, an inside joke in a sense?
Miller: You had to go to centers of privilege to get anywhere close to it at the time. I was at MIT at the experimental music studio, and then I was at the Institut de Recherche et Coordination Acoustique/Musique (IRCAM) in Paris, and I was working on machines of which there were five in the world. Gradually I was just pushing, pushing, pushing, trying to squeeze the software down into a more workable package. It took until about 2000 before it became really democratic, where a reasonable number of people could buy a machine that could do high-quality audio.
I looked at that situation and said, “Okay, I know this machine costs $200,000 now, but if I can make that computer sing, then I’ll have a $100 machine that will sing in twenty years.”
Onyx: There was the shift toward proprietary software in the nineties. What was it like to go into the open-source space to maintain your own version of Pure Data?
Miller: It was really more about freedom than anything else, especially from commercial pressures. I noticed with commercial software that a whole lot of energy went into preventing people from obtaining it, so that the companies could squeeze money out of them before they got it. There was this artificial scarcity thing going on. As a researcher, I want as many people as possible to be able to get a thing as fast as possible.
The way to spread a thing quickly and efficiently, and also to the widest possible population, was simply to make it open-source. That way, you don’t have to do 90 percent of the work, which is wrapping the sucker up, and you can just do 10 percent of the work, which is the kernel. That appealed a hell of a lot to me—writing the software and letting the world have it, as opposed to trying to squeeze money out of anyone.
The Future of Pure Data and Technology in the Arts
Miller: I’ve watched you perform a couple of times and I get a very strong sense of embodiment. You’re the performer who is the most in your own body as you play your music, almost to the point that I’m watching a dancer as opposed to a musician. You’ve found a way to bring the music creation process onto your body in a way that feels genuine and yet is extremely intimate.
Onyx: Thank you. I owe all of it to the design of Pure Data. It actually inspires my own process, that maybe one day I can make things that are as functionally useful for as long of a time as this has been for me.
Miller: A thing I keep trying to do and can never get to is finding a bridge between this extremely reactive way of programming and ways of storing, looking at, and searching through data. Pure Data is very good for making a reactive kind of musical instrument, but it’s actually a terrible score development medium. And things that are good score development media tend to be rotten performance tools. There’s a good reason for that: score tools—the things that manipulate sequences, like digital audio workstations (DAWs)—are about making a document, and Pure Data is all about making an action in time.
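The "action in time" model Puckette contrasts with a document can be illustrated with a toy dataflow graph: objects react the instant a message arrives at an inlet and immediately push results downstream, the way a Pd patch does. This is a simplified sketch of the idea, not Pd's actual implementation, and the class names are invented for illustration.

```python
class PdObject:
    """A dataflow node: reacts to messages the moment they arrive."""

    def __init__(self):
        self.outlets = []

    def connect(self, other):
        self.outlets.append(other)

    def send(self, msg):
        for target in self.outlets:
            target.receive(msg)

    def receive(self, msg):
        self.send(msg)  # default behavior: pass messages through

class Add(PdObject):
    """Loosely like Pd's [+ n]: adds a constant to each incoming number."""

    def __init__(self, n):
        super().__init__()
        self.n = n

    def receive(self, msg):
        self.send(msg + self.n)

class Record(PdObject):
    """Loosely like Pd's [print]: keeps whatever arrives."""

    def __init__(self):
        super().__init__()
        self.log = []

    def receive(self, msg):
        self.log.append(msg)

# Build a patch: inlet -> [+ 5] -> [print]
inlet, add5, out = PdObject(), Add(5), Record()
inlet.connect(add5)
add5.connect(out)
inlet.send(60)  # sending a value triggers the whole chain at once
print(out.log)  # [65]
```

There is no stored score anywhere here: the "music" exists only as messages propagating through the graph at the moment they are sent, which is exactly why this style makes a good instrument and a poor document.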
Onyx: You know, the Raspberry Pi Zero, a tiny, $5 computer widely used in DIY electronics projects, has a single-core, one-gigahertz processor. There’s also a quad-core board the same size, made by another company—the Banana Pi M2 Zero—which is closer in power to a Pi 3, a still small but more capable computer. It’s way more powerful. That was three years ago. These things are going to get smaller and smaller. They’re already becoming a kind of dust. It’s like, what are you computing?
My practice is to take all of this effort that goes into training a machine-learning algorithm and machine-learning logic and find ways to imprint that information onto myself somehow. I feel like spatialized 3D sound is that thing we’ll remember in a way that’s even more profound than visual.
This is the opening to the portal I’m looking at. I’ve been calling it a symbiote. In fact, I just got rid of the EEG I had—the electroencephalograph device that can be used to measure brainwave activity. I realized, Why do I even have an EEG when I don’t understand the information that’s coming out of it? I understand it to a degree, but I didn’t create that EEG, so I don’t know what the noise reduction is, or the weird shit they’re doing to the information. I don’t even know if the information coming off of it is actually my brain activity.
I think about the future of all of these things. The thing that’s beautiful about Pure Data and GEM—the Graphics Environment for Multimedia, a library that adds graphics processing to Pure Data—together is that GEM has motion tracking, and it’s got binaural processing in the extras package. And then GEM itself is 3D capable. I had been looking at things like Godot, which is a game engine; Unity, of course; and others to build immersive spatial logics, but I realized I don’t know enough to ask functional questions in those logics. I wanted to get back into studying digital signal processing and its interactions... Actually understanding it.
I am a storyteller—a street performer—and nobody’s on the streets. So now there’s this new medium.
As far as futures go, that’s the one I see, because Pure Data to me implies not just participation, but programmatic intent. Even if you don’t get into the programmatic immediately. With Pure Data, all the help files are patches. For two years, I didn’t program anything. I just opened help files and unloaded them and changed things around, and then eventually I was like, “Okay, I really need to do some programming,” and found that I actually could.
Finding Community in Non-Commercialized Digital Spaces
Onyx: I’ve been regressing from a lot of preexisting community spaces. I think many people who would potentially be programmers have given away a lot of their autonomy to pre-designed systems and the idea of ease of use. It’s not that the other thing is harder, it’s just that it’s slightly more abstract. I’m very interested in contributing to the conversation in relation to Pure Data, especially around making a sonic interface for Pure Data that you can actually do dataflow programming in, but with no visual interface.
Miller: That’s going to be hard.
Onyx: I imagine all kinds of obtuse ways of doing it, but something will pop out. If the questions keep being asked consistently in that direction, an extremely simple way to do it is going to emerge. My goal is for it to happen and take less than 30 percent of the CPU of a Pi Zero.
Miller: All right. Game on.