Will Future Storytelling Include Live Theatre?

Will the future of storytelling include what we now consider theatre?

That’s what I was wondering when I took the Staten Island Ferry to attend this year’s Future of StoryTelling Festival in New York, an annual TED-like long weekend with hundreds of exhibits, performances, panels, and talks largely focusing on the intersection of art and cutting-edge technology.

FoST Fest 2017 was held at the Snug Harbor Cultural Center, eighty-three acres along the north shore of Staten Island built in the nineteenth century as a home for retired sailors. On this particular Sunday morning, its maritime origins seemed especially apt: It was pouring out.

The first show I attended (the only one available when I arrived) was entitled The Cloud Machine, created by two Swedish artists, one a writer and director, the other a composer and sound artist. In a small, darkened theatre, each of the audience members was given an individual sensor to hold, and the show consisted of our watching a cloud forming over the stage, shaped (we were told) by our collective “mental state.” I was surprised the cloud was not darker, given my mental state. I sneaked out after half an hour, having spent an earlier half hour looking at actual clouds from the ferry. An hour of cloud gazing is plenty.

Was this theatre? The artists didn’t claim it to be; they called it an “interactive installation.”

Eventually the skies cleared, and I found, amid all the talk of VR (Virtual Reality) and AR (Augmented Reality) from technologists and neuroscientists, that there was actual theatre from theatre artists at the festival (although the festival organizers seemed reluctant to label any of it “theatre”). Some of it was thrilling.

One exhibit provided details of last year’s experimental production of The Tempest by the Royal Shakespeare Company. Looking for a special way to mark the 400th anniversary of Shakespeare’s death, the RSC created a production of the Bard’s last play using motion-capture technology, the same technology employed by such movies as Avatar and the recent Planet of the Apes films. In the movies, an actor’s performance is recorded while he or she wears sensors, and animators then transform that recorded performance into aliens or apes; the aim is to make the creatures’ movement and expressions seem more lifelike and human. In the RSC production, theatregoers saw the live actors and the animation simultaneously: the company presented the character Ariel as both the performer Mark Quartley and a projection of his monstrous avatar flying above the stage. The technology allowed for a range of sophisticated projection design, often using as a screen a huge cylinder that the designers called “the cloud.” Theatregoers saw a banquet table laden with a sumptuous feast composed entirely of light rather than food.

All of this took a couple of years to prepare, in collaboration with Intel and a digital-oriented production company called the Imaginarium, and it employed stacks of new equipment, some of which was on display at the festival. Each performer wore, beneath their costume, a Lycra suit fitted with sixteen motion sensors; there were twenty-seven high-definition projectors.
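What distinguishes this from film motion capture is the loop involved: the film version hands the recorded data to animators afterwards, while the stage version has to turn the suit’s sensor readings into a projected avatar in real time, every frame. Here is a deliberately toy sketch of that live loop in Python; the sensor-polling and projection functions are hypothetical stand-ins, not the RSC’s or Intel’s actual systems.

```python
from dataclasses import dataclass

# Toy sketch of a live motion-capture loop. read_suit() and
# render_avatar() are hypothetical stand-ins, not the actual
# RSC/Intel/Imaginarium systems.

@dataclass
class Sensor:
    name: str   # e.g. "left_wrist"; the suit carried sixteen of these
    x: float
    y: float
    z: float

def read_suit() -> list[Sensor]:
    """Poll the suit's sensors for one frame (dummy data here)."""
    return [Sensor("left_wrist", 0.4, 1.2, 0.1),
            Sensor("right_wrist", -0.4, 1.2, 0.1)]

def render_avatar(sensors: list[Sensor]) -> None:
    """Drive the projected avatar's joints from the sensor frame."""
    for s in sensors:
        print(f"avatar joint {s.name} -> ({s.x}, {s.y}, {s.z})")

# Film mo-cap stops after recording; the theatre version runs this
# capture-to-projection step live, once per frame, all night:
render_avatar(read_suit())
```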

The Tempest. Photo courtesy of the production.

“We were able to make Ariel fly in ways he couldn’t before,” Sarah Ellis, RSC’s director of digital development, explained to me at the festival, “but this is just a continuation of what we’ve always done”—a twenty-first century way of presenting Prospero’s “rough magic.”

Will the RSC start employing these “ways” routinely?

“There is great potential in incorporating these technologies in the future,” she said. “We will see them backstage and on stage in varying levels and scales.” But exactly what levels and how great a scale will be determined “by the direction and ambition of the creatives”—not, in other words, by fiat, or by techies.

A French company called DV offered an actual performance of its version of Alice in Wonderland to one theatregoer at a time inside a white tent on the grounds. The individual theatregoer entered the darkened tent, put on VR goggles and a headset, and saw the performer (just one actor) first as the rabbit, then as Humpty Dumpty, then as the caterpillar, against five different vertiginous backgrounds. “There’s a loose script,” the actor Robin Berry explained to me afterwards, but what happens from moment to moment depends on how the audience member reacts to the characters, to the objects that (virtually) float before the theatregoer’s eyes, and to her mission. It soon becomes apparent that the theatregoer is Alice, and that her mission is to get hold of the crown before the clock strikes. “It’s very improvisational,” Berry said.

Alice, the Virtual Reality Play. Photo courtesy of the production.

Alice, The Virtual Reality Play, as it’s officially entitled, is an example of what the technologists like to call MR, or Mixed Reality. (This is also what they called RSC’s The Tempest.) Alice was previously performed at the Cannes and Venice film festivals. The plan, according to Antoine Cardon, DV’s Innovation Director, is to expand the show to accommodate two dozen theatregoers at once (they’re reluctant to provide details; would all of the audience be Alice, or just one?) and to bring the production to New York next spring. While in New York, they have been scouting locations in Brooklyn, perhaps an empty warehouse that they would convert into a theatre.

(I should mention another Alice in Wonderland show at the festival, Holojam in Wonderland, which I didn’t get to see. The festival called it Mixed Reality in the program guide; on their website, the creators at NYU’s Future Reality Lab call it “the first multi-audience, multi-performer live action theatrical performance completely in virtual reality.”)

There was a “Keep Out” sign on the black front door of a nineteenth century cottage on the Snug Harbor grounds, where festival attendees waited on the porch for a twenty-first century experience. The door opened, and a police officer in a riot helmet ushered just one person into a room with what looked like a surveillance camera mounted above a screen. On the screen a riot was going on: a building billowing smoke, riot police dashing about.

The riot turned out to be actual footage shot in Washington, DC, during Donald Trump’s presidential inauguration, one of the additions in Riot 2.0, director Karen Palmer’s update of Riot, a show she had presented at FoST Fest 2016. “I was inspired by the Ferguson riots,” she told me then. The scene switched from the inauguration to a closeup shot of a riot police officer advancing on the theatregoer standing before the screen.

“Get out of my face,” he screamed.

What happened next depended on whether the theatregoer remained calm, or expressed anger, fear, or agitation. Riot uses facial recognition technology to assess a theatregoer’s mood. If he or she is calm, the video advances through three levels. If the camera picks up on any other emotion, the officer pummels him or her to the ground (virtually speaking), and the narrative ends.
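Strip away the staging and the mechanism Palmer describes is a simple gated loop: an emotion classifier decides, level by level, whether the narrative continues. A toy sketch, with a random stand-in for whatever facial-analysis software the piece actually uses:

```python
import random

# Toy sketch of Riot's branching. read_emotion() is a random
# stand-in for the production's actual facial-recognition software.

def read_emotion() -> str:
    """Classify the theatregoer's face for the current moment."""
    return random.choice(["calm", "anger", "fear", "agitation"])

def run_riot() -> None:
    for level in (1, 2, 3):            # the three levels of the narrative
        print(f"Level {level} plays...")
        if read_emotion() != "calm":   # any other emotion ends it
            print("The officer pummels you to the ground (virtually speaking).")
            return
    print("You stayed calm; the narrative runs its full course.")

run_riot()
```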

Riot 2.0. Photo courtesy of the production.

Developed in partnership with the National Theatre’s Immersive Storytelling Studio, Riot has been exhibited at various museums and festivals. FoST Fest labeled it an interactive installation, but Palmer says the National Theatre is considering presenting Riot 3.0, the final iteration she plans to complete next year, in its complex on the South Bank in London. At that point, maybe everybody will stop calling Riot an installation and start calling it…theatre.

Festival organizers seemed to acknowledge (albeit perhaps grudgingly) that not all forward-looking theatre is rooted in technological advances. The festival presented performances of what it labeled “an immersive theatre project”: Siobhan O’Loughlin’s acclaimed solo play, Broken Bone Bathtub, which was inspired by her personal experience after she broke her arm bicycling, and which takes place in the very low-tech confines of a bathtub. The (shower) curtain rises and O’Loughlin, her arm in a cast, tells her story of brokenness and healing, and invites the audience members to help her bathe and share their own stories of brokenness and healing.

In a panel entitled “All the World Is a Stage,” Hector Harkness, the associate director of Punchdrunk Theatre Company, creators of Sleep No More, spoke about the appeal of his show, a version of Macbeth that has been labeled immersive, though not by Punchdrunk. Set up on several floors of an old hotel, the production lets theatregoers follow scenes and characters at will, choices that, Harkness said, “give the audience a sense of empowerment, the illusion that you can change the course of what’s happening.”

Another panelist, Justin Bolognino, who has the very techie title of chief experience officer at Meta, which “creates multi-sensory live experiences,” even seemed to say that technology is not the most important ingredient in a successful cutting-edge storytelling project: “Emotion needs to be the goal. Technology is just the means.”

During his talk, Bolognino described a Meta project entitled Right Passage, which debuted at the Panorama Music Festival on New York’s Randall’s Island last summer. Whether or not the “magico-religious dynamistic” show (with its “assault of cold beams, strobes, and intimidating moving walls”) could be considered theatre, one thing Bolognino said about it made me perk up: Right Passage had a purpose of which its audience was not aware. It was essentially a way for them to wait on line for another show, a Virtual Reality film called The Ark.

Waiting in line is a basic element of most conventional theatregoing, for better or worse (mostly for worse), and Meta’s experiment was one of several examples at FoST Fest that struck me as having potential applications to live theatre as we currently know it. Subpac created Physical Hearing, an “interactive audio” experience that listeners can feel, making music available to the hearing impaired. With New Dimensions in Testimony, you could ask Holocaust survivor Pinchas Gutter (or, in reality, just his image) any of literally thousands of questions, and he would answer as if he were a live person actually present. For this project of the USC Shoah Foundation, created in coordination with USC’s Institute for Creative Technologies, staffers interviewed the actual Mr. Gutter at length, recorded his answers, and then built an algorithm that can recognize a spoken question and quickly sort through those answers to find the most appropriate one, a matching problem of the kind sketched below. Poets in Unexpected Places, which organizes guerilla poetry readings in venues like a subway car, seemed just a millimeter ahead of the curve in site-specific theatre. And several of the exhibits within the tent called “FoST For Good,” presenting projects that use storytelling to “evoke empathy, promote awareness, and spur action,” seemed to suggest what could be a you-are-there next generation of theatrical projection design.
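The Shoah Foundation hasn’t spelled out its matching algorithm here, but the shape of the problem is easy to sketch: score each prerecorded answer against the transcribed question, then play back the best match. A toy version in Python, using plain word overlap as the score; the clip names and indexed questions are invented, and the real system is surely far more sophisticated.

```python
# Toy sketch of question-to-answer matching, in the spirit of New
# Dimensions in Testimony. Clip names and questions are invented;
# the scoring here is plain word overlap, nothing more.

RECORDED_ANSWERS = {
    "Where were you born?": "clip_birthplace",
    "How did you survive the war?": "clip_survival",
    "What message do you have for young people?": "clip_message",
    # ...thousands more indexed question/answer recordings
}

def word_overlap(a: str, b: str) -> int:
    """Count the words two questions share (a crude similarity score)."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def best_clip(spoken_question: str) -> str:
    """Pick the recorded answer whose indexed question matches best."""
    best = max(RECORDED_ANSWERS, key=lambda q: word_overlap(q, spoken_question))
    return RECORDED_ANSWERS[best]

print(best_clip("Can you tell me where you were born?"))  # -> clip_birthplace
```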

After the festival, I contacted some of the storytelling futurists I had heard, including Bolognino, who goes by JB, and asked them how they envisioned the future of live theatre specifically. Does it have a future? What will it look like?

JB wrote back right away. “I believe the future is less about what ‘live theatre’ is or isn’t, and more about the further blurring of lines of categorization.” People will be less clear about the difference between “theatre” and “live performance” and “immersive” and “public art” and “interactive,” he said, “especially once ‘reality’ based technologies like AR/MR/VR invade the live sphere with faster and smaller real-time processing.”

Feeling slightly lost, I recalled a less lingo-laced comment from another “All the World Is a Stage” panelist, Emilie Baltz, who describes herself as “a food technologist, experience designer, and multimedia artist.” She offered what she saw as the thread running through all the future-oriented storytellers’ projects at the festival. Their aim, she said, is “getting people to engage their bodies, not just their minds.” The cloud almost lifted.
