Illustration by Christian Petersen

Feature

Swimming in Sound

Ambisonic technology offers an entirely new way to hear, and it’s coming soon from engineer-artists near you.

Listen: You’re not hearing as well as you could be.

Your ears aren’t the problem, and neither is your brain; they’re part of a system astoundingly fine-tuned to sense molecular changes in air pressure and translate them into usable information. It’s not a matter of media, either, analog versus digital. No, it’s the devices we use to record and reproduce sound that are lacking—the audio components that play the music you listen to, the movies you watch, the video games you plug into.

They could all perform in a way that feels bigger, fuller, realer. They could envelop you in sound in three dimensions, all but erasing your sense of the room around you, and transport you into a specific, holistic environment that may or may not exist in reality, like a choral recital in a glass-walled underwater cathedral or the front row of Beyoncé’s spectacle at Coachella. And in the not-so-distant future, they will.

Audio-capturing and -playback technology is evolving to match the nuance of human perception, and some of the field’s leading research is happening right now at the University of Washington. For the last 15 or so years, a small cadre of psychoacoustic explorers from around the world—composers who are engineers who are metaphysicists who are musicians—has been investigating a 40-year-old technology known as ambisonics. The word is a portmanteau of ambient sonics, sound in a space. Alternately referred to as 3D, spatial or holographic sound, the form is to hearing what virtual reality is to seeing. But with a twist. Because if seeing is believing, then hearing is…something else.

You’ve probably experienced the closest approximation to ambisonics, Dolby 5.1 Surround Sound, at a well-appointed movie theater or fancy home theater. Without getting too wonky, this simpler type of playback splits the audio signal of a film or album into six separate channels—two in front, two in back, one center and one low-end boost. Sounds zoom around the room from one speaker to another but remain in a single, flat plane. They come at you individually, like lightbulbs flashing in a marquee or pixels aligning to form an image.

In contrast, ambisonics shows the whole picture all at once, creating a seamless sphere of sound. It does so through specialized encoding and decoding software that allows the recording artist to fine-tune sounds and alter their placement in imaginary space, recreating an actual location or evoking a fictional one. Ambisonics evangelists enthuse about its scalability, which they refer to in “orders,” each achieving a successively higher level of resolution via a greater accumulation of hardware. First-order ambisonics sounds pretty good on a regular stereo or a pair of good headphones. Higher-order ambisonics requires a dozen or more speakers and produces the audio version of HDTV, a resolution so sharp it’s uncanny. In that realm, all those speakers disappear from your perception, vanishing into a cohesive, contiguous wash of sound. The effect is immersive—a little disorienting, a little psychedelic, totally unique.
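
For readers curious what that encoding and decoding step actually involves, here is a minimal sketch in Python. It is only an illustration, not the software the UW team uses: a mono source is panned into the four first-order channels, then naively decoded to a ring of four speakers. Channel-weighting conventions differ between ambisonic formats, so treat the scale factors as one common choice rather than the definitive ones.

```python
import numpy as np

def encode_first_order(mono, azimuth, elevation=0.0):
    """Pan a mono signal into first-order B-format channels (W, X, Y, Z).

    Angles are in radians; azimuth is measured counterclockwise from
    straight ahead. W carries the traditional -3 dB weighting.
    """
    w = mono / np.sqrt(2.0)
    x = mono * np.cos(azimuth) * np.cos(elevation)
    y = mono * np.sin(azimuth) * np.cos(elevation)
    z = mono * np.sin(elevation)
    return np.stack([w, x, y, z])

def decode_to_ring(bformat, speaker_azimuths):
    """Naive projection decode of W/X/Y onto a horizontal ring of speakers."""
    w, x, y, _ = bformat
    feeds = [w * np.sqrt(2.0) + x * np.cos(az) + y * np.sin(az)
             for az in speaker_azimuths]
    return np.stack(feeds) / len(speaker_azimuths)

# Pan a one-second 440 Hz tone 45 degrees to the left of front,
# then decode it for a square of four speakers.
sr = 48_000
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
bformat = encode_first_order(tone, azimuth=np.radians(45))
speaker_feeds = decode_to_ring(bformat, np.radians([45, 135, -135, -45]))
```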

When ambisonics was first introduced by Oxford-affiliated researchers in the 1970s, it was a proprietary technology that required highly specialized recording and playback equipment, beyond the reach of most commercial and consumer applications; the technology remained obscure and largely unused for years. Today, it’s showing up in niche settings—video games, virtual reality, even a specialized theater in San Francisco built for ambisonic performances. Researchers at UW and Stanford, along with a few private companies in Seattle and the Bay Area, are working with open-source code that demands far less computing power than early ambisonic software. Still, it remains a rarefied technology. The goal, according to the artist-technicians at UW, is not only to make ambisonic sound more accessible to the average person and to deepen their understanding of the power of audio, but to create a type of immersive listening experience that, until now, hasn’t existed.

What we’ve known is Dorothy in Kansas. What we’re heading for is the Land of Oz.

“The problem with visual information is that we’re very good with it,” says Juan Pampin, professor of music composition at UW and director of the University’s Center for Digital Arts and Experimental Media (DXARTS). “We very quickly realize, this is not real. With virtual reality we’ve had that problem for many years.

“With our ears we’re a lot less picky. Our perception works in a different way,” he continues. “As soon as we hear something that sounds like something, and we’re not using our eyes to confirm what it is, we tend to believe it. Which is remarkable because it means that you believe what you want to believe, and the way our auditory system helps with that is amazing.”

This leap of perceptive faith, this brain magic, happens a thousand times a day. You’re sitting inside a café and a bus rolls by outside, or you’re walking and hear footsteps behind you, or a coworker coughs on the other side of your cubicle: You can’t see what’s going on but your ears tell you.

“So that opens a new window into how you can compose, how you could artistically explore trickery of the mind. You can create these aural-cinema narratives, long stories, that are done just through sound projection.”

I’m speaking with Pampin in his office in Raitt Hall, the hub of UW’s DXARTS, an interdisciplinary, graduate-level think tank and degree program that spans art, design and tech. Pampin’s works as a composer have been performed around the world; his Percussion Cycle, a four-movement excursion driven by kettle drums, conga, mbira and a host of other percussion instruments, plus electronic effects, is perhaps his signature piece. In 2006, it was played by the venerable French sextet Les Percussions de Strasbourg at Palais des Fêtes, in Strasbourg, France, and recorded live by Pampin and his colleague Joe Anderson using their Ambisonic Toolkit, the open-source storehouse of software they’ve been developing for years. Even in low-order ambisonics, the CD sounds rad on my car stereo.

Born in Argentina, Pampin has spent his 17-year tenure at UW—and at Stanford before that, and the Conservatoire National Supérieur de Musique de Lyon in France before that—expanding the ways humans hear. His efforts come in the form of his compositions, ambient and abstract but intense and thrilling, but also through his forays into technology, specifically ambisonics.

Anderson has been working with ambisonics even longer than Pampin, his academic pursuits augmented by stints in the private sector. As lead author of the Toolkit, his goal is to make the technology as granular as possible. That way anyone can customize their individual version of the software to the tiniest technical specification, an ethos that echoes the meticulous detail of ambisonics itself.

“The idea is, if you’re the artist, you need to mix your own paints and make your paintbrushes,” Anderson says. “And if you can do that, then you can make your own thing rather than just push the button.”

To illustrate the point, Anderson and Pampin lead me to the basement of Raitt and into a smallish, soundproofed studio. Inside are 24 speakers laid out roughly in a sphere: several in a ring at my feet, angled toward my head; another ring on pillars at ear level; more suspended overhead, aimed down. They stand me in the sweet spot at the center of the speaker sphere, where each speaker has been calibrated to project its sound, timed to the microsecond so it arrives in sync with all the others. “It’s kind of a paradox—you want more speakers to make them disappear,” Pampin says.
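
That microsecond timing is, at its core, distance compensation: the speaker farthest from the sweet spot sets the reference, and each closer speaker is held back just long enough for every wavefront to reach the center together. Here is a minimal sketch of that one idea in Python, assuming you have measured the distance from each driver to the listening position (the studio’s actual calibration surely involves more than delay alone):

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # meters per second, at roughly room temperature

def align_delays(distances_m, sample_rate=48_000):
    """Delay each speaker so every wavefront reaches the center at once.

    The farthest speaker gets zero extra delay; closer speakers are held
    back by the difference in travel time, expressed here in samples.
    """
    distances = np.asarray(distances_m, dtype=float)
    travel = distances / SPEED_OF_SOUND   # seconds from driver to sweet spot
    extra = travel.max() - travel         # how long to hold each speaker back
    return np.round(extra * sample_rate).astype(int)

# Hypothetical measurements (meters) for four of the studio's 24 speakers.
print(align_delays([1.95, 2.00, 2.10, 2.04]))  # -> [21 14  0  8] samples
```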

Anderson describes the snippet of his own piece he’s about to play, which he says shows the “holographic” characteristics of ambisonics. It is, he says, the sound of a tractor played like a drum that sounds like a bell. He tells me to close my eyes, “because your visual system wants to associate sound with things that you can see,” and what I’m actually looking at is a rather clinical chamber full of plastic doodads and exposed wiring.

Then he starts the piece: Twenty or so seconds of sonorous clangs, tactile like the soothing hum of a prayer bowl, percussive like a tennis ball ricocheting across a gamelan xylophone. And I’m in the middle of it, or it’s all around me. On repeat.

After several loops, Anderson stops the playback. I’ve just experienced the audio equivalent of an abstract painting—de Kooning, perhaps. But then again, a painting is a static thing, hanging mute on a wall. The full scope of this piece stretches through time, unfolding sequentially like music, enveloping me in space with sound, moving around me. It’s transportive and weird, pulling a dumbfounded smile across my face. It is an altogether different type of artistic interaction than I’m used to.

“The most interesting thing for us as composers is the ability of recreating spaces and making them feel real or even hyperreal, more real than real, and then turning into surreal,” Pampin says. “Creating that illusion, and the ability also to turn it into something else and manipulate it, offers a lot as an artist.”

Google maintains an open-source ambisonics software archive. Facebook 360, the social-media giant’s interactive video-streaming app, supports ambisonic audio. Amazon and GoPro reportedly run their own research divisions, as does Microsoft, which has used the technology in a handful of video games. Bungie, the Bellevue-based maker of Halo and Destiny, is keenly interested in incorporating it into its games. Hardcore tinkerers and audiophile hobbyists dabble in it, and a handful of avant-garde musicians record with it. The software is freely available, but the hardware—high-end microphones and speakers—is prohibitively expensive. In other words, the technology has yet to reach mainstream users.

But it’s getting close. A company called Envelop launched in San Francisco in 2015, offering its own open-source ambisonic software to artists and hosting events—including audio workshops, listening parties for Pink Floyd’s Dark Side of the Moon or ambient music by Ryuichi Sakamoto, “Envelop Yoga” sessions—in its 28.4-channel theater in the city’s Dogpatch neighborhood. Envelop also has a portable version of its sound system.

Pampin is positive that we’re not too far from low-order ambisonics for the home. His students are less confident.

“I just can’t see it ever getting to home use,” says Daniel Peterson, a DXARTS grad student in composition. The modular nature of the technology is both its beauty and its barrier. Customization requires serious expertise in fine-tuning the setup, and if the software crashes or the listening space gets rearranged, the whole system has to be recalibrated. Most home systems couldn’t run higher-order ambisonics, leaving compositions recorded in those higher orders to fall short. “There’s still a lot of hoops to jump through, some things that need to happen, and we really don’t know when they will,” he says.

Peterson and another DXARTS student, Adam Hogan, are collaborating on an ongoing ambisonic project this year, centered on the audio and visual contrast between natural and industrial forests, as well as the ineffable sensation both create. They’ll present a portion of the piece on Feb. 21 at Meany Hall as part of DXARTS’ Music of Today series, ideally with an oversized screen for Hogan’s visuals (he’s a cinematographer) and a decent ambisonic sound setup for Peterson’s score.

If Pampin and Anderson are ambisonics’ first generation, these guys are the next. Despite Peterson’s skepticism, he’s sold on the artistic promise of the technology.

“I’ve always thought that it’s the future of music,” he says. He describes music as a matter of scales—spectrums of volume, timbre, pitch that have been explored for millennia—“and one of the scales that’s only now able to be explored, because of this technology, is space. Joe always talks about how ambisonics is a way of thinking. It’s a way of thinking about space.”

Psychoacoustic spaces that exist only in the mind of the composer, spaces that the composer can shift and change throughout the course of a piece.

Hogan describes the process of directing a viewer’s gaze through focus and exposure and physical maneuvering of the camera during filming—a cinematic technique well understood in today’s visual culture. This control over audience perception is similar to composing in ambisonics, and he and Peterson are hoping to overlap the techniques in their projects.

“We’re thinking about how we can borrow from each other’s methodologies,” Peterson says.

But mostly the luminaries of ambisonics work entirely without images. During last November’s Music of Today concert, Anderson presented a piece by one of his mentors, a UK-based engineer and composer named Jonty Harrison. Anderson had spent a day outfitting Meany Hall with a 24-speaker, higher-order ambisonic array in preparation for Harrison’s 2015 composition “Going / Places,” which they tuned for the room using the Ambisonic Toolkit.

I was there. It was heavy.

Using 25 years of field recordings, Harrison created an aural travelogue that spanned the globe—railway stations in Europe, a rainforest in Borneo, a religious procession in Thailand, insects during a lunar eclipse in Australia—and strung them together through meticulous editing. Some segments were introduced via signaling, like a train or ship whistle or the sound of an airplane taking off. Others segued into one another through inscrutable sonic effects.

My mind was engaged in a wholly novel way, as if I were reading a novel without words or watching a film without images. Presented with a torrent of audio information, and accustomed to a kind of visual narrative that was entirely absent here, it filled in the blanks with snippets of memories, fantastic imaginings, suggestions of scenes. The sound was fiction and nonfiction, documentary and art film—the “hyperreal” that Pampin described. Engaging with it required imaginative effort, both thrilling and exhausting, like engaging a long-dormant muscle. And now that that muscle is activated, I’d like to put it to use.