In Pursuit of Silence
Among fallow deer, for example, bucks vocalize only during the mating season, producing a call known as a groan. Researchers from the University of Zurich have recently shown that the higher-ranking male deer in a herd produce the lowest groans—and that this ranking is also the best predictor of their odds for mating success. Female deer are magnetized by the same acoustic cues indicating dominant status that males rear back from. So important is this call to the future of the deer that at the height of the breeding season a buck will groan up to three thousand times per hour—making him sound hoarser than a hard-smoking heavy-metal singer after sixty. At the peak of the rut, this hoarseness actually raises the pitch of the buck’s call (perhaps also signaling to his rivals a drop in testosterone-fueled fighting spirit).
Although this dynamic, wherein lower pitch indicates greater reproductive viability, holds sway throughout most of the animal kingdom, human noise may be starting to throw a wrench into the works. Studies of certain frog species have indicated that in areas with significant traffic noise, male frogs are being forced to raise the pitch of their call in order to make themselves heard. But the calculus for the croakers is tricky since the price of audibility is a commensurate drop in their appeal to females—and a reduction in their ability to threaten other males. Only by sounding littler and weaker than they actually are can these male frogs get the females to recognize they even exist.
John J. Ohala, professor emeritus of linguistics at the University of California at Berkeley, has tied the vulnerable submissiveness associated with higher frequencies to the acoustic origins of the smile. Smiling reduces the resonant cavity of the mouth, thereby raising the vocal pitch of sound emitted. He has also made the case that the larger the vibrating membrane, “the more likely it is that secondary vibrations could arise,” giving the voice in question an irregular, rough texture. When a sound includes a number of secondary vibrations, it will be less predictable. Since we are biologically programmed to associate unpredictability with danger, a low and rough voice is the most frightening. (It is not coincidental that one of the oldest, most widespread religious artifacts is an object called a “bull-roarer”—a piece of wood tied to a string which, when swiftly spun around, produces a loud, roaring, eerie noise. Versions of the bull-roarer have been found everywhere from ancient Greece to Mexico, Africa, Australia, and Ceylon, with its use varying between summoning the gods and chasing off evil spirits.) When I spoke with David Huron, head of the Cognitive and Systematic Musicology Laboratory at Ohio State University, on the subject of the male urge to make loud, deep noises, he said, “It’s all about pecking order. That’s why men don’t cry and why the pitch of their voice drops at pubescence.” But what about the fact that acoustical bluster is no guarantee that a physical fight will actually be avoided? “It’s females who have a low-risk reproductive strategy,” he fired back. “Males have a very low likelihood of contributing to the gene pool.”
By now I had a few ideas about the roots of the pursuit of silence. Our forebears sought quiet to hear threats and potential meals in motion more clearly, and because silence helped them focus. I also could see the way that making loud sounds might, on occasion, have provided a weapon to ward off enemies and attract erotic partners for whom, when it came to vocal cords, size mattered. But still I wondered, is it really true that all our reactions to sound are dictated by simple equations: loud, low sound equals something powerful, so be scared or be prepared to mate, or both; soft, little sound equals something small, so be calm, be prepared to mate, or both? What about noises that stimulate pleasure, pain, or fury apart from nearby mates and predators? How did these fit in with ideas of evolutionary psychology?
SEARCHING FOR THE FUNDAMENTAL FREQUENCY
“Sound is a brute force,” Daniel Gaydos, president of the Museum of Sound Recording, declared. He held a long-fingered hand flat in the air before his face and moved it slowly up and down. “That is a sound disturbance at two times a second,” he said. “Though you can’t hear it as sound. If you see a suspension bridge moving up and down once every two seconds, its pitch is half a cycle per second. We can see a vibration up to about twenty times a second. After that it starts to blur. When we can no longer track a vibration visually, we begin to hear it. Human hearing falls roughly within the range of twenty to twenty thousand wave cycles per second. Whenever you start thinking about sound, you’ve got to keep coming back to that idea of a physical force: objects vibrating and shaking off mechanical waves that clench and release the molecules of whatever medium they’re moving through.”
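Gaydos’s point, that pitch is simply the rate of a mechanical oscillation, can be restated in a few lines of code. This is an illustrative sketch rather than anything from the book; the function names are invented, and the cutoffs are the rough figures he cites (about twenty cycles per second as the edge of visible motion, twenty thousand as the upper limit of hearing).

```python
# A vibration's frequency (cycles per second, Hz) is the reciprocal
# of its period (seconds per cycle).

def frequency_hz(period_seconds: float) -> float:
    """Return the frequency of a vibration with the given period."""
    return 1.0 / period_seconds

def how_perceived(freq: float) -> str:
    """Rough perceptual bands from the passage: seen, heard, or neither."""
    if freq <= 20:
        return "visible motion"       # slow enough to track with the eyes
    elif freq <= 20_000:
        return "audible sound"        # roughly the human hearing range
    else:
        return "ultrasonic"           # beyond human hearing

# The suspension bridge rising and falling once every two seconds:
bridge = frequency_hz(2.0)            # 0.5 Hz
print(bridge, how_perceived(bridge))  # 0.5 visible motion

# Concert A, by contrast:
print(how_perceived(440))             # audible sound
```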
Gaydos is a tall, bony man with a mellifluous radio-announcer’s voice, the limpid eyes of a far-seeing bird, and a somehow predictable devotion to the more mystical later writings of Carl Jung. I visited him at the Institute of Audio Research, in New York City, where he is a professor. I wanted to understand what sound waves are and how they impact us, even when they don’t carry specific animal associations.
The institute, which offers a one-year degree to students aspiring to a career in music, occupies most of a narrow townhouse off Union Square. On the day I first visited, there was a large group of young men smoking out front of the entrance, almost all of whom were wearing gray hoods over dark hats over white iPods, along with very bright black-and-white sneakers. Under the hoodie of the man nearest the door was a T-shirt emblazoned with an image of a fierce, hairy orange monster wearing headphones and baring its fangs above a turntable on which it was scratching a record album.
Gaydos explained to me that a large part of what the institute offers these days is human contact, since almost all the students have been playing around in their rooms with digital recording technology for years. “Community is gone when it comes to the recording studio,” he said. We walked by framed gold and platinum record albums toward his office, passing students on cell phones and rooms filled with animated, mostly male students. They were wearing headphones behind the glass walls of a recording studio and wearing headphones in sound-mixing class while staring at display screens. Students at the institute, Gaydos told me, are “amazingly open” to listening to new sounds, so long as these sounds are coming through headphones. When I later asked Gaydos if he saw commonalities between the students, he thought for a moment and then remarked that attention deficit disorder seemed to serve as a kind of “common language” among his pupils.
In fact, research indicates that music heard through MP3 players or other devices can function as white noise that enables people with attention deficit disorder to concentrate. There are differing explanations for why the brains of many ADD and autistic individuals need noise to focus, but the point is worth bearing in mind when we think about our loudening world. Much professional analysis of noise problems concerns the “signal-to-noise ratio.” Simply put, signal-to-noise expresses the ratio of sound carrying significant information to that of the ambient noise surrounding that sound. It could be that when white noise enters an already disordered cognitive system, it resonates with the signal, amplifying it and preventing the signal from being lost amidst competing sounds. Alternately, a steady noise under the individual’s control may mask the distraction of other, novel stimuli. Either way, the tendency of more and more people to turn up the sound in more environments may reflect a rise in ADD and autism spectrum conditions across the general population. If this is true, increasing numbers of people whose brains need noise to function optimally may find they have to keep turning up the volume further as the noise made by other people doing the same thing drowns out their personal distraction-of-choice.
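The ratio itself is simple arithmetic, and it is usually quoted in decibels. The sketch below is a generic illustration of that convention, not drawn from any study cited here; the power figures are arbitrary.

```python
# Signal-to-noise ratio: the level of the informative sound relative
# to the ambient noise surrounding it, expressed in decibels.
from math import log10

def snr_db(signal_power: float, noise_power: float) -> float:
    """Signal-to-noise ratio in decibels from two power measurements."""
    return 10 * log10(signal_power / noise_power)

quiet_room = snr_db(signal_power=100.0, noise_power=1.0)   # 20.0 dB
loud_cafe  = snr_db(signal_power=100.0, noise_power=50.0)  # ~3.0 dB
print(quiet_room, loud_cafe)
```

As the noise power approaches the signal power, the ratio falls toward 0 dB and the signal becomes progressively harder to pick out.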
As we sat down in an empty classroom, Gaydos began drawing different musical notes and charts on the blackboard. He told me that he wanted to talk about the kinds of relationships between frequencies most likely to trigger serenity, as David’s harp soothed the madness of Saul. Pythagoras, whose followers were required to maintain a strict vow of silence for up to five years—during which time, as “auditors,” they learned the virtues of listening and “continence of words”—was the most renowned early explorer of the numerical relationships that find expression in the acoustical universe. In a legend passed down through the ages, Pythagoras was passing a blacksmith shop one day when he realized that the notes produced by the hammers were directly proportional to their weights. He rushed back to his workshop, hung four strings from pegs in the wall, fixed hammers of differing weights to these strings, and began discovering the mathematical ratios behind the familiar harmonic intervals of musical expression. In the course of his experimenting, he found that if you snip a vibrating string in half you raise its pitch exactly an octave. When you play two notes an octave apart, so that the frequency of vibration of one note is double that of the one below, you produce what the Greeks idealized as the most pleasing ratio to the ear. Gaydos calls it “the easiest relationship to feel” and suggests that what may make it so gratifying a sound is our ability to actually hear the two sound waves of an octave nestling snugly one inside the other. Cutting more strings to lengths based on divisions of fifths, Pythagoras was able to reproduce the entire musical scale.
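The string arithmetic in the legend is easy to verify. The sketch below is mine, not the book’s; it assumes an ideal string whose frequency is inversely proportional to its sounding length, with an open-string frequency chosen arbitrarily at 110 Hz.

```python
# Halving a string's length doubles its frequency, raising the pitch
# exactly one octave; other simple fractions give the other Greek
# consonances (frequency varies inversely with length, all else equal).

def pitch(open_string_freq: float, length_fraction: float) -> float:
    """Frequency of a string shortened to the given fraction of its length."""
    return open_string_freq / length_fraction

a_string = 110.0                  # open string at 110 Hz (assumed)

octave = pitch(a_string, 1/2)     # half the string   -> 220 Hz, ratio 2:1
fifth  = pitch(a_string, 2/3)     # two-thirds        -> 165 Hz, ratio 3:2
fourth = pitch(a_string, 3/4)     # three-quarters    -> ~146.7 Hz, ratio 4:3

print(octave / a_string)          # 2.0 (the octave's 2:1 ratio)
print(fifth / a_string)           # 1.5 (the perfect fifth, 3:2)
```

Stacking those 3:2 fifths and folding them back into a single octave is how the Pythagorean scale is generated.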
The Pythagorean moment at the blacksmith’s has been discredited in modern times by authorities who point out that the sound of hammer blows does not vary based on the weight of the tool, and that Pythagoras wasn’t the first to understand the musical relationships he’s credited with. His uncontested achievements lie more in the metaphysical line that led him to hypothesize that music, when rightly proportioned, could infuse the harmonic order of the universe into the listener, thereby exerting a benign moral influence. The numerical rules of the sound world operate with a consistency so striking that when he began to grasp their power, Pythagoras compared the entire universe to a musical instrument that brought dissonant elements into harmony. Each planet, Pythagoras believed, produces a note in the course of its orbit based on its distance from the still center of the Earth, by the same laws that determine the relative pitch of different-length strings. Apollo, the sun god, conducted the music of the spheres with his lyre.
Augustine added a Christian framework to the ethical resonance of well-ordered sound, writing that through the mathematical ratios governing holy music one could intuit God’s harmonious arrangement of the universe, which provided a template for the proper structuring of individual existence. The medieval church drew heavily on the so-called “sacred geometries” that embodied these ratios in all of its this-worldly constructions. Saint Bernard wrote: “What is God? He is length, width, height and depth.” The visual grace of Cistercian architecture, which is often referred to as an “architecture of silence,” derives from the way it conforms to the ratios of musical harmony. “There must be no decoration, only proportion,” remarked Saint Bernard. And the ideas of sacred order, in turn, found their way back into the modes and cadences of Gregorian chant.
These ratios still carry power. In the summer of 2008, Universal Records released a recording of monks chanting in one of the oldest Cistercian monasteries in the world, Stift Heiligenkreuz, near Vienna. The album soared in popularity, for a time eclipsing sales of superstars like Madonna. Coincident with the release of Chant: Music for the Soul, Alan Watkins, senior lecturer in neuroscience at Imperial College, London, released a report on research conducted by his team that found “the regular breathing and musical structure of chanting” could have positive physiological effects. Watkins’s findings followed on earlier studies that showed how chanting reduced blood pressure and upped the levels of the performance hormone DHEA. The beneficial physical and mental effects of Cistercian chanting described by Watkins looked a lot like the ones I’d seen associated with silent meditation.
Indeed, Gaydos told me that the sacred geometries are part of the acoustical “bible” that professionals like him make use of all the time. “Anyone who builds a recording studio is aware, on some level, of the golden proportions, and the mathematical ratios and sequences that were subsequently developed from these equations,” he said. They have to be, Gaydos believes, because for sound purposes those proportions have rarely been improved upon. “Traditional churches sound beautiful,” he remarked. Some of the most successful recording studios in the world have, in fact, been churches—both the celebrated Columbia and Decca studios in New York City were converted church buildings, the former an Armenian church with a hundred-foot-high ceiling. It’s only in very recent times, Gaydos believes, that church architects have abandoned Pythagorean conventions to create spaces that are, he said, “completely fucked up acoustically.”
In a cosmos composed with such attention to harmonic order, issuing so awesome an invitation to unity and coherence, a note that does not play its rightful role in the larger pattern can be as disruptive as a conforming sound is gratifying. Thinking of this call to synchronize with the measure of the world, the medieval philosopher Boethius declared, “We love similarity, but hate and resent dissimilarity.” (A contemporary sound designer I spoke with, who produces CDs to help children fall asleep, described his process of combining sounds like chimes, bubbles blowing, waves, harps, and a heartbeat as one of “minimizing change and harmonic tension” in order to “reach habituation.”) Along these lines, Gaydos offered a definition of noise from the world of acoustical science. “Noise,” he said, “has more frequencies than musical sound and the frequencies are not related.” The way in which certain “unrelated”—that is, dissimilar, uneven—clusters of frequencies strike the ear can have dire real-world consequences.
Every structure, organic and nonorganic, has a special frequency at which it naturally vibrates when energized into motion. We might think of the situation this way: on some level, everything likes to be still—to be silent—as much as possible. But failing that perfect state of rest, there are specific speeds at which every structure is predestined to vibrate. An ashtray, a car steering wheel, a violin string, and the vocal folds of a child—each has its own fundamental frequency.
What makes for the fundamental frequency of a given object? That object’s mass and tension. In other words, the core physical truth of every structure determines the way it dances and shimmies off sound waves into the universe. An object’s fundamental frequency is a kind of naked snapshot of identity. By definition, this snapshot also reveals the other structures that the object is most likely to mutually resonate with.
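For the textbook case of a stretched string, the “mass and tension” claim has a standard form, f = √(T/μ) / (2L). The sketch below is a generic physics illustration, not drawn from Gaydos; the tension and mass figures are arbitrary.

```python
# Fundamental frequency of an ideal stretched string: tension T (newtons),
# linear mass density mu (kg per meter), vibrating length L (meters).
from math import sqrt

def string_fundamental(tension_n: float, mass_per_m_kg: float,
                       length_m: float) -> float:
    """Fundamental frequency (Hz) of an ideal stretched string."""
    return sqrt(tension_n / mass_per_m_kg) / (2 * length_m)

# Same tension and length, different mass: the heavier string sits lower.
light = string_fundamental(100.0, 0.001, 0.65)
heavy = string_fundamental(100.0, 0.004, 0.65)
print(light > heavy)   # True: more mass means a lower fundamental
```

This is the sense in which the fundamental is a “naked snapshot of identity”: change the body (mass) or its state (tension) and the frequency shifts with it.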
Our auditory systems seem to pursue fundamental frequency as an object’s calling card. Manufacturers of mini-speakers with limited bass range know that by multiplying frequencies in the upper harmonic pattern it’s possible to make people feel they are also hearing the low, fundamental harmonics with which the upper range is typically associated. But the bass boost is all in the mind. Similarly, when we’re playing a five-foot grand piano, the lowest key, vibrating at 27.5 times a second, is barely audible. What we hear when that bottom A is played is, in fact, the upper harmonics overlaying it. And yet, Gaydos says, what we’re always straining for, even if only subconsciously, is the fundamental frequency. “We’re always searching for the floor of things,” he said.
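The “missing fundamental” Gaydos describes can be made concrete. Because the upper harmonics are evenly spaced multiples of the fundamental, the implied floor frequency is their greatest common divisor; the ear, in effect, computes it. This sketch is my own illustration, using the piano’s bottom A at 27.5 Hz from the passage (frequencies are kept in tenths of a hertz so the arithmetic stays in integers).

```python
# Recover the implied fundamental of a tone from its upper partials.
from math import gcd
from functools import reduce

def implied_fundamental(harmonics_tenths_hz: list[int]) -> float:
    """Greatest common divisor of the partials, returned in Hz."""
    return reduce(gcd, harmonics_tenths_hz) / 10

# Upper harmonics of the piano's lowest A (2x, 3x, 4x of 27.5 Hz),
# expressed in tenths of a hertz: 55.0, 82.5, 110.0 Hz.
partials = [550, 825, 1100]
print(implied_fundamental(partials))  # 27.5
```

A mini-speaker exploits the same inference: reproduce only the upper partials and the listener’s auditory system supplies the bass that the hardware cannot.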
Perhaps, then, we listen for the fundamental frequencies to try and perceive the bodily truth of what’s radiating a particular sound. Only the fundamental frequency, the telltale pulse of mass and tension beneath all the noise, can signal to us how we ought to orient our own body in relation to the sound maker.
So what happens in the physics of unwanted sound? I talked with Andy Niemiec, the director of the Neuroscience Program at Kenyon College, about the effects of violating intervals between frequencies like the octave that strike our ears so benevolently. Niemiec is studying the ways that certain changes in harmonic structure trigger aggression. In particular, he’s looking at the ways that the harmonic relations in infant cries put them at risk for physical abuse. “There’s the base frequency of the sound,” Niemiec explained, “the fundamental, and then harmonic overtones, multiples of that fundamental. Any complex real-world sound has this structure. Tap a metal plate,” he said to me, “and you’re stressing that plate. That action is going to change the plate’s harmonics in ways we may have evolved to find irritating. There’s a lot of stuff out there that could be taken to argue that certain changes in harmonic levels relative to each other cause some sort of physiological change in us.”
Gaydos agrees. “Remember, the Earth spins at a particular frequency, even though we don’t hear that frequency as sound,” he told me. “There are correlations between the earthly vibrations that we have adapted to as part of our evolutionary process.” As he spoke, Gaydos began manipulating sound waves on Pro Tools, the software program that is now used to edit most film scores and other music, creating different, color-coded harmonic configurations and dissonances in celadon, pink, scarlet, and black patterns evoking the image of our planet as a great vibrational quilting bee.
Recently a few researchers at medical schools in the United Kingdom began looking at what happens in the brain when we listen to unpleasant sounds, like chalk on blackboards. Different bands of acoustical energy imprint themselves on different parts of the auditory cortex. By analyzing where the brain maps sounds that the study participants said they disliked, the researchers hoped to identify the spectrum of auditory representations that rub us the wrong way.
The study, which was published in the Journal of the Acoustical Society of America in December 2008, measured the effects of seventy-five different sounds, rating them from most to least unpleasant. The two top offenders were sounds that fortunately are rarely heard outside of certain last-ditch drinking establishments: “scraping a sharp knife along the surface of a ridged metal bottle” and “scraping a fork along glass.” But female screams and baby cries also ranked alarmingly near the top of the list. (Baby laughter was rated as the least unpleasant sound, followed by “water flow,” “small waterfall,” “bubbling water,” and “running water.”) The researchers found that energy in the frequency range of two to five kilohertz will almost invariably be perceived as obnoxious—roughly the same range that Niemiec singled out in his work on aggressive responses to infant cries.