Karen Bakker is a geographer who studies digital innovation and environmental governance. Her latest book, The Sounds of Life, draws on more than a thousand scientific papers, as well as Indigenous knowledge, to explore our emerging understanding of the planet’s soundscape.
Microphones are now so cheap, tiny, portable, and wirelessly connected that they can be installed on animals as small as bees, and in areas as remote as underneath Arctic ice. Meanwhile, artificial intelligence software can now help decode the patterns and meaning of the recorded sounds. These technologies have opened the door to decoding non-human communication — in both animals and plants — and understanding the damage that humanity’s noise pollution can wreak.
In an interview with Yale Environment 360, Bakker, a professor of geography and environmental studies at the University of British Columbia, describes how researchers are constructing dictionaries of animal communication, focusing on elephants, honey bees, whales, and bats. “I think it’s quite likely,” she says, “that within 10 years, we will have the ability to do interactive conversations with these four species.”
Yale Environment 360: What inspired you to write this book?
Karen Bakker: I’ve taught a course on environment and sustainability for the past two decades, and every year the picture is grimmer. My students are dealing with a lot of ecological grief and climate anxiety. I wanted to write a book for them. They are digital natives. Digital technology is so often associated with our alienation from nature, but I wanted to explore how it could instead reconnect us and offer measured hope in a time of environmental crisis.
In part, the idea about sound came from the work that I was doing with Indigenous communities. I was really struck by Indigenous teachings about being in dialogue with the nonhuman world. Such dialogues are not merely allegorical or metaphorical, but real exchanges between beings with different languages. Robin Wall Kimmerer writes in Braiding Sweetgrass that in Potawatomi teachings, at one time all beings spoke the same language, and that common language has since fractured.
As I started to delve into these topics, the world of digital bioacoustics was just opening up — research has exploded in the last 10 years, and I caught that wave. I was fascinated by scientists rediscovering, with very interesting digital experimental methods, some things that Indigenous communities have long known.
e360: You looked at more than 100 species, including some obvious noise makers and sound detectors like whales and bats. Can you give an example that surprised you?
Bakker: Peacocks make infrasound with their tails in the mating dance. We used to think the big tail was a visual display, and it is. But they’re also making infrasound with their tails at a specific frequency that vibrates the crest on top of the peahen’s head. We’ve known about that mating dance for probably thousands of years, but we only just figured out that it’s got a sonic component.
“Octopi hear in their arms with little organelles. There’s a myriad of ways nature has invented to hear that don’t involve ears.”
e360: You also cover species we traditionally think of as silent, such as coral larvae and plants. How do creatures that don’t even have ears hear?
Bakker: They are hearing: they are sensing sound, and they are deriving ecologically meaningful information from that sound.
Coral larvae, which are microscopic organisms, are able not only to distinguish the sounds of healthy versus unhealthy reefs, but to discern the sound of their own reef and swim toward it, even from miles away across the open ocean. Relative to their size, that puts them in the same category as the great bird migrations. We don’t fully understand how this is happening; we’ve only just learned that they’re capable of doing it.
Heidi Appel at the University of Toledo did this great experiment with plants: plants were played the sound of insects chewing on leaves, and they reacted by releasing defensive chemicals. The plants only responded to the sound of the insect that is their predator; they didn’t respond to the sound of an insect that doesn’t prey on that plant. Plants have little hairs on the outer surface of their leaves that are analogous to cilia, the hairs in your ears. We think that any organism with those cilia-like hairs can hear. Other structures are used for hearing, too: octopi hear in their arms with little organelles. There’s a myriad of ways nature has invented to hear that don’t involve ears.
e360: Is the plant communication result controversial?
Bakker: It’s robust and easily replicable. Where it’s controversial is how you interpret it. There’s been a big debate about whether we should call this “plant intelligence,” and that hinges on your definition of intelligence. If you believe that intelligence is an ability of an organism to receive information from the environment and use that to adapt and thrive and problem-solve, then, yes, plants are intelligent. This is an ongoing debate.
e360: You show how acoustic work has revealed surprisingly complex communication. Elephants, for example, have a separate warning call for the danger of bees versus the danger of people.
Bakker: And separate calls for different tribes, some of which don’t hunt elephants. Elephants have highly specific descriptions of their environment.
e360: How far have researchers come in understanding these languages?
Bakker: Several teams of scientists are constructing dictionaries of animal communication, with special attention to elephants, honey bees, whales, and bats. These are highly vocally active species; they all exhibit a high degree of social behavior; they all have long-lived cultures and transmit certain vocal markers over generations. Bats have songs that they teach to their young, much like birds do. So these are good candidate species for research using large datasets — we’re talking millions of vocalizations — and artificial intelligence to decode the patterns.
During the pandemic, “sound levels went back to the 1950s. And in that quiet we found a lot of animals recovering.”
Tim Landgraf in Berlin has created a robotic honey bee, encoded with sounds taught to it by an artificial intelligence algorithm, that can go into the hive and tell the bees where a new source of nectar is. It can do the waggle dance, and they will understand. We’ve broken the barrier of interspecies communication, which is amazing. I think it’s quite likely that within 10 years we will have the ability to do interactive conversations with these four species, with a couple hundred words.
e360: That’s amazing, but it also raises a lot of questions, as you discussed in your book, about whether we will listen and whether we will want to hear what these creatures have to say.
Bakker: It does. At best, what one could hope for is another era analogous to the Enlightenment, in which we come to understand that many of our cousins on the tree of life have a greater degree of sentience, intelligence, and language than we had previously thought. This undermines human exceptionalism — humans are no longer the center of the universe — but opens up more empathy, more of a sense of kinship with other species. We are relearning and rediscovering what Indigenous communities have long known about the importance of dialogue.
It’s very important to mention that this comes along with a commitment to Indigenous data sovereignty: we have to rethink the way in which we harvest data from places that are often territories under Indigenous ownership and stewardship. The Maori, for example, have outlined a convincing legal argument that Maori data should be subject to Maori governance. And that includes the electromagnetic spectrum. That includes sound. There is a whole set of practices here, and I think the bioacoustics community doesn’t consistently engage with this yet.
e360: What about noise pollution? How serious is it?
Bakker: Even the ambient levels of noise pollution that we accept on a daily basis in most cities have been associated with human health harms: cardiovascular effects like increased risk of stroke and heart attack, as well as cognitive impairment, developmental delays, and dementia.
e360: And it’s especially bad under water, where sound travels further than light.
Bakker: That’s right. These creatures are exquisitely sensitive to sound and use sound as their primary way of navigating the world. Noise pollution can reduce their ability to find food and hamper their ability to mate. Loud motorboat noise can literally deform or kill fish embryos and eggs. Seismic airgun blasts can kill zooplankton, the basis of the food chain, up to a mile from the blast site.
One study that I think is absolutely remarkable was just released on a marine seagrass, Posidonia oceanica. Seagrass is under threat, and a European team found that sound blasts can damage the plants. It’s as if a loud sound blast rendered you deaf, exploded your stomach so you couldn’t absorb any food, and knocked you off balance. That’s what loud sound does to these plants.
e360: What can be done about it?
Bakker: One silver lining is that as soon as you reduce the level of noise, there is an immediate, significant, and persistent benefit, unlike chemical pollution, which can take decades or centuries to degrade. Elizabeth Derryberry went out in San Francisco during the pandemic and found that birds were immediately responding to the quiet by singing songs with fuller vocal ranges and more complexity. Scientists who study acoustics called the pandemic the “anthropause” because sound levels went back to the 1950s. And in that quiet we found a lot of animals recovering.
“You put speakers underwater and play the sound of healthy reefs, and you can attract fish and coral larvae back to degraded reefs.”
e360: Is climate change also affecting the planet’s soundscape?
Bakker: Some of the great elders of this field, like Bernie Krause and Almo Farina, talk about the fact that climate change is “breaking the Earth’s beat.” The Earth has an acoustic rhythm that’s partly biological and partly geological, coming from ocean waves breaking over continental shelves, volcanoes, and calving glaciers. Climate change is altering that. If it’s hotter and drier, birds have a harder time singing into the dawn, because sound travels farther when it’s humid. And animals move. They become climate refugees looking for new habitat, no longer making sound in the places they used to. Some places go very quiet.
Noise pollution is like a pea-soup fog: we cannot see our hand in front of our face. Climate change is like introducing a lot of static into the cell phone network.
e360: Can sound be harnessed as a tool for good?
Bakker: Yes. You can use the sounds of healthy reefs, for example, as a form of music therapy for coral; the technical term is acoustic enrichment. You put speakers underwater and play the sound of healthy reefs, and you can attract fish and coral larvae back to degraded reefs. They’re doing that in the biggest reef restoration project in the world, off the coast of Indonesia.
e360: Have you found a way to convey the sounds of other species to people?
Bakker: When I give talks, one of the first things I do is bring the voices of other species into the room. Sometimes I ask people to guess: who’s making those sounds? And it’s so hard. People are truly shocked by some of the intricate noises that other species can make.
Spoiler alert: I’m working on a multimedia project, hopefully out next year, where people can experience some of this in other ways.
e360: What else is next for you?
Bakker: My next book is called Smart Earth. The Smart Earth Project examines how we might use the tools of the digital age to solve some of the most pressing problems of the Anthropocene, be that biodiversity loss or climate change.
One example is the work of Tanya Berger-Wolf at Ohio State University. She developed what is essentially a barcode reader, first for zebras, and then a more general app that can identify any creature with scars, stripes, spots, or markings. She’s on a mission to create a unique database of individuals of species on the IUCN Red List. Her work has been taken up by the Kenyan government.
Another example is a program off the coast of California that uses bioacoustics, tagging, satellite monitoring, and oceanographic modeling to pinpoint the locations of whales in real time, so that ship captains can slow down or steer clear of the areas where whales are and avoid ship strikes.
We now have an abundance of data and the tools to enable us to do real-time precision regulation that is preventive and predictive rather than reactive. It applies to endangered species protection; it applies to greenhouse gas emissions. That is going to totally change the landscape for environmental protection.
This interview has been edited for length and clarity.