In 20 years at most, we will be able to chat with our dogs, sing with whales, and respond to a roar. In short, truly conversing with animals is no longer science fiction. Karen Bakker, a professor at the University of British Columbia, has no doubt: the breakthrough is just around the corner, thanks to advances in artificial intelligence. “We don’t yet possess a sperm whale dictionary, but we now have the ingredients to create one,” Bakker writes in her book “The Sounds of Life”, cited by the Financial Times in a lengthy article on the subject. The tool the scientists envision would be a kind of “Google Translate for the zoo.”
As AI makes giant strides, Bakker advances the tantalizing possibility of interspecies communication. More than that, she puts a date on it: within the next two decades, humans will use machines to translate and replicate animal sounds.
This sound revolution has been triggered by advances in hardware and software. Cheap, rugged, long-lived microphones and sensors are now being attached to trees in the Amazon, rocks in the Arctic or the backs of dolphins, enabling real-time monitoring. This bioacoustic data stream is then processed by machine learning algorithms, which can detect patterns in natural infrasonic (low-frequency) or ultrasonic (high-frequency) sounds that are inaudible to the human ear.
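To make the idea concrete, here is a minimal sketch, not taken from Bakker's work, of the kind of first-pass filtering such a bioacoustic pipeline might apply before any pattern detection: measuring how much of a recording's energy falls outside the human-audible band (roughly 20 Hz to 20 kHz). The function name and thresholds are illustrative assumptions; a real system would use far more sophisticated spectrogram-based models.

```python
import numpy as np

# Rough bounds of human hearing; signals outside this band are the
# infrasound/ultrasound the article describes as inaudible to us.
HUMAN_LOW_HZ = 20.0
HUMAN_HIGH_HZ = 20_000.0

def inaudible_energy_fraction(signal, sample_rate):
    """Fraction of spectral energy below 20 Hz or above 20 kHz.

    A value near 1.0 suggests the recording is dominated by sound
    a human listener could not hear at all.
    """
    spectrum = np.abs(np.fft.rfft(signal)) ** 2          # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    total = spectrum.sum()
    if total == 0:
        return 0.0
    mask = (freqs < HUMAN_LOW_HZ) | (freqs > HUMAN_HIGH_HZ)
    return float(spectrum[mask].sum() / total)

# Synthetic one-second test signals; 96 kHz sampling is needed so the
# ultrasonic tone sits below the Nyquist frequency (48 kHz).
sr = 96_000
t = np.arange(sr) / sr
audible = np.sin(2 * np.pi * 440 * t)        # concert A, clearly audible
ultrasonic = np.sin(2 * np.pi * 40_000 * t)  # bat-range tone

print(inaudible_energy_fraction(audible, sr))     # close to 0.0
print(inaudible_energy_fraction(ultrasonic, sr))  # close to 1.0
```

In practice the interesting step comes next: feeding the flagged ultrasonic or infrasonic segments to trained models that look for recurring structure, which is where the machine learning the article mentions does its work.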
But, Bakker points out, these data only make sense when combined with human observations of natural behaviors obtained through the painstaking fieldwork of biologists or the crowdsourced analysis of amateurs. For example, Zooniverse, the citizen science research initiative that can mobilize more than a million volunteers, has helped collect all kinds of data and training sets for machine learning models. “People think artificial intelligence is a pixie dust to sprinkle on everything, but that’s actually not how it works,” Bakker says. “We are using machine learning to automate and accelerate what humans were already doing.”