Google's AI 'DolphinGemma' Decodes Dolphin Language
On National Dolphin Day, Google, Georgia Tech, and the Wild Dolphin Project unveiled DolphinGemma, an AI model trained on dolphin vocalizations. The model aims to decode the complex communication of dolphins, which has intrigued researchers for decades. The Wild Dolphin Project's long-term field study, running since 1985, has produced a vast dataset of dolphin sounds paired with observed behaviors.
DolphinGemma represents dolphin sounds as discrete audio tokens and predicts the sounds likely to follow, much as a language model predicts the next word in a sentence. The model is compact enough to be loaded onto waterproofed Pixel phones for underwater field research. Researchers hope DolphinGemma will help uncover the unique "syntax" of dolphin vocalizations, revealing how dolphins identify objects and communicate socially.
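The tokenize-then-predict idea can be illustrated with a deliberately tiny sketch: quantize audio samples into a small discrete vocabulary, then learn which token tends to follow which. Everything here is invented for illustration; DolphinGemma's actual tokenizer and transformer are vastly more sophisticated.

```python
# Toy illustration of the pipeline described above:
# (1) quantize raw audio samples into discrete "sound tokens", then
# (2) predict the next token from observed sequences, the way a
# language model predicts the next word. All names, sizes, and the
# bigram model are illustrative stand-ins, not DolphinGemma's design.
from collections import Counter, defaultdict

def tokenize(samples, n_levels=8):
    """Map each audio sample (in [-1, 1]) to one of n_levels discrete tokens."""
    return [min(int((s + 1) / 2 * n_levels), n_levels - 1) for s in samples]

class BigramPredictor:
    """Toy sequence model: predicts the most frequent follower of a token."""
    def __init__(self):
        self.counts = defaultdict(Counter)

    def train(self, tokens):
        for a, b in zip(tokens, tokens[1:]):
            self.counts[a][b] += 1

    def predict_next(self, token):
        followers = self.counts[token]
        return followers.most_common(1)[0][0] if followers else None

audio = [0.1, 0.5, 0.1, 0.5, 0.1, 0.5]   # pretend dolphin recording
tokens = tokenize(audio)                  # -> a short discrete sequence
model = BigramPredictor()
model.train(tokens)
```

After training, `model.predict_next(tokens[0])` returns the token that most often followed it, which is the shape of the prediction task, if not the scale.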
The Wild Dolphin Project plans to pair DolphinGemma with CHAT (Cetacean Hearing Augmentation Telemetry) to establish a shared human-dolphin vocabulary. CHAT plays synthetic whistles and associates each one with an object, such as a toy the dolphins enjoy, so that a dolphin mimicking a whistle can effectively request that object. Google hopes the model will let researchers analyze dolphin sounds far faster than manual review allows, potentially enabling basic two-way communication and a deeper understanding of these intelligent marine mammals.
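The association step CHAT relies on can be sketched as a simple two-way mapping: teach the system that a synthetic whistle names an object, then resolve a detected whistle back to that object. The whistle IDs and object names below are hypothetical placeholders; the real system matches live acoustic signals, not string labels.

```python
# Hypothetical sketch of CHAT's shared-vocabulary idea: pair a synthetic
# whistle with an object, then look up the object when that whistle is
# detected. Whistle IDs and object names are invented for illustration.
class SharedVocabulary:
    def __init__(self):
        self._by_whistle = {}

    def associate(self, whistle_id, obj):
        """Record that a synthetic whistle refers to an object."""
        self._by_whistle[whistle_id] = obj

    def resolve(self, whistle_id):
        """Return the object a detected whistle refers to, or None if unknown."""
        return self._by_whistle.get(whistle_id)

vocab = SharedVocabulary()
vocab.associate("whistle-A", "scarf")      # placeholder object names
vocab.associate("whistle-B", "sargassum")
```

A dolphin mimicking `"whistle-A"` would then be interpreted as asking for the associated object, which is the behavioral loop the researchers hope to close.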