Google's DolphinGemma AI Decodes Dolphin Communication in Collaboration with the Wild Dolphin Project and Georgia Tech

Google, in collaboration with the Wild Dolphin Project and Georgia Tech, has developed DolphinGemma, an open-source AI model designed to analyze and generate dolphin vocalizations. Announced on National Dolphin Day (April 14, 2025), DolphinGemma is trained on the Wild Dolphin Project's extensive acoustic database of wild Atlantic spotted dolphins, collected since 1985. Functioning as an audio-in, audio-out system, the model identifies recurring patterns in dolphin sounds and predicts likely sound sequences.

The 400M-parameter model, built on Google's Gemma family and using the SoundStream audio tokenizer, is compact enough to run on Pixel phones in the field. Researchers hope it will reveal hidden structure and potential meaning in dolphin communication, and help establish a shared vocabulary for interactive exchanges through the CHAT (Cetacean Hearing Augmentation Telemetry) system. By providing researchers worldwide with tools like DolphinGemma, Google aims to accelerate the study of marine mammal acoustic communication and support dolphin conservation.
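To make the "audio-in, audio-out" idea concrete, the sketch below shows how such a pipeline is commonly structured: a codec turns audio frames into discrete tokens, a causal transformer predicts the next tokens, and the codec decodes the continuation back into a waveform. This is a minimal illustration only, not DolphinGemma's actual code or API; every name in the snippet (ToyAudioTokenizer, ToyAudioLM, continue_vocalization, the 16 kHz rate, frame and vocabulary sizes) is a hypothetical stand-in for the real SoundStream tokenizer and the ~400M-parameter Gemma-based model.

```python
# Conceptual sketch only: the real DolphinGemma pipeline and weights are not
# described in this article; every class and constant below is hypothetical.
import torch
import torch.nn as nn

FRAME = 320   # samples per frame (e.g. 20 ms at an assumed 16 kHz)
VOCAB = 1024  # assumed size of the discrete audio-token codebook
DIM = 256     # toy model width, far smaller than the real ~400M-param model


class ToyAudioTokenizer:
    """Stand-in for a SoundStream-style codec: maps fixed-size audio frames
    to discrete codebook indices and back (here via nearest centroid)."""

    def __init__(self, vocab_size=VOCAB, frame=FRAME, seed=0):
        g = torch.Generator().manual_seed(seed)
        self.codebook = torch.randn(vocab_size, frame, generator=g)

    def encode(self, waveform: torch.Tensor) -> torch.Tensor:
        n = waveform.numel() // FRAME
        frames = waveform[: n * FRAME].reshape(n, FRAME)
        # Nearest codebook entry per frame -> one token per frame of audio.
        return torch.cdist(frames, self.codebook).argmin(dim=-1)

    def decode(self, tokens: torch.Tensor) -> torch.Tensor:
        return self.codebook[tokens].reshape(-1)


class ToyAudioLM(nn.Module):
    """Tiny causal transformer over audio tokens: given a prefix of
    vocalization tokens, predict a distribution over the next token."""

    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        layer = nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        x = self.embed(tokens)
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        return self.head(self.encoder(x, mask=mask))


@torch.no_grad()
def continue_vocalization(model, tokenizer, waveform, n_new_tokens=25):
    """Audio in, audio out: tokenize a recording, autoregressively extend
    the token sequence, and decode the continuation back to a waveform."""
    tokens = tokenizer.encode(waveform).unsqueeze(0)
    for _ in range(n_new_tokens):
        logits = model(tokens)[:, -1, :]
        next_token = logits.argmax(dim=-1, keepdim=True)  # greedy decoding
        tokens = torch.cat([tokens, next_token], dim=1)
    return tokenizer.decode(tokens[0, -n_new_tokens:])


if __name__ == "__main__":
    tokenizer = ToyAudioTokenizer()
    model = ToyAudioLM().eval()            # untrained: illustration only
    clip = torch.randn(16000)              # 1 s of placeholder audio
    predicted = continue_vocalization(model, tokenizer, clip)
    print(predicted.shape)                 # torch.Size([8000]) = 0.5 s more
```

In the real system, the predictive model is trained on the Wild Dolphin Project's recordings, so its continuations reflect statistically likely dolphin sound sequences rather than the random output this untrained toy produces.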
Edited by: MARIA Mariamarina0506