Posted by gsf_emergency_6 4 days ago
These hydrophones are a bit more expensive (~$1k per deployment) but still very accessible compared to how much it usually costs. And the goal is to bring the cost down to the ~$100 range (so $5 is very impressive!):
https://experiment.com/projects/can-low-cost-diy-hydrophones...
All the data is being saved (used for scientific research & ML training), with some of the hydrophones going back to 2017, and yes it's quite difficult to listen to and review so much audio. Better tools like the hydrophone explorer UI are much needed (been working on something similar).
One of the things that's surprised me the most is how difficult it is to keep hydrophones up and running. I can sympathize with both the technical and social challenges—underwater is not a friendly environment for electronics, and it can be difficult to get permission to deploy hydrophones. But it's incredibly rewarding when it works and you capture some cool sounds.
For anyone interested, all the code is open source and acoustic data is freely available:
Code: https://github.com/orcasound/
Data: https://registry.opendata.aws/orcasound/
Community: https://orcasound.zulipchat.com/
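Triaging that much audio is the hard part. As a minimal sketch of the kind of first-pass tooling mentioned above (assuming Python with NumPy/SciPy; `flag_loud_windows` and all thresholds are illustrative choices, not anything from the Orcasound codebase), here's a band-limited energy detector that flags windows worth a human listen:

```python
import numpy as np
from scipy import signal

def flag_loud_windows(audio, sr, band=(200, 8000), win_s=5.0, thresh_db=6.0):
    """Flag windows whose in-band energy exceeds the median by thresh_db dB.

    Crude first-pass triage, not a species classifier: it just surfaces
    segments worth listening to out of hours of recording.
    """
    f, t, sxx = signal.spectrogram(audio, fs=sr, nperseg=1024)
    in_band = (f >= band[0]) & (f <= band[1])
    energy = sxx[in_band].sum(axis=0)  # in-band energy per time bin
    bins_per_win = max(1, int(win_s / (t[1] - t[0])))
    n_win = len(energy) // bins_per_win
    win_energy = energy[: n_win * bins_per_win].reshape(n_win, -1).mean(axis=1)
    db = 10 * np.log10(win_energy / np.median(win_energy))
    return [i * win_s for i in range(n_win) if db[i] > thresh_db]

# Synthetic example: 60 s of faint noise with a 1 kHz "call" at t = 30 s
sr = 16_000
rng = np.random.default_rng(0)
t = np.arange(60 * sr) / sr
audio = 0.01 * rng.standard_normal(len(t))
call = (t > 30) & (t < 33)
audio[call] += 0.5 * np.sin(2 * np.pi * 1000 * t[call])
print(flag_loud_windows(audio, sr))  # flags the window containing the call
```

A real explorer UI would layer spectrogram rendering and playback on top, but even this kind of crude energy gate cuts the listening workload dramatically.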
Where could I learn more about the requirements for this? I love building tools like this.
Requirements are pretty flexible, but the inspiration is largely iNaturalist, and also this very cool project put together by Google Creative Lab back in 2019 https://patternradio.withgoogle.com/
Best place to learn more is to stop by the community Zulip chat (https://orcasound.zulipchat.com/) and ask questions, it's full of really knowledgeable people. Also you can explore the entire codebase here: https://github.com/orcasound/orcasite
I worked on DAS (distributed acoustic sensing) monitoring for subsea power cables (to monitor cable health!); it turns out they are basically a submarine detection system.
For interest:
* it's one reason we know so much about ocean temperatures, and tangentially why we have great data showing climate change is real, and
* they had some cool R&D vessels:
FLIP was originally built to support research into the fine-scale phase and amplitude fluctuations in undersea sound waves caused by thermal gradients and sloping ocean bottoms. This acoustic research was conducted as a portion of the Navy's SUBROC program.
~ https://en.wikipedia.org/wiki/RP_FLIP

Supposedly new submarines are so quiet that they can't be detected anyway. I'm sure there's a large element of exaggerating abilities here, but there's definitely an element of truth: in 2009, two submarines carrying nuclear weapons (not just nuclear powered) collided, presumably because they couldn't detect each other [1]. If a nuclear submarine cannot detect another nuclear submarine right next to it, then it's unlikely your $5 hydrophone will detect one at a distance.
Of course, none of this means that the military will be rational enough not to be annoyed with you.
[1] https://en.wikipedia.org/wiki/HMS_Vanguard_and_Le_Triomphant...
https://www.birds.cornell.edu/home/deep-listening/
https://depts.washington.edu/uwb/revolutionizing-marine-cons...
Very cool and very powerful technology, it'll be interesting to see how fiber sensing progresses, especially with how much undersea fiber already exists. For subsea power cables, is there a parallel fiber dedicated just for DAS monitoring? Do these get bundled in with data fiber runs as well? I've been curious how well DAS can work over actively lit / in-service fiber.
A supplier played whale song they recorded from cables, and said they repackage and sell the same product to defense contractors.
https://www.ecfr.gov/current/title-22/chapter-I/subchapter-M...
(Search for hydrophone)
Recording full-fidelity whale or dolphin sounds (amongst others) requires using a higher sample rate than is available in most consumer-grade equipment. There's a lot more information down there!
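To make the sample-rate point concrete: by the Nyquist criterion you need to sample at more than twice the highest frequency you want to capture. A quick back-of-the-envelope check (the frequency bounds below are rough, commonly cited ballpark figures, not values from this thread):

```python
# Rough upper frequency bounds for some signals of interest (Hz).
# Ballpark figures for illustration only, not authoritative values.
SIGNALS = {
    "human speech": 8_000,
    "humpback whale song (incl. harmonics)": 24_000,
    "dolphin echolocation clicks": 150_000,
}

CONSUMER_SR = 48_000  # typical consumer-grade sound card sample rate

for name, f_max in SIGNALS.items():
    nyquist_sr = 2 * f_max  # minimum sample rate to represent f_max
    ok = CONSUMER_SR >= nyquist_sr
    print(f"{name}: needs > {nyquist_sr} Hz sampling; 48 kHz gear ok? {ok}")
```

Dolphin clicks alone push you to ~300 kHz sampling, which is why dedicated scientific recorders exist at all.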
Here [1] is a page at Klover, and here [2] is one at Shure. Not sure if there's a formal specification for this, or if it's just something that manufacturers started doing.
[1]: https://www.kloverproducts.com/blog/what-is-plugin-power
[2]: https://service.shure.com/s/article/difference-between-bias-...
https://github.com/Vivek-Tate/IDS-Detection-and-Exploiting-V...
Information here from a superb podcast
Most bioacoustics work now is: deploy a recorder, stream terabytes to the cloud, let a model find “whale = 0.93” segments, and then maybe a human listens to 3 curated clips in a slide deck. The goal is classification, not experience. The machines get the hours-long immersion that Roger Payne needed to even notice there was such a thing as a song, and humans get a CSV of detections.
A $5 hydrophone you built yourself flips that stack. You’re not going to run a transformer on it in real time, you’re going to plug it into a laptop or phone and just…listen. Long, boring, context-rich listening, exactly the thing the original discovery came from and that our current tooling optimizes away as “inefficient”.
If this stuff ever scales, I could imagine two very different futures: one is “citizen-science sensor network feeding central ML pipelines”, the other is “cheap instruments that make it normal to treat soundscapes as part of your lived environment”. The first is useful for papers. The second actually changes what people think the ocean is.
The $5 is important because it makes the second option plausible. You don’t form a relationship with a black-box $2,000 research hydrophone you’re scared to break. You do with something you built, dunked in a koi pond, and used to hear “fish kisses”. That’s the kind of interface that quietly rewires people’s intuitions about non-human worlds in a way no spectrogram ever will.
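The "CSV of detections" workflow described above can be sketched in a few lines (assuming Python; `score_window` here is a stand-in energy heuristic, not a real classifier, and the threshold is arbitrary):

```python
import csv
import io
import numpy as np

def score_window(window: np.ndarray) -> float:
    """Stand-in for a real classifier; here, just scaled RMS energy."""
    return float(min(1.0, np.sqrt(np.mean(window ** 2)) * 10))

def detections_to_csv(audio, sr, win_s=5.0, threshold=0.5):
    """Slide a window over the recording and emit rows above threshold."""
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["start_s", "end_s", "score"])
    step = int(win_s * sr)
    for i in range(0, len(audio) - step + 1, step):
        score = score_window(audio[i : i + step])
        if score >= threshold:
            writer.writerow([i / sr, (i + step) / sr, round(score, 2)])
    return out.getvalue()

# Synthetic 30 s recording with one loud 5 s segment at t = 10 s
sr = 8_000
audio = 0.001 * np.ones(30 * sr)
audio[10 * sr : 15 * sr] = 0.2
print(detections_to_csv(audio, sr))
```

Which is exactly the point being made: the human only ever sees the rows that cleared the threshold, never the 29 minutes of context around them.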
Why not? You can run BirdNET's model live in your browser[0]. Listen live and let the machine do the hard work of finding interesting bits[1] for later.
[0] https://birdnet-team.github.io/real-time-pwa/about/
[1] Including bits that you may have missed, obvs.
RTL-SDR is another area like this, where there is so much to see 'hidden' in electromagnetic radio-frequency space.
Can we now gather lots of audio recordings paired with documentation of whale behavior, train an AI on them, and get a whale translator at the end?