How to Process Real-Time Underwater Audio Data with WebRTC and Node.js
Underwater monitoring systems—whether tracking marine life communications, detecting equipment anomalies, or conducting environmental surveys—require streaming audio data in real-time with minimal latency. If you're building a marine monitoring application that needs to capture, transmit, and analyze underwater audio feeds, this guide walks you through implementing real-time audio processing with WebRTC and Node.js.
Why Real-Time Underwater Audio Matters for Developers
Underwater robots and hydrophone networks generate continuous audio streams. Unlike traditional file-based processing, these applications demand:
- Sub-second latency for actionable alerts
- Reliable streaming over unstable network conditions
- Client-side preprocessing to reduce bandwidth
- Server-side aggregation across multiple sensors
WebRTC handles the transport layer, while Node.js processes the audio server-side. Together, they form a practical foundation for marine monitoring dashboards.
Architecture Overview
┌─────────────────┐
│ Hydrophone + │
│ Underwater Bot │
└────────┬────────┘
│ (Raw Audio PCM)
▼
┌─────────────────────────┐
│ Edge Device (Browser) │
│ - Capture audio │
│ - WebRTC encoding │
└────────┬────────────────┘
│ (WebRTC stream)
▼
┌──────────────────────────┐
│ Node.js Server │
│ - Receive streams │
│ - Decode audio │
│ - Process/analyze │
│ - Store to database │
└──────────────────────────┘
Step 1: Set Up Your Node.js Backend with WebRTC Support
Install the necessary dependencies:
npm init -y
npm install express simple-peer wrtc wav
mkdir recordings
Note that fs is built into Node.js and does not need to be installed; the recordings directory must exist before the server writes WAV files into it.
Key packages:
- express: HTTP server for signaling
- simple-peer: Simplified WebRTC peer connection
- wrtc: Node.js WebRTC bindings
- wav: Write audio to WAV format for storage
Step 2: Implement WebRTC Signaling Server
Your Node.js server handles WebRTC connection negotiation and audio stream reception:
const express = require('express');
const http = require('http');
const SimplePeer = require('simple-peer');
const wrtc = require('wrtc');
const wav = require('wav');
const fs = require('fs');
const app = express();
const server = http.createServer(app);
app.use(express.json());
let peers = new Map();
app.post('/signal', (req, res) => {
  const { peerId, offer } = req.body;
  // Create peer connection with wrtc. trickle: false makes simple-peer
  // emit a single, complete answer that we can return in the HTTP response
  // (with trickling enabled, 'signal' fires multiple times).
  const peer = new SimplePeer({
    initiator: false,
    trickle: false,
    wrtc,
    offerOptions: { offerToReceiveAudio: true }
  });
  peer.on('signal', (data) => {
    res.json({ answer: data });
  });
  peer.on('stream', (stream) => {
    console.log(`Receiving audio from peer: ${peerId}`);
    handleAudioStream(stream, peerId);
  });
  peer.signal(offer);
  peers.set(peerId, peer);
});
function handleAudioStream(stream, peerId) {
  // wrtc has no Web Audio API (no AudioContext or ScriptProcessor);
  // use its nonstandard RTCAudioSink to pull raw PCM frames off the track.
  const [track] = stream.getAudioTracks();
  const sink = new wrtc.nonstandard.RTCAudioSink(track);
  let chunks = [];
  let sampleRate = 48000; // updated from the first frame received
  sink.ondata = ({ samples, sampleRate: rate }) => {
    sampleRate = rate;
    chunks.push(Int16Array.from(samples));
  };
  // Write to file every 30 seconds
  const flushTimer = setInterval(() => {
    if (chunks.length > 0) {
      writeAudioToFile(chunks, sampleRate, peerId);
      chunks = [];
    }
  }, 30000);
  track.onended = () => {
    clearInterval(flushTimer);
    sink.stop();
  };
}
function writeAudioToFile(chunks, sampleRate, peerId) {
  const filename = `./recordings/${peerId}_${Date.now()}.wav`;
  const writer = new wav.Writer({
    channels: 1,
    sampleRate,
    bitDepth: 16
  });
  writer.pipe(fs.createWriteStream(filename));
  // wav.Writer expects raw bytes, so hand it each Int16 chunk as a Buffer
  for (const chunk of chunks) {
    writer.write(Buffer.from(chunk.buffer, chunk.byteOffset, chunk.byteLength));
  }
  writer.end();
  console.log(`Saved audio to ${filename}`);
}
server.listen(3000, () => console.log('Signaling server on :3000'));
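Peers also disconnect, so the peers map needs cleanup or it will leak connections. A minimal helper sketch, assuming only that each stored peer exposes a destroy() method (simple-peer does):

```javascript
// Remove a peer from the registry and free its resources.
// Works with any object exposing destroy(), as simple-peer instances do.
function cleanupPeer(peers, peerId) {
  const peer = peers.get(peerId);
  if (!peer) return false;
  peer.destroy();        // closes the underlying connection
  peers.delete(peerId);
  return true;
}

// Wire it to the events simple-peer emits on disconnect, e.g.:
// peer.on('close', () => cleanupPeer(peers, peerId));
// peer.on('error', () => cleanupPeer(peers, peerId));
```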
Step 3: Client-Side Audio Capture and Streaming
On the underwater device (or browser), capture audio and send it via WebRTC:
const SimplePeer = require('simple-peer');
async function startAudioStream(peerId) {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const peer = new SimplePeer({
    initiator: true,
    trickle: false, // emit one complete offer instead of trickled ICE candidates
    offerOptions: { offerToReceiveAudio: false },
    stream
  });
  peer.on('signal', async (offer) => {
    const response = await fetch('http://server:3000/signal', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ peerId, offer })
    });
    const { answer } = await response.json();
    peer.signal(answer);
  });
  peer.on('error', (err) => console.error('Peer error:', err));
}
startAudioStream('underwater-bot-001');
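When bandwidth to shore is tight, you can also preprocess on the client before streaming, as noted earlier. One option is decimating 48 kHz capture down to the 16 kHz this guide standardizes on. A minimal sketch of a naive 3x decimator (a production version would apply a low-pass filter first to avoid aliasing):

```javascript
// Naive 48 kHz -> 16 kHz downsampler: average each group of 3 samples.
// Assumes a mono Float32Array of samples in [-1, 1].
function downsample3x(samples) {
  const out = new Float32Array(Math.floor(samples.length / 3));
  for (let i = 0; i < out.length; i++) {
    out[i] = (samples[3 * i] + samples[3 * i + 1] + samples[3 * i + 2]) / 3;
  }
  return out;
}
```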
Step 4: Real-Time Audio Analysis
Add signal processing to detect events (e.g., sperm whale clicks, equipment anomalies):
function analyzeAudioSegment(audioData, peerId) {
  // Simple energy detection
  const rms = Math.sqrt(
    audioData.reduce((sum, val) => sum + val * val, 0) / audioData.length
  );
  const threshold = 0.1;
  if (rms > threshold) {
    console.log(`High activity detected from ${peerId}: RMS = ${rms.toFixed(4)}`);
    // Store event metadata
    logEvent(peerId, 'high_activity', { rms, timestamp: Date.now() });
  }
  // Frequency analysis (simplified; performFFT and findPeak are stubs)
  const fftData = performFFT(audioData);
  const dominantFreq = findPeak(fftData);
  return { rms, dominantFreq, timestamp: Date.now() };
}
function logEvent(peerId, eventType, data) {
  const log = {
    peerId,
    eventType,
    ...data
  };
  console.log('Event:', JSON.stringify(log));
  // Store to database (Supabase, PostgreSQL, etc.)
}
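The performFFT and findPeak calls above are left abstract. Here is one way to sketch them with a naive DFT, which is O(n²) but fine for short segments (swap in a proper FFT library for production); the default sample rate assumes the 16 kHz used elsewhere in this guide:

```javascript
// Naive DFT magnitude spectrum over the first n/2 bins.
// O(n^2) -- adequate for short analysis windows only.
function performFFT(samples) {
  const n = samples.length;
  const mags = new Array(Math.floor(n / 2));
  for (let k = 0; k < mags.length; k++) {
    let re = 0, im = 0;
    for (let t = 0; t < n; t++) {
      const angle = (-2 * Math.PI * k * t) / n;
      re += samples[t] * Math.cos(angle);
      im += samples[t] * Math.sin(angle);
    }
    mags[k] = Math.sqrt(re * re + im * im);
  }
  return mags;
}

// Return the frequency (Hz) of the strongest bin, skipping bin 0 (DC offset).
function findPeak(mags, sampleRate = 16000) {
  let best = 1;
  for (let k = 2; k < mags.length; k++) {
    if (mags[k] > mags[best]) best = k;
  }
  // Bin k of an n-point DFT corresponds to k * sampleRate / n,
  // and n = 2 * mags.length here.
  return (best * sampleRate) / (2 * mags.length);
}
```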
Practical Considerations
| Challenge | Solution |
|-----------|----------|
| Bandwidth | Compress audio client-side using Opus codec (WebRTC default) |
| Storage | Use chunked WAV files + database indexing for fast queries |
| Latency | Keep audio buffer small (256-512 samples), reduce WebRTC jitter |
| Multi-sensor | Run separate peer connections per device; aggregate in a separate service |
| Time sync | Use NTP on edge devices; correlate server timestamps |
Common Pitfalls
- Not handling stream end events: peers disconnect; clean up resources properly with peer.destroy()
- Audio sample rate mismatch: ensure client and server agree on 16 kHz (standard for marine sensors)
- Blocking audio processing: Use background workers or queue processing instead of blocking the main thread
- Unbounded memory growth: Implement ringbuffer for audio data; don't accumulate indefinitely
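On the last point, a fixed-capacity ring buffer keeps memory bounded no matter how long a stream runs, by overwriting the oldest samples once full. A minimal sketch:

```javascript
// Fixed-capacity ring buffer for audio samples. Once full, new samples
// overwrite the oldest ones, so memory use never grows past `capacity`.
class RingBuffer {
  constructor(capacity) {
    this.buf = new Float32Array(capacity);
    this.start = 0;   // index of the oldest sample
    this.length = 0;  // number of valid samples stored
  }
  push(samples) {
    for (const s of samples) {
      const end = (this.start + this.length) % this.buf.length;
      this.buf[end] = s;
      if (this.length < this.buf.length) {
        this.length++;
      } else {
        this.start = (this.start + 1) % this.buf.length; // drop oldest
      }
    }
  }
  // Copy out the buffered samples in chronological order.
  toArray() {
    const out = new Float32Array(this.length);
    for (let i = 0; i < this.length; i++) {
      out[i] = this.buf[(this.start + i) % this.buf.length];
    }
    return out;
  }
}
```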
Integration with Data Storage
For production deployments, send processed events to a database:
const { createClient } = require('@supabase/supabase-js');
const supabase = createClient(process.env.SUPABASE_URL, process.env.SUPABASE_KEY);
async function storeAudioEvent(peerId, analysis) {
  await supabase.from('audio_events').insert([
    {
      device_id: peerId,
      timestamp: new Date(),
      rms_energy: analysis.rms,
      dominant_frequency: analysis.dominantFreq
    }
  ]);
}
Testing Your Implementation
Use a tool like curl or ffmpeg to simulate audio input during development:
ffmpeg -f lavfi -i sine=frequency=440:duration=30 -c:a pcm_s16le -ar 16000 -ac 1 -f s16le test_audio.raw
The explicit -f s16le output format is needed because ffmpeg cannot infer a container from the .raw extension.
Then pipe to your capture logic for testing.
Next Steps
- Scale horizontally using Docker containers and load balancing (each peer on its own pod)
- Add ML inference with TensorFlow.js for pattern recognition
- Implement dashboards with real-time audio visualization (Plotly, Chart.js)
- Deploy on Kubernetes for reliable multi-region monitoring
Real-time underwater audio monitoring is now within reach for indie developers and startups. WebRTC's peer-to-peer foundation combined with Node.js processing gives you the low-latency, scalable architecture marine applications demand.