Figuring out audio normalization
Posted: Tue Feb 17, 2026 7:20 pm
Volume for SFX and music in my game is a bit all over the place. I know audio normalization is something I should probably be doing, but when I tried it in Carrot Survivors I just couldn't figure it out. Anyway, it's time to do some research and figure this out for real, so I thought I'd share my investigation ^^
This reddit comment was useful:

> Sound designer here, I suggest you to learn a bit of reaper to batch normalize sample.
>
> [Edit] clarified my answer
>
> For short sounds (like UI clicks, impacts, or brief effects), peak normalization is actually more appropriate since LUFS measurements are designed for longer content and can be less reliable on very short samples.
>
> For longer sounds (music, ambiance, longer effects), LUFS normalization is the way to go rather than peak or RMS, as it better represents how humans perceive loudness.
>
> Reaper can handle both types of normalization in batch, which is what makes it such a powerful tool for sound design workflow.
>
> Quick tip: while these measurements are great starting points, always trust your ears for the final balance, especially for short sounds where frequencies and transients can significantly affect perceived loudness!

Off to a good start, using the memetic reddit "$PROFESSION here" intro. But the advice looks solid. Apparently there are two kinds of normalization, and it's very likely I had been doing it all wrong, because I remember using the same script for music and sound effects, which means I was probably doing the wrong thing to at least half of my audio. We have:
- Peak normalization, for SFX
- LUFS normalization, for music
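To make sure I actually understand the difference, here's a minimal NumPy sketch of the two gain calculations. Peak normalization just scales so the loudest sample hits a target dBFS. For LUFS the hard part is the *measurement* (ITU-R BS.1770 K-weighting plus gating, which a library like pyloudnorm or a tool like Reaper handles); once you have a measured integrated loudness, applying it is the same kind of gain. The function names and the -1 dBFS / -16 LUFS targets are my own placeholders, not anything from the comment.

```python
import numpy as np

def peak_normalize(samples: np.ndarray, target_db: float = -1.0) -> np.ndarray:
    """Scale so the loudest sample sits at target_db dBFS (peak normalization)."""
    peak = np.max(np.abs(samples))
    if peak == 0.0:
        return samples  # pure silence, nothing to scale
    target_linear = 10.0 ** (target_db / 20.0)
    return samples * (target_linear / peak)

def loudness_normalize(samples: np.ndarray,
                       measured_lufs: float,
                       target_lufs: float = -16.0) -> np.ndarray:
    """Apply the gain that moves a measured integrated loudness to the target.

    measured_lufs must come from a real BS.1770 meter (e.g. pyloudnorm's
    Meter.integrated_loudness); this function only applies the dB offset.
    """
    gain_db = target_lufs - measured_lufs
    return samples * 10.0 ** (gain_db / 20.0)

# A quiet sine burst, peak-normalized to -1 dBFS:
sfx = 0.3 * np.sin(np.linspace(0.0, 2.0 * np.pi, 1000))
sfx_norm = peak_normalize(sfx, target_db=-1.0)

# Music measured at -20 LUFS, pushed up to -14 LUFS (+6 dB of gain):
music = np.full(10, 0.5)
music_norm = loudness_normalize(music, measured_lufs=-20.0, target_lufs=-14.0)
```

The asymmetry is the whole point: for a short click the peak *is* basically the loudness, while for a three-minute track the peak tells you almost nothing, which I assume is why the comment splits them the way it does.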