Probably my favorite technique I’ve stumbled on for analyzing audio evidence is synchronized, saturated audio. What is it? It’s the practice of recording with as many audio recorders as possible, then synching the audio streams together while reviewing evidence.
What is the Advantage?
How many times have you been investigating a location and heard a strange voice or sound? Was it someone outside the building? Was it paranormal? How can you be sure? The fact is that walls muffle and change sounds.
What if you had an audio recorder running outside the building or in several rooms, and what if you could combine all your recordings, synchronize them in time, and isolate individual tracks while you were reviewing evidence?
Does it work?
Unequivocally yes! SciMinds has encountered many situations where this technique has both invalidated and strengthened evidence.
Team members once heard what they described as a person gasping for their last breath. They were close: I had a horrible cold at the time, and after synching their recorders with mine, it was clearly one of my coughing spells, faintly audible from their location.
In Willows Weep, I was listening to what my handheld recorder was picking up when I heard a voice that sent chills up my spine. After synching all of the audio streams, it turned out my recorder was the only one to have picked up the sound; another recorder just a few feet away did not capture the voice.
Electronic Voice Phenomenon (EVP) differs from a disembodied voice in that it may not be audible at the time; it is recorded directly into the circuitry of the recorder. Synchronized audio can show that a voice was not audible in the room and is therefore an electronic phenomenon.
How to Synch Audio?
I hope to write a separate blog post detailing the specifics of how to synchronize streams, but I will give a broad overview here. It really only consists of two parts.
First, you need a loud noise to synchronize recorders that are spread out in various locations. It has to be loud enough to be heard by all recorders, and ideally it should create a spike on the waveform representation of the audio in your editing program. Once every audio recorder is started, a loud clap or bang of some sort is made.
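To make the spike idea concrete, here is a minimal sketch of how you might locate the clap in a recording programmatically. It assumes each recording has already been loaded as a mono numpy array of samples (for example, via Python’s standard `wave` module), and it simply treats the loudest sample as the clap, which is the assumption behind making the sync noise as loud as possible. The `find_spike` function and the simulated track are illustrative, not part of any particular editing program.

```python
import numpy as np

def find_spike(samples, rate):
    """Return the time in seconds of the loudest sample,
    which we assume is the synchronization clap."""
    return int(np.argmax(np.abs(samples))) / rate

# Hypothetical example: five seconds of silence with a
# simulated clap two seconds in.
rate = 8000
track = np.zeros(rate * 5)
track[rate * 2] = 1.0
print(find_spike(track, rate))  # prints 2.0
```

In a real recording you would look for the largest spike rather than an isolated sample, but the principle is the same: the clap stands out on the waveform, so it can be found automatically.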
Second, you need an audio editing program that allows you to offset tracks in the timeline. You could have a person standing at each recorder, all pressing record at exactly the same moment, but this isn’t practical, so one recording will usually start before another. Insert the first recording to have started into your editing program, stagger the second after it, and so on. Line up the clap or bang in every track, and the tracks are synched.
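The staggering step above can be sketched in code as well. This is a simplified illustration, assuming two mono numpy arrays recorded at the same sample rate: it finds the clap (again taken to be the loudest sample) in each track, then pads the later-started track with leading silence so the claps line up, which is exactly what offsetting a track in an editor’s timeline accomplishes. The `align` function is hypothetical, not a real editor feature.

```python
import numpy as np

def align(a, b):
    """Pad the later-started track with leading silence so the
    clap (the loudest sample) lands at the same index in both."""
    spike_a = int(np.argmax(np.abs(a)))
    spike_b = int(np.argmax(np.abs(b)))
    if spike_a > spike_b:
        # b started recording later, so shift b right
        b = np.concatenate([np.zeros(spike_a - spike_b), b])
    elif spike_b > spike_a:
        # a started recording later, so shift a right
        a = np.concatenate([np.zeros(spike_b - spike_a), a])
    return a, b

# Two toy tracks whose clap lands at different points because
# one recorder was started earlier than the other.
a = np.zeros(100); a[60] = 1.0
b = np.zeros(100); b[20] = 1.0
a2, b2 = align(a, b)
print(np.argmax(np.abs(a2)), np.argmax(np.abs(b2)))  # prints 60 60
```

Once the claps are aligned, anything heard at a given moment on one track can be compared against the same moment on every other track, which is the whole point of the technique.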
Yes, it takes some work, but yes, it’s worth it. I will post more blog entries expanding on this subject later, as well as offer an incredibly immersive alternate use for this technique.