I was drinking my afternoon coffee on the terrace, safely hidden under a fatigued awning finished with black and white stripes. I reached for another sip and froze. A maelstrom of light glittered on the surface of the coffee as I lifted the cup. As I swirled it, the reflections would divide and distort, creating ever denser kaleidoscopic patterns. In that moment, I knew that I would soon recreate this effect and capture it ‘on tape’.
A couple of days later I rolled out the awning, filled a tray with water and hit the record button. The results were extraordinary. I fed the footage into Live, ran different vibration frequencies at different speeds, and mixed the results.
The audible clicks, resembling high-voltage electrical discharges, are actually birds chirping on the nearby plane tree while I was conducting this experiment on the terrace facing the Pyrenean landscape. I loved this sound from the moment I discovered it, and to my bewilderment, I only noticed it while listening back to the footage in the studio. I decided to enhance the clicks, intertwine them with the arrangement and keep them in the final mix. It is truly astonishing how our brains filter out the multitude of ambient sounds that build up the rich soundscapes we call silence.
Filtering out the noise
This phenomenon is caused by the hierarchical organisation of the stream of processes that flow through the brain. The prefrontal cortex, our ‘decision center’, is unaware of most of the processing the brain has to carry out on a daily basis. This arrangement not only gives us a chance to react appropriately in the face of an emergency, it also makes for a very efficient power-saving strategy. This way one doesn’t have to be bothered by the speed of hair growth or the frequency of intestinal contractions in the midst of a boar encounter in the woods.
Ambient sounds – the collection of air vibrations produced by an environment we are familiar with – make for a ‘safe soundscape’ of ‘background sounds’, and thus can be processed ‘subconsciously’, rarely reaching the prefrontal cortex. If they do reach it, it is likely that either our temporal lobe noticed something peculiar about them, or that we are musicians or field-recording artists who just won’t leave the faintest rustle unregistered. All the sounds that enter the brain through the ear canal, after being transformed into a neurochemical code, are directed by the central auditory pathways to the temporal lobe. This is where the filtering occurs: all incoming information is evaluated and ‘tagged’ either as sound that should get our immediate attention, or as irrelevant noise. Information tagged as significant goes on to the prefrontal cortex, and we suddenly become ‘aware’ of it.
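As a loose illustration – a toy sketch, not a model of real neural processing – the tag-and-forward idea can be expressed as an amplitude filter that keeps a running estimate of the ‘background’ and only forwards windows that stand out from it. Every name and number below is a hypothetical choice of mine:

```python
# Toy illustration of 'tag and forward': only events that stand out
# from the running background level get forwarded for 'attention'.
# This is an illustrative sketch, not a neuroscientific model.

def tag_salient(samples, window=4, ratio=3.0):
    """Return start indices of windows whose energy exceeds `ratio`
    times the running background-energy estimate."""
    background = None
    salient = []
    for i in range(0, len(samples) - window + 1, window):
        energy = sum(x * x for x in samples[i:i + window]) / window
        if background is None:
            background = energy  # the first window sets the baseline
        elif energy > ratio * background:
            salient.append(i)    # 'tagged' as worth immediate attention
        # slowly fold the new window into the background estimate
        background = 0.9 * background + 0.1 * energy
    return salient

# A steady quiet hum with one loud click around sample 8:
hum = [0.1, -0.1] * 6
hum[8:10] = [2.0, -2.0]
print(tag_salient(hum))  # only the click's window is reported
```

A steady hum raises the background estimate until it no longer registers at all – which is, roughly, why a distant highway disappears from awareness while a sudden click does not.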
By-product of evolution
It is most probably a by-product of evolution that we are able to make a clear distinction between relevant sounds, which might announce important events like approaching danger, and the background noise. Michio Kaku, in The Future of the Mind, compares the way the brain functions to the workings of a large corporation. In this analogy, which I find quite accurate, the prefrontal cortex would be the CEO of the company.
Most information is “subconscious”— that is, the CEO is blissfully unaware of the vast, complex information that is constantly flowing inside the bureaucracy. In fact, only a tiny amount of information finally reaches the desk of the CEO, who can be compared to the prefrontal cortex. The CEO just has to know information important enough to get his attention; otherwise, he would be paralyzed by an avalanche of extraneous information.¹
For us, it means the comfort of uninterrupted focus on whatever we are currently doing. We can forget about the humming of a distant highway, cicadas’ song or wind howling in the street and concentrate our attention on creating, playing sudoku or having a conversation.
When the patterns became repetitive, I decided it was time to modify the technique and introduce some variation in the frequency of the vibrations. I plugged in a loudspeaker, placed it cone-up with the tray safely taped on top, turned the volume up and generated some sound waves. The patterns suddenly gained complexity, bringing in an entirely new quality.
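For anyone who wants to reproduce the setup, here is a minimal sketch of one way to generate such test signals: a sine sweep written to a WAV file using only Python's standard library. The filename, frequency range and duration are my own hypothetical choices – the frequencies I actually used aren't documented here:

```python
import math
import struct
import wave

RATE = 44100  # samples per second

def sine_sweep(f_start, f_end, seconds):
    """Generate a linear sine sweep as a list of floats in [-1, 1]."""
    n = int(RATE * seconds)
    samples = []
    phase = 0.0
    for i in range(n):
        f = f_start + (f_end - f_start) * i / n  # current frequency
        phase += 2 * math.pi * f / RATE
        samples.append(math.sin(phase))
    return samples

def write_wav(path, samples):
    """Write the samples as a mono 16-bit PCM WAV file."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(RATE)
        w.writeframes(b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
            for s in samples
        ))

# e.g. sweep from 30 Hz to 120 Hz over ten seconds -- low frequencies
# move a speaker cone (and the water above it) visibly.
write_wav("sweep.wav", sine_sweep(30, 120, 10))
```

Sweeping the frequency slowly lets you watch the ripple patterns reorganise themselves as the drive signal changes, rather than jumping between fixed tones.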
At this stage, the basic principle of this setup could serve as a simplified model of how the ear processes auditory signals. In this analogy, the loudspeaker would be an external sound source, the thin tray bed a tympanic membrane, and the tray’s contents the fluid filling the cochlea deep inside the inner ear. The process would look like this: air vibrations produced by the sound source (the loudspeaker’s diaphragm) travel through the ear canal, are transformed into mechanical stimuli by the tympanic membrane (the tray bed), and pass through the fluid-filled cochlea as hydraulic energy. In our experimental audiovisual model, this is the moment we observe the ripples and rosettes on the liquid surface, as the acoustic waves set the tray bed in motion. At this point, our model ends. In the ear, however, the newly created hydraulic energy would then stimulate the cochlea’s hair cells, which in turn would fire a neurochemical signal and finally excite the auditory nerve.
The auditory part aside, the resulting images are truly mesmerising.
More to come soon.