Mark Radu, the man with the golden ears, was our live sound engineer for the pilot show. On the night, he created a full, rich, immediate sound. His mix became the temp track we edited to, and we immediately slapped it onto our iPods and laptops, where it has been in heavy rotation ever since.

His brief bio on the website of PA-Plus, the company he works for, states that he is known for going the extra mile, and this was certainly our experience of Mark. The list of artists he has engineered live sound for is staggering, ranging from AC/DC to Willie Nelson, with stops along the way for Justin Timberlake, Destiny’s Child, Prince, Rufus Wainwright – to name just a few from the incomplete list of 261 names he’s still piecing together.

Mark kindly unpricked his ears and took a few minutes off from knob-twiddling to answer our questions about creating the great live sound for The Side Street Project.

The day seemed particularly challenging for you: three sets of musicians playing in rotation, sometimes collaborating, and performing songs that, for the most part, you hadn’t heard before – yet you produced excellent live sound. What was your approach to capturing the sound? Where did you begin?

I managed to listen to a few tracks from a couple of the artists in the weeks leading up to the event. This, if nothing else, gave me a sense of what each artist was about and offered a bit of a feel for what I could expect on the day. Most larger tours spend weeks in pre-production. Everything is rehearsed, perfected and repeated night after night. Mixing a show like that soon becomes a cue-to-cue existence. When it came to mixing a unique event like this one, I just went back to the basics…sound reinforcement. Let the artist lead, just follow along and reinforce them in the mix as needed. I didn’t try to over-engineer anything, I just followed the instrumentation and the dynamics of the performance and helped out wherever I could, keeping things very raw and intimate sonically to match the visual atmosphere created by the producers…

It was fascinating watching your process of deciding where to place the audience PA – what were the factors that you had to figure out?

This was kind of a unique challenge. As mentioned above, the show was very intimate, with the audience completely surrounding the stage. Aside from the obvious criterion of covering the audience, I wanted to make sure that I could overlap the coverage of speakers from different points. This allows the audience to hear the mix from more than one reference point. Secondly, I had to ensure that all of the speaker elements could be properly time aligned with the stage even though they were aimed in different directions.

What does ‘time aligned’ mean?

Delaying the output of the nearer speaker so that its sound arrives at the listener at the same time as the sound from the farther one. The overlap in coverage allows for a more ambient style of listening environment, while properly time aligning the speaker system to the stage draws the audible focus of the audience to the performers as opposed to the speakers.
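The arithmetic behind time alignment is simple: the nearer speaker is delayed by the time sound takes to cover the extra distance from the farther source. Here is a minimal sketch in Python, assuming a speed of sound of roughly 343 m/s at room temperature; the distances and function name are hypothetical example values, not figures from Mark's setup.

```python
SPEED_OF_SOUND = 343.0  # metres per second, at roughly 20 degrees C

def alignment_delay_ms(near_distance_m, far_distance_m):
    """Delay (in milliseconds) to apply to the nearer speaker so its
    sound arrives at the listener together with the farther source."""
    extra_path = far_distance_m - near_distance_m  # extra metres the far sound travels
    return extra_path / SPEED_OF_SOUND * 1000.0

# Example: the stage is 20 m from a listener, and a fill speaker is 5 m away.
# The fill speaker is delayed by the travel time of the extra 15 m,
# about 44 ms, so the stage still reads as the apparent source.
print(round(alignment_delay_ms(5.0, 20.0), 1))
```

In practice a digital system processor applies these delays per output, but the underlying physics is exactly this distance-over-speed calculation.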

Any tips for budding sound folk out there?

Technology moves along pretty quickly. We’ve gone from analog devices to a vast array of digital devices, and one would be hard-pressed to do a large-scale event nowadays without a couple of notebook computers wired into the audio system. With this in mind, you have to remember that sound is still physics, pure and simple. No matter what new toys or gadgets come out to manipulate the way we process or produce sound, you can’t change the laws of physics. My best advice would be to get yourself a solid understanding of the physics of sound. This knowledge will never become outdated…

Thank you, Mark.