How to make close mics when there were no close mics. Adventures in machine learning pt1


Kensal Clubdate & adventures in Machine Learning.

Kensal Clubdate was a funny one - I recorded it when I was an assistant in downtime at Hoxa when the studio was based in Kensal Rise/Queens Park. It was the first kit I made drum heads for and I made them WAY too thick - resulting in a totally unique drum sound.

My concept way back in 2014 was super organic drums with a multitude of sticks, pretty similar to how I was tracking stuff at the time. I also wanted to use the poshest of mics, and I put AKG C12As on the toms (a nod to Eric Valentine!).


They were great, except the channels on the console crapped out after the second kit and I didn't notice until after the session - totally my fault for doing everything by myself.

Unfortunately the studio was sold and turned into flats shortly thereafter and I later remade those drumheads properly... so none of that kit really exists in that form (I would quite like to record in the new Hoxa for v2 and have the option of both).

It's been totally fine - in the 4/5 years that instrument has been out, no one has complained about a lack of tom mics - but I've always been slightly disappointed that Kits 3-8 didn't have close tom mics like Kits 1 & 2.

So I started to wonder - could I create a close mic? Mic-wise, I had a 'wurst' mic positioned over the bass drum, sort of in between the toms, and I used a U47 as a mono overhead.


My initial experiment was just grafting the attack of the brush or mallet onto the working sustain of the close mic and concatenating the whole thing - it was alright, but who wants drum sounds that are alright? Not me. So I parked it.
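Just to make that idea concrete, here's roughly what the splice looks like in numpy/soundfile terms. This is only a sketch, not the exact code I used - the file names and fade lengths are made up, and it assumes mono files at the same sample rate.

```python
import numpy as np
import soundfile as sf

# Hypothetical files: a mallet hit captured on the overhead, and an existing
# close-mic stick hit. Both assumed mono, same sample rate, same length-ish.
attack, sr = sf.read("mallet_hit_overhead.wav")
sustain, _ = sf.read("stick_hit_close_mic.wav")

split = int(0.02 * sr)   # keep roughly the first 20 ms of the new attack
fade = int(0.005 * sr)   # 5 ms linear crossfade to hide the join

fade_out = np.linspace(1.0, 0.0, fade)
fade_in = 1.0 - fade_out

# Crossfade the tail of the grafted attack into the close mic's sustain,
# then concatenate the whole thing back together.
xfade = attack[split - fade:split] * fade_out + sustain[split - fade:split] * fade_in
graft = np.concatenate([attack[:split - fade], xfade, sustain[split:]])

sf.write("grafted_hit.wav", graft, sr)
```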

Recently I thought about machine learning - could I create a model that looks at the initial close mic recordings and cross-references them with both the wurst mic and overhead, to create a close mic approximation with the different sticks? Sort of crazy, right? And I think it kind of works...
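To give a feel for the shape of the idea - and this is a stripped-down illustration, not my actual model, with placeholder file names and a very naive per-bin linear fit - you train on a kit that does have the close mic, learn how the close mic relates to the wurst and overhead, then apply that mapping to the kits that only have the room mics.

```python
import numpy as np
import soundfile as sf
from scipy.signal import stft, istft

NFFT = 2048

def spec(path):
    """Read a mono file and return its complex STFT plus the sample rate."""
    x, sr = sf.read(path)
    _, _, Z = stft(x, fs=sr, nperseg=NFFT)
    return Z, sr

# Training material: a kit where the close tom mic actually exists.
# Assumes the three recordings are the same length and sample-aligned.
wurst, sr = spec("kit1_wurst.wav")
over, _ = spec("kit1_overhead.wav")
close, _ = spec("kit1_close_tom.wav")

# Per frequency bin, least-squares fit: close ~ a * wurst + b * overhead
n_bins = wurst.shape[0]
coeffs = np.zeros((n_bins, 2), dtype=complex)
for k in range(n_bins):
    A = np.stack([wurst[k], over[k]], axis=1)          # (frames, 2)
    coeffs[k] = np.linalg.lstsq(A, close[k], rcond=None)[0]

# Apply the learned mapping to a kit that never had a close tom mic.
wurst3, _ = spec("kit3_wurst.wav")
over3, _ = spec("kit3_overhead.wav")
est = coeffs[:, 0, None] * wurst3 + coeffs[:, 1, None] * over3
_, y = istft(est, fs=sr, nperseg=NFFT)
sf.write("kit3_close_tom_estimate.wav", y, sr)
```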


How?

I ended up using Python - numpy & scipy to work with the FFT, and the soundfile library to read and write the audio.

I'm still figuring it out. It feels a bit weird having to essentially write code when you want to change an EQ response, and to view stuff in a sort of sliced-FFT way or from a bit perspective. In a weird way it feels a bit more like working on analogue, as you can't see the waveform and how you're affecting it.
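To show what I mean by changing an EQ response in code rather than on a plugin - again just a toy example, the file name, centre frequency and gain are all made up:

```python
import numpy as np
import soundfile as sf

x, sr = sf.read("close_tom_estimate.wav")      # hypothetical mono file
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), d=1 / sr)

# A +3 dB bell around 120 Hz, drawn directly as a gain curve over the FFT bins
gain_db = 3.0 * np.exp(-((freqs - 120.0) / 80.0) ** 2)
X *= 10 ** (gain_db / 20.0)

sf.write("close_tom_estimate_eq.wav", np.fft.irfft(X, n=len(x)), sr)
```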


An important AI stance.

I shan't be using it for anything creative - content, videos, photos - I'm avoiding it with all my strength; there'll be no majetone chatbot anytime soon. No 'good news, try the new AI helper'. BUT in this context I think it's an exciting new frontier, doing something that wasn't totally possible before. Is this a disgusting idea? Is it fraudulent to create a model in this way for a sample instrument? Keen to hear any thoughts. I'm not totally sure this will make it to the final thing, but it struck me as an interesting new thing to get ahead of.

I've got a few ideas I'd like to try and experiment with to expand this further:

High on my list is to try and make a reverb plugin of that room, for no reason other than my own nostalgia. I took some crap impulses when I was there, but I think training a model to create the impulses from recorded material will be really great. I mean, it's a total tangent, but I'm really enjoying experimenting with it. I will post audio examples once I've refined it a bit...
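For what it's worth, the classic non-ML starting point for pulling an impulse out of recorded material is deconvolution - dividing the spectrum of the room recording by the spectrum of the dry source - and any model would effectively be a smarter version of that. A toy sketch with made-up file names, assuming you have the dry signal that was played into the room:

```python
import numpy as np
import soundfile as sf

dry, sr = sf.read("dry_source.wav")      # signal played into the room
wet, _ = sf.read("room_recording.wav")   # same signal captured in the room

n = max(len(dry), len(wet))
D = np.fft.rfft(dry, n)
W = np.fft.rfft(wet, n)

# Regularised spectral division: H = W * conj(D) / (|D|^2 + eps)
eps = 1e-8 * np.max(np.abs(D)) ** 2
H = W * np.conj(D) / (np.abs(D) ** 2 + eps)

ir = np.fft.irfft(H, n)[: sr * 3]        # keep roughly 3 s of estimated impulse
sf.write("estimated_impulse.wav", ir, sr)
```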

Thanks for reading!

James

