
Monday, November 16, 2015

Not exactly mix tips

Last month was a lot about mixing for me. After mixing Imago in late September, which was a very calculated and protracted process, I got into the exact opposite: swift, real-world-pace mixes, some for my own stuff but mostly for others’. In addition, I came across a number of “mix tips” articles online, with the ever-present debates and arguing over the finer points in the comments. So I thought I’d do my own, sort of, and I’m commandeering Die Wolke’s blog to do so.

This is not “the 6 best tips you’ll ever receive about mixing”. It’s not specifically beginner- or advanced-level advice, and it may or may not work for you. It’s simply a few short (or not so short) thoughts on mixing, based on how I feel right now. If you’re following my work, you’ll find countless examples where I contradict myself, and that’s fine: some concepts don’t always apply, other times exceptions are in order and, of course, mixing technique is a changing state for everyone. I’m moving forward as much as anyone else. So, without further ado…


1. Contrast


This is perhaps the most useful notion in mixing. So many times we all get drawn into those intricate, detailed mixes, where even a slight change makes them “unstick” or fall apart completely. The problem is that if a mix is that fragile, it’s extremely likely to break down just by being played on different speakers, and we don’t want that! Therefore, I’m at the sort of phase where I like big differences in dynamics and timbre. This means that if a certain part needs to be louder than another to make musical sense, I’ll make it significantly louder, not just a bit. Likewise, if something is meant to be brighter than something else, I’ll exaggerate that difference. Bearing in mind that slimmer mixes accommodate this approach more easily, you can end up with a mix that will sound very similar on a variety of speaker systems. That’s because the listener will perceive the relative differences easily enough. So it doesn’t really matter if a set of speakers is particularly band-limited: a large difference is much more likely to be heard and support the music. This method works better when not over-compressing or limiting the master, though…

The point of contrast is this: contrary to the publicity materials of, say, mic or console manufacturers, there’s no such thing as big sound. It’s all an illusion of sorts. Sound is big when heard in conjunction with another sound that is smaller. You want to make your vocal sound huge? Just make something else in the mix tiny by comparison: you’ve just established size as an acoustic variable. The same goes for brightness, as well as distance, which I’ll go into in detail in the next section. To clarify: I don’t mean to devalue high-end gear… actually I do own some myself and, like everyone else, would like to own more. Sure, part of getting a huge sound might be an appropriate mic and, say, a suitable compressor, EQ, etc. But those things in themselves will certainly not make anything sound big, fat, or anything else, for that matter.

[note: Certain low-end gear, though by no means all of it, exhibits some critical flaws. There is really no substitute for quality tools. That said, I dislike marketing hype. I do own a couple of items that are cheap yet, in my opinion, high end. They still don’t make things sound big, though they can help me do it]


2. Spatial separation


A related concept: contrast across the implied space, so to speak. We have all been guilty of over-elaborate spaces, detailed reverbs that sound “just right”, extremely careful placements and panning, myself perhaps most of all. The same contrast principle applies here, as the listener will virtually never listen in proper stereo, and all that “language” will be lost on her/him. What I try to do is define a few planes of distance: I’ll have my up-front sounds, my background sounds, maybe a middle layer. There is no fixed number of planes, but it should not be one (that would be a surefire way to a boring mix), and more than four would be pushing it. Three is a nice figure, I guess, but there are no hard rules here (even the one-plane “flat” depth will have its use at times).

Of course, placing elements front or back in the mix is not as simple as panning, and some knowledge of psychoacoustics is really helpful here. Without going very deep into it, I’ll mention that sounds are brighter and louder up close, while bigger and smoother in the distance (think perspective here, like in a photograph). Consider what kind of detail should be audible at your implied distance, and get your EQs and reverb responses to match that. Predelay in reverbs will be of some help. Compression even more so, primarily through transient shaping that affects timbre. It’s also worth noting that some situations are paradoxical: you can’t have a “big” sound up close… in reality it would be quite deafening.
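If you like to think in code, the “brighter and louder up close” idea can be sketched in a few lines. This is just an illustration, not a mixing tool: the gain law and cutoff mapping below are arbitrary numbers I picked for the example, and the filter is a bare one-pole lowpass standing in for proper EQ.

```python
import numpy as np

def push_back(signal, distance, sr=44100):
    """Crudely 'move' a sound away: attenuate it and roll off the highs.

    distance: 0.0 = up front, 1.0 = far background. The mappings here
    (up to -12 dB of gain, cutoff sliding from 16 kHz down to 3.2 kHz)
    are illustrative values, not any standard.
    """
    gain = 10 ** (-12 * distance / 20)       # quieter with distance
    cutoff = 16000 * (1 - 0.8 * distance)    # darker with distance
    a = np.exp(-2 * np.pi * cutoff / sr)     # one-pole lowpass coefficient
    out = np.empty_like(signal)
    y = 0.0
    for i, x in enumerate(signal):
        y = (1 - a) * x + a * y              # smooth (filter) the signal
        out[i] = gain * y                    # then scale it down
    return out
```

Run a dry sample through it at a few different `distance` values and the perspective shift is audible even with a model this crude; in a real mix you would reach for proper EQ, compression and reverb predelay instead.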

Of course, this is a well-known and (by some) loved effect, also known as the “in-your-face” situation, where you seek to create precisely this: a sound that is too big and too close up (hint: distortion, like cracking compressors, is a huge part of that, maybe because that’s how real ears would react in the physical world, before being torn apart by the sound pressure). Think of this as an exception, not the rule, and try to use it sparingly, maybe for a section and not an entire mix. It can work quite well in conjunction with our previous point regarding contrast. Listening fatigue will be a factor here, so keep it short, otherwise a listener will surely turn it down, annoyed by the assault on their auditory system. Or maybe I’m getting too old.

Back to the point: there is, of course, the left/right axis as well. This is easier to define in the foreground plane, less so in the background. There are many strategies regarding panning and all have their merits. One thing that I no longer like is tiny increments, like minute panning differences that create the illusion that sounds sit slightly off each other in the mix… that only works in the studio, acoustics and monitoring permitting! Actually, over the last few years I’ve become a fan of LCR, the system in use before pan pots were common, where you have only three available positions: left, right, or both (centre). It sounds crazy when you start but, by the end of the mix, using your background planes as “glue”, it starts to sound incredibly open. Stereo mic setups, like stereo room miking, really work with LCR: pan them hard and let the natural phase differences plug the horizontal holes. Of course, mono is also a perfectly viable method. It depends on the musical style as well. One thing to remember is that sounds will mix nicely if they sound individually articulated, as intended, both when panned together and when panned apart. So panning is not really a method for clarifying mixes; it’s there to help bring dimension and volume.
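For the curious, LCR is trivial to express in code, which is part of its charm: there is no pan pot to fiddle with, just a three-way choice. A small sketch (the -3 dB centre compensation is the usual equal-power convention, not something LCR itself mandates):

```python
import numpy as np

def lcr_pan(mono, position):
    """Pan a mono track LCR-style: only 'L', 'C', or 'R' are allowed.

    Returns an (N, 2) stereo array. The centre feeds both channels at
    -3 dB so a centred source sits at roughly the same loudness as a
    hard-panned one (equal-power convention).
    """
    if position not in ("L", "C", "R"):
        raise ValueError("LCR means exactly three choices: L, C, or R")
    mono = np.asarray(mono, dtype=float)
    if position == "L":
        return np.stack([mono, np.zeros_like(mono)], axis=1)
    if position == "R":
        return np.stack([np.zeros_like(mono), mono], axis=1)
    g = 10 ** (-3 / 20)                      # ~0.708, the -3 dB pan law
    return np.stack([g * mono, g * mono], axis=1)
```

The point of the `ValueError` is the whole discipline: a position like 0.3 simply does not exist in this system, so the minute-increment temptation is gone by construction.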


3. Monitoring limits


There’s a ton of material online on calibrating your monitoring system: everything from K-weighting, fixed SPL references, abstract tips like “mix loud” or “mix quiet” at such and such a time, a lot on Fletcher-Munson curves, and more. And all of that is helpful, and nice to know, truly. One little method that I use, when I want to make a critical adjustment, is to change the monitoring level. It doesn’t matter in which direction. Sometimes all you need is a change of perspective. So I move a critical fader, say the drum bus, then immediately drop the monitoring level by around 20 dB, and finish or evaluate my fader move there. I find it helps a lot; your mileage may vary.

Another common piece of good advice is the “know your speakers” deal. Nobody can argue with that, but I’d like to be a bit more specific: I like to know exactly how my speakers sound when pushed hard (by the content, not just by turning them up excessively). In other words, how does really loud music sound on them? How harsh do the tweeters get? How forward can the midrange be projected? Is there a point where the low end distorts somehow? These are not just responses to high playback volume, though that is part of it. It is also about how the speakers render the work of other engineers who strive to emulate loudness (remember, loudness is a subjective psychoacoustic quality: you can have a loud sound played back quietly, and it can still sound loud in comparison to something else in the mix… contrast, remember?). This helps when compressing stuff, I find: you know your limits, how hard you can push before it sounds bad. Personally, if I just go by my gut, I tend to mix a touch softer than I intend, whereas I could get away with being a bit more extreme, doubtless because the Focal Twins that I have sound very defined and sharp to begin with, so pushing them a lot can make them feel unnecessarily harsh. But, listening to other records, it turns out I can get away with a little bit of that sense of the midrange scratching your eardrum.


4. Mix buss compression


Quick poll: is it bus or buss? I don’t really care, to be honest; I use both spellings interchangeably. To the point: I love working with a mix buss compressor. It is one of those things that beginners are advised against, because it’s so easy to screw up, and is then forgotten along the way, creating an aura of fear or taboo around the subject. My recommendation: do it, and tread carefully. The first trick is to turn it on at the right time, which I find to be when all the tracks are cleaned up, mixed sort of neatly at appropriate relative levels, everything else routed, and any effects introduced. Compressing the mix buss at that point affords the opportunity to make up for any of its adverse effects in the mix, something that is not possible if you leave this up to the mastering engineer. So, the gist of it is to set it with respect to your loudest element, which you should keep relatively constant in amplitude from that point on, and then see how things change when you push other channels into the compressed mix. Chances are you’ll be pleasantly surprised, once you get the hang of it.

So what are the criteria for judging mix compression? Like everything else in audio, each case will be unique. But as a rule of thumb, light compression is much more common than heavy. Really light is more common still. So don’t look for huge reductions; these rarely help. What’s really important, as with any compressor, are the time constants: attack and release. Generally you want the release as fast as you can get away with, short of the low end becoming too modulated. That will depend almost exclusively on the material (tempo, instrumentation, and so on), so if you see any specific numbers online, disregard them! Deliberately slowing the release down further may cause pumping artefacts, which may or may not be desirable. If your compressor has auto release, try it: it is typically a behaviour of two simultaneous release circuits, resulting in a very smooth release profile that is often very appropriate, though by no means always. Notice: “auto” does not mean that the compressor makes the release decision for you! It’s just a multi-stage release that depends on the material it’s being fed.
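To demystify the “two simultaneous release circuits” remark, here’s a schematic model of an auto-release detector. To be clear, this is my illustration of the general idea, not the circuit of any particular compressor, and the time constants are made up for the example:

```python
import numpy as np

def auto_release_envelope(detector, sr=44100, attack_ms=1.0,
                          fast_ms=60.0, slow_ms=600.0):
    """Schematic 'auto release': two release stages running in parallel.

    `detector` is a rectified level signal. On rising input both stages
    charge with the same attack; on falling input each discharges with
    its own time constant, and the output is their average: a quick
    initial recovery that eases into a long tail. Time constants are
    illustrative, not taken from any real unit.
    """
    a_att = np.exp(-1.0 / (sr * attack_ms / 1000))
    a_fast = np.exp(-1.0 / (sr * fast_ms / 1000))
    a_slow = np.exp(-1.0 / (sr * slow_ms / 1000))
    fast = slow = 0.0
    out = np.empty_like(detector, dtype=float)
    for i, x in enumerate(detector):
        if x > fast:                          # charge (attack)
            fast = a_att * fast + (1 - a_att) * x
        else:                                 # discharge (fast release)
            fast = a_fast * fast
        if x > slow:
            slow = a_att * slow + (1 - a_att) * x
        else:                                 # discharge (slow release)
            slow = a_slow * slow
        out[i] = 0.5 * (fast + slow)
    return out
```

Feed it a burst followed by silence and you’ll see the signature shape: the envelope falls quickly at first, then the decay rate relaxes into a long tail, instead of following one fixed release slope.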

The attack, similarly, works a bit like a waveshaper: slower attack times will let the transients through and make the mix brighter and clearer; whether the mix warrants that is, of course, something to be determined on a case-by-case basis. Too-fast attack times might make the compression obvious. Again, set to taste. Don’t use the mix bus comp for gain reduction, i.e. to make the mix louder. There will be some of that, but it’s best to think of this tool as a “bonding agent” of sorts. If this is your first time, tread lightly!


5. The high pass conundrum


This is one of the most debated subjects in such mix-tip lists: many advise removing extraneous low end from almost all tracks using high-pass filters, and the more experienced engineers immediately argue against this in the comments. In truth, whether to use an HP filter will always need to be a critical decision. There are three reasons (that I can think of off the top of my head) why an HP filter might be a bad idea, even if it doesn’t appear that the track in question should have energy in those regions: (i) the filter itself will adversely impact the sound; (ii) in many circumstances, such as in stereo setups, there are subtle time-difference cues that can be ruined by even slightly improperly set HP filters; and (iii) sometimes energy below the fundamental of the source is modulated with the source, like the sound of air hitting the diaphragm, certain varieties of proximity effect, or intermodulation effects when multiple sources are being recorded, and these may actually be enhancing artefacts.

Discussing these phenomena in depth is really beyond the scope of this post. Suffice to say that no, you shouldn’t set an HP filter on everything but the kick and bass (assuming your music does have kick and bass…). Certainly don’t set it using a real-time analyser. You can try setting it, then listen to the sound both solo and in context, with any compression and EQ that you might apply, and try to determine which sounds better. Sure, low-end rumble and noise do eat some headroom. But it’s not the end of the world… sometimes a shelf filter with 2-3 dB of reduction might do a very good job, as can a peak filter set really low, with the Q parameter then doing the work. The difference here is the amount of reduction. Of course, these filters may sound objectionable too… Statistically speaking, tracks that are close up and prominent in the mix are less likely to be tampered with by HP filters (these sounds receive little EQ in general, if recorded properly, as they will often assume their natural perspective as picked up by, say, a close mic). Background sounds are more likely to be processed heavily, but there are very notable exceptions, like groups of acoustic instruments and the like, where filtering might sound undesirable.
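To put numbers on the difference between the two approaches, here’s a sketch comparing a conventional high-pass with a gentle low shelf, using scipy for the Butterworth and the well-known RBJ cookbook formulas for the shelf. The 80 Hz corner and -3 dB depth are example values only:

```python
import numpy as np
from scipy.signal import butter, sosfilt, lfilter

def highpass(x, f0=80.0, sr=44100):
    """Conventional 2nd-order Butterworth high-pass: steep removal."""
    sos = butter(2, f0, btype="highpass", fs=sr, output="sos")
    return sosfilt(sos, x)

def low_shelf(x, f0=80.0, gain_db=-3.0, sr=44100):
    """RBJ-cookbook low shelf (slope S = 1): a bounded cut, not removal."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / sr
    alpha = np.sin(w0) / 2 * np.sqrt(2)
    cos_w0 = np.cos(w0)
    b = np.array([
        A * ((A + 1) - (A - 1) * cos_w0 + 2 * np.sqrt(A) * alpha),
        2 * A * ((A - 1) - (A + 1) * cos_w0),
        A * ((A + 1) - (A - 1) * cos_w0 - 2 * np.sqrt(A) * alpha),
    ])
    a = np.array([
        (A + 1) + (A - 1) * cos_w0 + 2 * np.sqrt(A) * alpha,
        -2 * ((A - 1) + (A + 1) * cos_w0),
        (A + 1) + (A - 1) * cos_w0 - 2 * np.sqrt(A) * alpha,
    ])
    return lfilter(b / a[0], a / a[0], x)
```

On noise, the high-pass all but erases everything below its corner, while the shelf leaves the sub-80 Hz region intact but a few dB down: exactly the difference in the amount of reduction discussed above.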

Sometimes, some really subtle low-frequency “air” can serve to establish a reference point in the acoustic landscape; it can work as an anchor relative to which depth is perceived. Highpassing in this case will be audible as a generalised loss of fidelity, completely unrelated to frequency and timing issues: it will be pure psychoacoustics. None of the above means that HP filters are not useful tools, however: sometimes all that sits beneath the fundamental is just that: low-end noise! In such a case, highpassing can really improve the mix by clearing space for the low end of other tracks to sound through.


6. Fundamentals


Usually (but certainly not always), the fundamental frequency will be the loudest frequency of a sound. As such, it needs the most energy to be reproduced, and will thus eat away at the headroom before anything else. It is worth remembering that the frequency of the fundamental is the frequency that listeners will identify as pitch. Most have heard of the phenomenon of the “missing fundamental”, i.e. how one can “hear” the correct pitch even if the fundamental frequency is removed, such as in band-limited environments like a telephone. The implication here is that a fundamental can be cut, the pitch still perceived, and lots of headroom thus saved. This, of course, does not always work.
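The missing fundamental is easy to demonstrate numerically. The sketch below builds a 110 Hz harmonic tone with the fundamental deleted outright, then runs a crude autocorrelation pitch detector over it; the detector (like your ear) still lands on 110 Hz. The pitch, the choice of partials, and the search range are all arbitrary example values:

```python
import numpy as np

sr = 44100
f0 = 110.0                                   # the pitch we expect to "hear"
t = np.arange(4410) / sr                     # 0.1 s of signal

# a harmonic tone WITHOUT its fundamental: partials 2 through 6 only
tone = sum(np.sin(2 * np.pi * f0 * k * t) / k for k in range(2, 7))

# crude autocorrelation pitch detector, searching the 80-200 Hz register
ac = np.correlate(tone, tone, mode="full")[len(tone) - 1:]
lo, hi = int(sr / 200), int(sr / 80)
period = lo + np.argmax(ac[lo:hi])           # lag with the strongest self-similarity
detected = sr / period                       # lands near 110 Hz regardless
```

The spectrum confirms there is literally nothing at 110 Hz, yet the waveform still repeats every 1/110th of a second, which is what the ear latches onto.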

The principle is sound (pun intended): you can still identify the pitch, but the sound is nowhere near as nice as before. The observation, however, does open up a world of intermediate possibilities: how does subtly changing the relative amplitude of the fundamental, through EQ, affect the perceived quality of the sound? It turns out that, most of the time, there exists a sweet spot between the fundamental and the first few harmonics that will make the sound feel richer, and it usually involves cutting the fundamental by a few dB. Note that it’s the final proportion that’s important, not the cutting in itself: like all EQ, it will depend on the source! Having a mix with lighter fundamentals means having a mix with more harmonics, as well as some headroom to spare. Of course, in a real-world scenario most instruments will be playing a variety of notes, so you can’t just hit a specific frequency in a narrow-Q manner. There is some trial and error here, which involves finding a compromise between the right centre frequency and Q versus any undesired cutting further up the harmonic spectrum: the more the instrument moves, the harder this becomes. Still, even if done partially, this can be a powerful way to make thin mixes appear substantially fuller. Given that fundamentals reside in the lower part of the spectrum, this is counter-intuitive to many, who would try to boost the low end instead. The flaw in that reasoning is that thickness, fatness, richness, and so on are psychoacoustic impressions that are only indirectly affected by a signal’s spectrum; the internal harmonic relationship is a more important constituent (the other being transients and temporal behaviour, controlled and affected via compressors).

You can try this with shelving filters, an easy way of bringing the entire first octave up and down equally, but it’s equally likely that a good result can be obtained with a wide notch, using half of the bell curve to achieve a progressive sort of cut. Scan the entire part first and note down the range of pitches you want to affect: anything more than an octave and this technique becomes difficult, unless it’s fine for the effect to go away once the instrument moves to its second register (quite often the case).
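The warning that anything beyond an octave gets difficult can be made concrete with a little arithmetic: given the range of fundamentals a part covers, the standard bandwidth-to-Q relation tells you what a single bell would have to look like. A small sketch (the note names in the usage example are hypothetical, just to ground the numbers):

```python
import numpy as np

def fundamental_cut_params(f_low, f_high):
    """Centre frequency and Q for one bell spanning a range of fundamentals.

    The centre is the geometric mean of the range, and the Q follows the
    standard bandwidth-in-octaves relation Q = sqrt(2**n) / (2**n - 1).
    """
    fc = np.sqrt(f_low * f_high)       # geometric mean of the range
    n = np.log2(f_high / f_low)        # width of the range in octaves
    q = np.sqrt(2 ** n) / (2 ** n - 1)
    return fc, q
```

For a part living between A2 (110 Hz) and A3 (220 Hz), this gives a centre near 156 Hz and a Q of about 1.4, which is perfectly workable; stretch the range to two octaves and the Q drops to roughly 0.67, a cut so wide it starts eating the low harmonics too, which is exactly when the technique falls apart.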

An interesting effect can be heard if you also reduce a bit of the second harmonic (the octave of the fundamental)… the sound will thin out a lot, but will become more aggressive and forward (odd harmonics, like the now increasingly dominant 3rd, do that sometimes). However, it’s easy to overdo it, and then the sound becomes tinny and acquires a processed, fake quality. Alternatively, you can reduce just the area of the second harmonic and leave the fundamental alone. The perceived change in overtone balance is what’s behind the common tip to “cut the low mids to make things tight”. Getting a bit more involved in how your cuts translate to aural qualities, in terms of physics, will go a long way towards letting you control these effects. Again, since most instruments will play many notes over the length of a part, it is rare that you can implement a precise prediction, and trial and error (or simply acoustic feedback) is the indicated modus operandi, as opposed to math. So, the answer to all those questions like “should I cut the fundamentals?”, “by how much?”, “should I cut a hole in the low mids?”, and so on, really is just another question: what do you want to do? Why is the sound not working? If you can characterise the sound and its current fault, you can estimate a direction in terms of the physics of sound, and then experiment around that, to make the sound do what you want it to do.



I had a bunch more subjects in my list, but this is getting large enough as it is. Maybe I’ll do a part two at some point. Till then, mix away!