Sunday, July 8, 2018

Mastering - Why Not to

*************

Mastering – Why Not to

“Mastering degrades the timbres, and is nothing but ‘Loud-Garbage-Out’.”

There’s a term ‘GIGO’ in mastering parlance, used to describe audio files that arrive with no headroom for a mastering engineer to work with; hence, Garbage-In-Garbage-Out. But today, much to the chagrin, I am sure, of those very sound engineers who have made so much money destroying the music in music, I make the statement in the headline. Now let me assure you: like every big statement I have ever made, I will present facts in the light of common sense to support it.

A quick tip:

Before I start on the nitty-gritty of what digital sound, sample rates, bit depth, headroom, and mastering work truly involve, let me point you to an article about mastering that I did a few years back. There I explained what plugins you may want to use when mastering a track, and why. Now add this tip to that information:

“If you render out a sound file with ample headroom, and import it into a new project in your DAW, then you can layer the file any number of times to raise the volume of the final product, apply effects like reverb to the master chain or any other chain, or in general, master individual chains by filtering out and working on very specific frequency ranges.”


Let me add to the above tip that I have tried this method myself on more than one occasion, and achieved exactly the same sound as any big commercial release, yet I have not released even a single song with that sound. I just didn’t like it! Call it a musician’s ego if you want to, but there were good reasons (details below).
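
To put a rough number on that tip, here is a minimal Python (numpy) sketch; the “render” below is a synthesized stand-in for a mix exported with ample headroom, and every figure in it is made up purely for illustration. In a real DAW you would import the rendered file instead.

import numpy as np

# A stand-in for a one-second render exported with plenty of headroom.
sr = 48000
t = np.arange(sr) / sr
render = 0.35 * np.sin(2 * np.pi * 220 * t)   # peaks well below full scale (1.0)

# "Layering": two identical copies summed together double the amplitude.
layered = render + render
print("peak before: %.2f  after layering: %.2f"
      % (np.max(np.abs(render)), np.max(np.abs(layered))))
# 0.35 -> 0.70: the result is louder, and only because the export left room to grow;
# without that headroom the sum would have passed 1.0 and clipped.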

Getting your head around headroom:

Mastering needs 3-6 decibels of headroom for the engineers to work their magic. But what really is this headroom; is it just turning the volume of the mix (or of individual tracks) down? And at a more basic level, what is ‘Digital Sound’ really?
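
Before going further, here is a tiny Python (numpy) sketch of what those decibel figures mean; the sample values are invented for illustration. Headroom is simply the gap, in dB, between the loudest sample of the mix and digital full scale (0 dBFS).

import numpy as np

def headroom_db(samples):
    # Distance, in dB, between the mix's loudest sample and full scale (1.0).
    return -20.0 * np.log10(np.max(np.abs(samples)))

demo_mix = np.array([0.10, -0.45, 0.50, -0.22])        # made-up sample values
print("%.1f dB of headroom" % headroom_db(demo_mix))   # a 0.5 peak leaves about 6 dB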

1.    What is Sound:
Any sound, from any source or instrument, is a mixture of audible frequencies, scattered all over the spectrum from the low end towards the high end. A few peak frequencies give a source its distinct voice. Try a low-pass filter on hi-hats, and see how far down you can drop the cutoff before you stop hearing any sound at all. Try it on a piano or a pad! You will see how each sound is a mix of many wavelengths and frequencies, each adding its own colour to the overall timbre of the sound. Now consider a sound mix, like a song, where many sounds have been mixed together to create one sound. Do you think, with all their frequency ranges intertwined, you can really separate the instruments completely using any filters, equalizers, or the like?

2.    Analog Sound:
An analog sound recording is a continuous record of all the wavelengths and frequencies that make up a sound, along with their amplitudes, much like a line graph. When recorded on a medium like tape, it is played back as one flowing, continuous signal.

3.    Digital Sound:
Digital sound, for example the industry-standard PCM wave, is by contrast a disjointed record of the sound; that is, it is made up of discrete chunks of data. From afar it appears like a line graph, but a closer inspection reveals its dotted make-up; the quantization (see below).

4.    The chunks of digital data:
Consider, for example, a one-second-long sound made up of 16 different wavelengths. This sound’s analog record would include all the values that those 16 wavelengths assume, as part of their rising and falling amplitudes, over the course of one second. However, digital data is recorded in bits and bytes, which are not the same as a continuous line drawn on a piece of paper. Digital sound is recorded by breaking that continuous one second of sound data into many smaller chunks; for example, 44,100 chunks of a single second (44.1 kHz), or 48,000 chunks of that second (48 kHz). These chunks are called samples, and hence the ‘sample rate’ defines how many samples per second a wave file contains.

Then of course, these chunks themselves hold chunks of data; in our current example, all 16 wavelengths will be present as 16 small pieces, each with a range of data, albeit a much smaller range. If you chop a piece of pipe into smaller pieces, each piece still has two ends, and water still takes time to pass from one end to the other. But a digital sound can only be recorded using a set number of data points, with each point stating a precise value; for example, each sample can be described using 16 (our current example), 24, or 32 markers, with each marker holding a single precise value (and not a range of values) for that particular sample. The number of markers used to describe the details of a sound sample represents the depth of detail; the bit depth.

Thus a one-second sound chopped into 44,100 pieces will be represented by that many blocks of data, with each block’s data encoded using 16, 24, or 32 markers. But what does each one of these markers really represent?

5.    The data markers:
In our example, let us assume that each marker is associated with one of the 16 waves. Each marker will then represent the average amplitude of that wave’s piece within that sample, so the range between the highest and lowest values is lost in favour of one single average value. What does this mean? It means the analog wave that looked like a continuous line gets converted into a dotted line in the digital world, one that looks continuous from a distance but dotted up close. This is what is described as quantization error, as the intervening values are lost in favour of a single average value (a small numerical sketch follows this list). In practice, sampling rates of 50-60 kHz yield results that simulate an analog sound as far as human hearing is concerned; anything beyond that is undetectable overkill. And that is why 48 kHz (more continuity in sound) is a good sample rate to render at, with a 24-bit depth (more headroom).
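
For those who prefer numbers, here is a minimal Python (numpy) sketch of sample rate and quantization error, framed in plain PCM terms (one amplitude value per sample, rounded to the nearest of 2^bits levels); the tone and the exact figures are made up for illustration.

import numpy as np

sr = 44100                                  # 44.1 kHz: 44,100 sample instants per second
t = np.arange(sr) / sr
wave = 0.5 * np.sin(2 * np.pi * 440 * t)    # a smooth 440 Hz tone, read off at those instants

def quantize(x, bits):
    # Round each sample to the nearest of 2**bits evenly spaced levels.
    levels = 2 ** (bits - 1)
    return np.round(x * levels) / levels

for bits in (8, 16, 24):
    err = wave - quantize(wave, bits)
    print("%2d-bit: worst-case rounding error %.2e" % (bits, np.max(np.abs(err))))
# Every extra bit halves the worst-case rounding error; that shrinking error is the
# quantization error described above, and it is why a 24-bit render is the safer choice.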

But what has all this got to do with headroom, clipping, and mastering? Or wait; what did I just explain up there?

1.    The working of bit depth:
Suppose you have 16 one-litre bowls to fill (16 bits of information per sample), and a bucket full of water (a song made of many sounds). You pour water into each of them to different levels; this is what your 16-bit wave sample would look like. Imagine each of those bowls represents a bit attached to a section of the audible frequency range, say bowl one to 20-100 Hz, bowl two to 100-300 Hz, and so on and so forth. When you are creating a song, each instrument adds its sound (or water, in our example) to more than one bowl, because each sound is a mixture of many wavelengths. The bowl that overflows loses water: the amplitude of a frequency section that goes beyond what the system is capable of mapping (generally 0 dB at the top) loses its true value, the average information that does get recorded is inaccurate, and the result sounds like distortion, called clipping (besides, the average value of extreme variations is generally inaccurate information anyway). Remember how, when you are producing a song with all sounds peaking well below the 0 dB level, you add one sound that goes way over the charts and the master chain starts registering clipping? Yes, that is one bowl clearly full beyond full, and that overflowing bowl is what the master mix records as an error (a small sketch after this list shows the same thing in numbers).

2.    Dealing with near full bits:
So when you send in a song with all the bowls full, or nearly full, there is nothing left for the engineers to add to that sound (of course, what they add is what I despise, as described below). When a mastering engineer needs, say, to jack up the midrange, and that range’s bowl is already full, they have no choice but to empty some of it to create space for the jacking up (layering and filtering tricks can help here). But of course, as you already know by now, every frequency segment contains frequencies from more than one sound source. So you end up with one sound getting improved (allegedly), while many others take the hit. And this is why it is called GIGO!

If, on the contrary, you give them plenty of room in the bits to work with, the mastering engineers can really jack up the levels that need to be jacked up, without destroying anything else (allegedly, once more).
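
Here is a small Python (numpy) sketch of the overflowing bowl; the two “tracks” and their levels are made up for illustration. Each track fits on its own, but their sum passes full scale, and a 0 dBFS ceiling simply chops the peaks off, unless the whole mix is turned down to leave headroom.

import numpy as np

sr = 48000
t = np.arange(sr) / sr
bass = 0.7 * np.sin(2 * np.pi * 80 * t)     # a loud low-end part
lead = 0.6 * np.sin(2 * np.pi * 400 * t)    # a loud lead part sharing the same downbeats

mix = bass + lead                           # the peaks now reach 1.3, beyond full scale
clipped = np.clip(mix, -1.0, 1.0)           # what a 0 dBFS ceiling does to those peaks
print("samples lost to clipping: %d" % np.sum(clipped != mix))

# The usual fix: pull the whole mix down so the loudest peak sits 6 dB under full scale,
# leaving the mastering stage room to work instead of distorted averages.
headroom_mix = mix / np.max(np.abs(mix)) * 10 ** (-6 / 20)
print("peak after leaving 6 dB of headroom: %.2f" % np.max(np.abs(headroom_mix)))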

Putting timbres on timbre:

Of course, in a perfect world mastering engineers would be able to add or remove only the frequencies that need it, but unfortunately, in the real world these frequencies are, as already noted above, intertwined. This is the very reason mastering engineers also don’t want you to add any effects to the song you dispatch to them. Effects like reverb, flanging, chorus, and delays all add new frequencies to the sound, covering an even bigger range and complicating the maze. Remember: you are sending a mastering engineer a single complex sound, a song, not individual tracks that they could work on and then mix together into a gorgeous product for you to claim as your own artistic achievement. This is what makes mastering a well-intentioned job that produces a poor end product.

1.    The amazing sound timbres you create:
All the gorgeous sounds you create or mix while building a composition have some very particular dominant frequencies. It is the sound of these frequencies, and how they blend with the rest of their own non-dominant frequencies, that makes them appealing to you. Each sound may have one or two peak frequencies, but the rest are not simply absent; they are pushed out of prominence in a gradual way. Consider a wave that stays low for a long time, rises sharply, falls quickly, and finally fades out slowly again. That is what all sounds are; some just have a smaller footprint on one or the other side of their peak.

2.    The timbres that mastering adds:
When a mastering engineer is working with a mix of sounds, they may limit their impact to a particular frequency range, with the intention of improving a sound that peaks in that range. Unfortunately, however, whatever they do to that range will also impact other sounds that may peak in other ranges but have some of their limbs in the affected range. Suddenly the overall character of those sounds changes. Suddenly the hi-hats may start sounding trashier even if you had tuned them to sound soft and rounded. The kick may start sounding duller, even if flatter. The bells may become rough, and the whistles and pads might become heavier or shriller. The list is endless, and now you know why I haven’t released any of my songs with a commercially mastered sound, in spite of my understanding of the subject matter.
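
To see that in rough numbers, here is a Python (numpy) sketch; the two sounds and the crude FFT-domain band boost standing in for a mastering equalizer are all made up for illustration. Because an equalizer on the finished mix is linear, boosting a band boosts every instrument’s content inside that band, so applying the same boost to each track separately shows exactly who gets touched.

import numpy as np

sr = 48000
t = np.arange(sr) / sr
hat = 0.2 * np.sin(2 * np.pi * 8000 * t)        # a sound peaking up high
pad = (0.5 * np.sin(2 * np.pi * 300 * t)
       + 0.1 * np.sin(2 * np.pi * 8200 * t))    # peaks low, with a small limb up high

def boost_band(x, lo, hi, gain):
    # Multiply all frequency content between lo and hi (in Hz) by gain.
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / sr)
    spec[(freqs >= lo) & (freqs <= hi)] *= gain
    return np.fft.irfft(spec, len(x))

for name, track in (("hat", hat), ("pad", pad)):
    before = np.sqrt(np.mean(track ** 2))
    after = np.sqrt(np.mean(boost_band(track, 7000, 9000, 2.0) ** 2))
    print("%s change: %+.1f dB" % (name, 20 * np.log10(after / before)))
# The hat gets the intended lift, but the pad shifts too: its small high-frequency limb
# has doubled even though its peak lives near 300 Hz, and that reads as a change of
# timbre, not just of level.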

This, my dear friend, is what I call LGO. You lose the charm of the sounds that you create, only to get a louder product. So, do you really need mastering for your compositions? Well, that question is better answered by asking yourself these:
a)    Can you increase the volume of your song enough by layering some elements of the song (experience will teach you which ones)?
b)    Are there really any sounds that matter so much to you that you would rather not master the track?
c)    Would you rather have a commercial sound than pamper your artistic ego?
Answer these for yourself and you can figure out whether you need mastering in a particular instance, or not.

Fatal Urge Carefree Kiss

*************

Thursday, July 5, 2018

The Art of Compositions

*************

The Art of Compositions

“A Composition is a home in which many a song may live.”

A Composition is a construct, a complete building, in which more than one song may reside at a time; a perfect example would be ‘Antakshri’, the famous song from the all-time hit Bollywood movie ‘Maine Pyar Kiya’. This particular composition is a house in which scores of older Bollywood tunes reside as if they were always a part of it, and yet it is all but one musical composition. To put the point precisely:

“When one is composing a piece of music, they are constructing a structure of which a song may be but a small part. There could also be an introductory prelude, intermediary phases, independent melodies, and a closing postlude to complete the shape.”

Of course, the simplest of compositions are all but one song long. But what really goes into making such simple constructs is far from simple.

For ease of explanation, a simple composition can be broken down into two parts: melody and accompaniment. Be sure, however, that this barely scratches the surface of what really goes into making one. The nuances of composing turn into a maze when one starts dabbling in rhythms, melodies, lyrics, accompaniments, riffs, vamps, hooks, and ostinati. What do all these really mean for a composer, and how do they fit in with each other?

There are three ways a composition may come to life: from lyrics, from a melody, or from a rhythm. There is no hard and fast rule as to which method is better than the others, yet individual composers have their own preferences. A good composer, however, can deploy any one of these three methods according to their needs, and create a respectable work. So to be a good composer one needs to know what these three most important parts of a composition mean, before even considering their relationship with everything else that can go into making a complete composition.

1.    Lyrics: Generally a song is born from an idea or a phrase, with the lyricist filling in all the gaps it leaves behind; that is, creating the body of the song. Lyrics can also be written by converting the notes of a melody into words, or by lining phrases up against a rhythm. I generally start with a phrase, or even less; I have written an entire song from the word(s) ‘Time-Machine’, just by staying true to all the meanings it could convey, and then developing one meaning into a story of its own. In a song, the lyrics represent the melody part of the song, and they may or may not be supported by an instrumental melody in the composition. Compositions, however, can also exist without any lyrics at all, and even without melody as such; rhythm and accompaniment may suffice to create a composition.
2.    Melody: A melody, like lyrics, can be the sole source of an entire song, or play a supporting role. A melody can thus be a complete idea, that is, the lyrics or meaning of the song; but it could also be a musical representation of the lyrics, a complementary piece, or a contrast. When lyrics are converted into notes, the melody is merely a musical representation of the words. When a melody is complementary in nature, it could complement either the lyrics or the accompaniment; that is, it is a musical phrase that copies neither, but goes well with either. When a melody is in contrast, then of course, the intention is to heighten the emotions identifiable from the lyrics. As a general rule, however, a melody cannot be in contrast to the rhythm, simply because the rhythm is there to lower or enhance the feeling, not to identify an emotion. Such contrasting melodies are used a lot in electronic dance music, where the lyrics flow smoothly until the contrasting melody is introduced over a vibrant rhythm.
3.    Rhythm: The single most important aspect of a composition is its rhythm. The feeling that a composition generates is really felt through its rhythm, while the attached emotions are identified by the lyrics and melodies; the latter duo are thus there only to give a name to the experience, or to heighten its perception. For example, a pulsating beat gives one the feeling of excitement, while lyrics and melodies can help associate that feeling with fear, energy, happiness, anger, or something else. Even a piece with no percussion or drums has a rhythm to it. Rhythm is thus the ‘felt’ part of the music, while lyrics and melodies are the ‘perceived’ part. In fact, a different rhythm can give a very different feel to the same lyrics or melody.

A good composer can start with any of the above three aspects of a composition, but once the actual construction begins, the first element to be put in place happens to be the rhythm.

1.    Creating the Rhythm: The very first considerations to be dealt with are tempo, scale, syncopation, and note patterns; generally in that order. When the method deployed is ‘Rhythm to Composition’, these issues are merely a matter of choice for the composer, but when the method deployed is one of the other two, the answers are provided by the lyrics or melodies themselves. Consider the rhythm I used in my song ‘Time-Machine’; a simple ‘di-chi dhee-chi dhee-chi rest, dheeee-cha-chaa’. This rhythm is easily decipherable from the way the lyrics are sung. I may have started with the lyrics, but converting those lyrics into a beat was the first step in getting them feeling groovy, and in giving the composition a starting framework to build upon. A simple kick drum and snare can generate that rhythm, but that rhythm feels empty. Hi-hats can surely fill some of the gaps, and rests too are an integral part of the composition, for they enhance the meaning and perception of melodies and rhythms, but the rhythm remains empty without other elements; together referred to as accompaniment (a small step-grid sketch follows this list).
2.    Bringing in the Accompaniment: Chords, percussion, pads, bassline, and chorus are the elements that build upon the rhythm to create a complete structure.
a)    Chords: The first element that adds weight to the rhythm is a chord pattern (or a note pattern, or a mix of the two in some instances). A soft-sounding instrument, like a piano played at pianissimo, or a wind instrument, like a saxophone (as in the case of ‘Time-Machine’), is added either to complement the basic rhythm or to copy it. For example, in House music a 4/4 chord pattern on the beat copies the beat, while off-beat it complements it. In ‘Time-Machine’ the saxophone sound both complements and copies the rhythm.
b)    Percussion: Shakers, claps, finger snaps, cowbells, and the like can help fill some gaps by complementing the rhythm, or by copying it like the chords above; an example would be the use of 4/4 claps in House music.
c)    Pads: These are generally used to smooth out the rhythm, so that the gaps in the rhythm, chords, and percussion pattern don’t sound jarring. But they can also serve an important role in heightening the perception of lyrics, melodies, and rhythms, by adding a supporting voice underneath.
d)    Bassline: The bassline generally gives a deeper feel to the rhythm by tying together all the different sounds making up the rhythm and accompaniment. It generally does so by imitating the combined sound generated by the rhythm and the accompaniment, but can on occasion do so by adding a complementary touch. Occasionally it may assume an entirely different role in addition to this, which is discussed below.
e)    Chorus: Chorus voices, vocals or instrumentals, are generally there to fill the gaps in both rhythm and lyrics or melodies, and in the process, add to the character of the composition. They can copy rhythm, copy lyrics or melodies, or assume their own distinct identity.
3.    Creating the Identity: While a rhythm supported by an accompaniment is enough to create a composition, without the need of lyrics or melodies, making it last longer in the memory of its audience requires elements that exist within the realm of the latter duo. These elements often become the defining features of particular compositions, and audiences remember and identify those compositions through them. The best classified amongst these elements are hooks, riffs, vamps, and ostinati.
a.    Hooks: Hooks are the catch-lines of a composition; the sounds that hook a listener to a tune, much like a fishing hook. These could be melodies, like the complementing bassline-driven melodies of the songs ‘Satisfaction’ and ‘Like a G6’, or the flute from the song ‘Dil Deewana’ from the Bollywood movie ‘Maine Pyar Kiya’. They could also be vocal phrases, like the word ‘Panda’ from the song ‘Panda’ by ‘Desiigner’. Their sole purpose is to give the composition the distinct identity that hooks the listener to the tune.
b.    Riffs: Riffs are musical phrases that are generally used to transition from one part of the composition to another, one melody to another, or one phrase to another. They serve to signal a change. A good example would be the sounds of violins, keypads, or pianos that are generally used when one lyrical line ends and before the next one starts, or the sounds used when the verse ends and the chorus is about to begin, or when the interlude ends and a new verse is about to begin. In effect, they usher in what follows. The distinct chord sound used in ‘Empire State of Mind’ serves as a riff each time Alicia Keys sings. In the Bollywood song ‘Dil Deewana’, mentioned above, each lyrical line is dotted by a flute riff, and then finally by a violin chorus. The tingling sound that I used in my song ‘Bankrupt in Love’ is another example. But that is not the end of the fun with riffs. Riffs can sometimes take up the role of hooks, and even drive an entire composition. Examples would include Led Zeppelin’s song ‘Whole lotta love’, or, from India, a very recent Punjabi song, ‘Hathyar Varga’. Such songs have riffs that act as hooks, as well as drive the entire song. They are riffs, and not hooks or vamps, because they are very short musical phrases which, in their normal usage, would have served as transitional phrases, as described above.
c.    Vamps: Vamps are continuously repeating phrases that occur at the beginning of a musical composition. Vamps were designed to give the lead vocalist time to get ready and join in. In electronic music, the contemporary equivalent of the vamp is the loop. When you play an unedited House music track, the looping music at its beginning, whose purpose is to assist live performing DJs in beat mixing and matching, is serving the function of a vamp. The guitar and bass riff that opens Bon Jovi’s epic song ‘It’s my life’ could very well be used as a vamp in a live performance of that song. I personally use a modified version of vamps, the long preludes, to introduce most of my songs. These are vamps because they ready the ground for the real song to kick in, but modified because they generally play a melody or a hook, and only once.
d.    Ostinati: In classical music, an ostinato (plural: ostinati) is a phrase that persistently repeats itself in the song, generally without changing in pitch and without variation. Both riffs and vamps are thus modern contemporaries of the classical ostinati, but have assumed specific roles. Ostinati, meanwhile, occur throughout the length of the song, and can be present as a prime melody or a prominent accompaniment component. Donna Summer’s ‘I feel love’ is an ostinato-driven track. The flute hook of the song ‘Dil Deewana’, mentioned above, actually covers three roles in that song; it is the hook of that song, it has been used as a riff too, and it could also be considered the ostinato of that song.
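
As promised under ‘Creating the Rhythm’, here is a small step-grid sketch in Python; the 16-step pattern is a hypothetical illustration, not the actual ‘Time-Machine’ rhythm. It simply shows how a kick-and-snare skeleton is laid out on a grid, and how much emptier the same bar reads once the hi-hats are taken away.

# A hypothetical 16-step bar: 'x' marks a hit, '.' marks a rest.
tracks = {
    "kick":  "x.......x.......",   # kick on beats 1 and 3
    "snare": "....x.......x...",   # backbeat on 2 and 4
    "hihat": "x.x.x.x.x.x.x.x.",   # steady eighth notes filling the gaps
}

def print_pattern(tracks, steps=16):
    header = " ".join(str(i % 4 + 1) for i in range(steps))
    print("       " + header)
    for name, pattern in tracks.items():
        print(name.ljust(5) + "  " + " ".join(pattern))

print_pattern(tracks)
print_pattern({k: v for k, v in tracks.items() if k != "hihat"})   # the same bar, noticeably emptier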

A good composer, like a good soldier, knows the strengths and weaknesses of the weapons at his disposal, but a brilliant one knows how to improvise. What sets people like the Indian maestro A.R. Rahman apart from the crowd is their ability to create magic out of nothing; Rahman’s use of the sound a train makes while running on a track as part of the melody, as part of the hook, of his song ‘Chaiya chaiya’ is a great example of modern compositional improvisation. Rhythms can themselves generate interesting melodies. What limits a musician is their imagination, and an artist with limited imagination is no artist at all.

Happy composing!

Fatal Urge Carefree Kiss

*************