Sampling & Synthesis – Glossary

Frequency– Frequency is the number of wave cycles that occur in one second; in music we measure it in hertz (Hz) or kilohertz (kHz), also known as cycles per second. The faster a sound vibrates, the higher its frequency, and the higher its perceived pitch. Human hearing is limited to a certain range of frequencies, roughly 20 Hz to 20 kHz. Examples of low-frequency sounds include thunder, a bass drum, or a male voice; examples of high-frequency sounds include a bird's chirp, glass breaking, or a female voice. (Audio examples: a high-frequency tone, and a low-frequency sweep from 1 Hz to 35 Hz.)
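As a small illustration of the definition above, the period of a wave (seconds per cycle) is simply the inverse of its frequency. A minimal sketch in Python (the function name is my own):

```python
def period_seconds(frequency_hz):
    """Return the duration in seconds of one cycle at a given frequency."""
    return 1.0 / frequency_hz

# 20 Hz (a low rumble) vs 20 kHz (around the upper limit of human hearing)
low = period_seconds(20)       # 0.05 seconds per cycle
high = period_seconds(20_000)  # 0.00005 seconds per cycle
```

The higher-frequency tone completes each cycle far more quickly, which is exactly the "faster vibration, higher pitch" relationship described above.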


Wavelength– Wavelength is the distance between any point on a wave and the equivalent point on the next cycle; it defines the physical length of one cycle of the wave. The shorter the wavelength, the higher the frequency, which means the pitch will be higher than that of a longer wavelength. In a spectrum display, each vertical line from left to right represents a frequency, in hertz.
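The inverse relationship between wavelength and frequency follows from the speed of sound. A hedged sketch, assuming sound travels at roughly 343 m/s (air at about 20 °C); the function name is mine:

```python
SPEED_OF_SOUND = 343.0  # metres per second, in air at roughly 20 degrees C

def wavelength_m(frequency_hz):
    """Wavelength in metres: speed of sound divided by frequency."""
    return SPEED_OF_SOUND / frequency_hz

# Shorter wavelength -> higher frequency -> higher perceived pitch
deep_bass = wavelength_m(20)      # roughly 17 metres
upper_limit = wavelength_m(20_000)  # roughly 1.7 centimetres
```

This is why low-frequency sound wraps around obstacles so easily: its wavelength can be many metres long.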

Amplitude– The amplitude of a sound wave is a measure of how far it deviates from its resting value; peak-to-peak amplitude is the measurement of the change between the highest and lowest values. Amplitude is often understood as the volume of a sound, but it specifically refers to the degree of change in air pressure caused by a sound wave. In amplitude modulation (AM), when a signal is transmitted there are two waves. One is the carrier wave, which on its own remains at a constant amplitude and frequency. At the transmission stage, the other wave, or modulating signal, varies the carrier's amplitude according to the sound being transmitted. The degree to which the signal modulates the carrier wave determines the sound that comes out of the receiver: the modulated signal causes the speaker to move at different rates, and these variations in movement produce different sounds from the speaker.

Sound is perceived through the movement of air, so when a sound wave is generated by vibration, amplitude is the parameter used to measure its effect on the air. A louder sound will have a higher amplitude than a quieter one; however, this does not mean amplitude relates directly to perceived volume. Factors such as the environment and interference can make a high-amplitude sound less audible than a lower-amplitude one.
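The amplitude-modulation idea described above can be sketched in a few lines: a carrier of constant amplitude whose envelope is varied by a modulating signal. The frequencies and depth below are illustrative values, not taken from any real transmitter:

```python
import math

def am_sample(t, carrier_hz=1000.0, mod_hz=100.0, depth=0.5):
    """One sample of an amplitude-modulated signal at time t (seconds)."""
    carrier = math.sin(2 * math.pi * carrier_hz * t)
    modulator = math.sin(2 * math.pi * mod_hz * t)
    # The envelope (1 + depth * modulator) rides on the carrier's amplitude.
    return (1.0 + depth * modulator) * carrier

# 10 milliseconds of the modulated signal at a 44.1 kHz sample rate
samples = [am_sample(n / 44100) for n in range(441)]
```

The carrier alone never changes level; multiplying it by the slowly varying envelope is what encodes the programme signal into its amplitude.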


Fundamental– The lowest resonant frequency of an object is called its fundamental frequency. The fundamental is also called the first harmonic of the instrument or sound wave; it is the frequency at which the waveform completes one cycle. Each natural frequency that an object or instrument produces has its own characteristic vibrational mode, or standing-wave pattern. These patterns are created within the object or instrument only at specific frequencies of vibration, known as harmonic frequencies, or simply harmonics. At any frequency other than a harmonic frequency, the resulting disturbance of the medium is irregular and non-repeating, corresponding to non-integer multiples of the fundamental. For musical instruments and other objects that vibrate in a regular, periodic fashion, the harmonic frequencies are related to each other by simple whole-number ratios, or integers. This is part of why such instruments sound pleasant.
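The whole-number relationship between the fundamental and its harmonics can be made concrete with a tiny sketch (the function name and the 110 Hz example are my own choices):

```python
def harmonic_series(fundamental_hz, count=6):
    """The first `count` harmonics: integer multiples of the fundamental."""
    return [n * fundamental_hz for n in range(1, count + 1)]

# A 110 Hz fundamental, roughly the open A string of a guitar
series = harmonic_series(110)  # [110, 220, 330, 440, 550, 660]
```

Every entry is an exact whole-number multiple of 110 Hz, which is the "simple whole number ratios" property described above.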

Harmonics– One of the most noticeable differences between two sounds is the difference in pitch, and it is the frequency of the sound that mostly determines its pitch. When we hear a single note we are conscious of its pitch, but unconscious of much else that is going on, because a musical note consists of many different, individually unnoticeable frequencies known as harmonics. These harmonics all merge to form the impression of one pitch, the fundamental, while the rest blend in to reinforce and colour the main note. Pythagoras noticed that when a vibrating string was stopped halfway along its length, the pitch went up an octave. Stopped at a quarter of the length, it went another octave higher; at an eighth, another octave higher still. Stopped at one third of the length, the pitch was an octave plus a fifth above the fundamental. With frequency and harmonics, every time the frequency doubles the pitch goes up an octave. So when a string is plucked on a guitar, all of its harmonics vibrate simultaneously. Frozen at any one moment, the string exhibits a very complex wiggly shape that results from all of these simultaneous vibrations, and the result is one pitch with a particular tone colour. A plucked G string on a violin will not sound the same as a plucked G string on a viola or guitar, because they have different tone colours, determined by their harmonic series: the various harmonics are not all the same strength.
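Pythagoras' observation can be expressed numerically: for an idealised string, frequency is inversely proportional to the vibrating length. A minimal sketch under that assumption (the function name is mine):

```python
def frequency_ratio(length_fraction):
    """How many times higher the pitch is when only this fraction
    of an ideal string is left vibrating."""
    return 1.0 / length_fraction

half = frequency_ratio(1 / 2)     # 2.0 -> one octave up
quarter = frequency_ratio(1 / 4)  # 4.0 -> two octaves up
third = frequency_ratio(1 / 3)    # 3.0 -> an octave plus a fifth up
```

Halving the length doubles the frequency, and each further doubling of frequency adds another octave, matching the string-stopping experiments described above.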

Integer– An integer is a whole number; the harmonics of a sound occur at whole-number multiples of its fundamental frequency. The ear is sensitive to ratios of frequencies, rather than absolute differences, when distinguishing musical intervals. The intervals perceived as most consonant are composed of small integer ratios of frequency.

The octave, fifth, and fourth are the intervals which have been considered consonant throughout history by essentially all cultures, so they form a logical base for the building up of musical scales. A typical strategy for using these universally consonant intervals is the circle of fifths. A musical interval is a step up or down in pitch specified by the ratio of the frequencies involved. For instance, an octave is an interval defined by the ratio 2:1 regardless of its starting frequency: from 100 Hz to 200 Hz is an octave, as is the interval from 2000 Hz to 4000 Hz. The intervals that are generally most consonant to the human ear are those represented by smaller integer ratios. Intervals represented by exact integer ratios are said to be just intervals, and the temperament that keeps all intervals at exact whole-number ratios is just temperament.

Examples of just musical intervals:

  • 2:1 octave
  • 3:2 fifth
  • 4:3 fourth
  • 5:4 major third
  • 6:5 minor third
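These ratios can be applied directly to compute the frequency of any just interval above a starting pitch. A small sketch (dictionary and function names are my own):

```python
# Just intervals as exact integer frequency ratios (numerator, denominator)
JUST_INTERVALS = {
    "octave": (2, 1),
    "fifth": (3, 2),
    "fourth": (4, 3),
    "major third": (5, 4),
    "minor third": (6, 5),
}

def interval_above(frequency_hz, name):
    """Frequency of the named just interval above the given pitch."""
    num, den = JUST_INTERVALS[name]
    return frequency_hz * num / den

octave_above_100 = interval_above(100, "octave")  # 200.0 Hz
fifth_above_a440 = interval_above(440, "fifth")   # 660.0 Hz
```

Because the ratios are exact whole numbers, every interval computed this way is a just interval in the sense defined above.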

Integer Multiple– Integer multiples of the fundamental frequency above the fundamental are called overtones, whereas the fundamental frequency is itself a harmonic. Harmonics and overtones are both integer multiples of a fundamental frequency; an overtone is an integer multiple sitting above the fundamental, its frequency added on top of it, hence the name. The fundamental frequency is therefore the first harmonic, the integer multiple by one. Double the frequency, the integer multiple by two, is the second harmonic and the first overtone. Higher harmonics and overtones follow the same numbering pattern, so the nth overtone is the (n + 1)th harmonic.
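The numbering convention above can be captured in two one-line functions (names are mine, for illustration):

```python
def harmonic_frequency(fundamental_hz, harmonic_number):
    """The nth harmonic is the fundamental multiplied by n."""
    return fundamental_hz * harmonic_number

def overtone_frequency(fundamental_hz, overtone_number):
    """The nth overtone is the (n + 1)th harmonic."""
    return harmonic_frequency(fundamental_hz, overtone_number + 1)

# The first overtone and the second harmonic are the same frequency:
assert overtone_frequency(100, 1) == harmonic_frequency(100, 2)  # both 200 Hz
```

The off-by-one between the two numbering schemes is the whole content of the formula: overtone n sits at harmonic n + 1.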

Subtractive Synthesis– Subtractive synthesis is often referred to as analogue synthesis because this is how analogue synthesisers were used to make patches. The process of subtractive synthesis is very simple: oscillator – filter – amplifier. The sound is generated by the oscillator, frequencies are then subtracted with a filter, and finally the signal passes through the amp envelope. Subtractive synthesis means taking away elements of a wave to create a new wave or sound, and then controlling the loudness over time.

This picture shows which frequencies are blocked and which are allowed through the signal chain. This is the fundamental idea behind subtractive synthesis.
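The oscillator – filter – amplifier chain can be sketched end to end in a few lines. This is a minimal illustration, not a real synthesiser: the sawtooth is naive (not band-limited), the filter is a simple one-pole low-pass, and the coefficients are arbitrary choices of mine:

```python
import math

SAMPLE_RATE = 44100

def sawtooth(freq_hz, n_samples):
    # Oscillator: a naive sawtooth, rich in harmonics to subtract from.
    return [2.0 * ((n * freq_hz / SAMPLE_RATE) % 1.0) - 1.0
            for n in range(n_samples)]

def low_pass(samples, alpha=0.1):
    # Filter: a one-pole low-pass that attenuates high-frequency content.
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

def apply_envelope(samples, decay=5.0):
    # Amplifier: an exponentially decaying envelope shapes loudness over time.
    return [s * math.exp(-decay * n / SAMPLE_RATE)
            for n, s in enumerate(samples)]

# Oscillator -> filter -> amplifier, as in the signal chain above
patch = apply_envelope(low_pass(sawtooth(110.0, SAMPLE_RATE // 10)))
```

Each stage maps directly onto one block of the chain: generate a harmonically rich wave, remove frequencies, then control the level over time.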


Music Analysis – Textural Components

Reverb is a great textural component in music because it gives a sound atmosphere: a large number of echoes build up and then slowly decay as the sound is absorbed by the walls and the air. This creates a natural feel for the instrument because it imitates space and time, reflecting how long sound takes to travel across a room and how big that room is. Of course, I'm speaking here about virtual instruments being processed by a reverb effect within a DAW.

Using real instruments in different rooms or spaces, however, creates natural reverb, which has a more organic feel. A great example is playing an instrument out in an open field compared with playing it in a small room with objects placed around it. Because the field is open and has no restrictions such as walls, the sound feels much lighter, with more space to travel. A small room full of objects, by contrast, gives off a tight-sounding reverb because there is less space for the sound to move, and the sound waves now have objects and walls to reflect off and be absorbed by. This determines the type of reverb you get for a specific instrument.

The interval between the initial direct arrival of a sound wave and the last audible reflected wave is called reverberation time, and is a key feature when speaking about reverb, because it is the time required for reflections of a direct sound to decay. The optimum reverberation time for a space in which music is played depends on the type of music that is to be played in the space. Rooms used for speech typically need a shorter reverberation time so that speech can be understood more clearly. Reverberation effects are often used in studios to add depth to sounds. Reverberation changes the perceived harmonic structure of a note, but does not alter the pitch.

With the sound wave travelling in many directions, there will be late reflections as well as early reflections: some objects are closer to the sound source than others, so their reflections arrive at different times, all adding to the reverb sound.

The first early reflection reaches the listener milliseconds after the direct signal does, because the path of the early reflections is longer than the direct path. The difference in arrival time between the direct signal and the first early reflections is measured in milliseconds. The sound reflects off the walls and objects in the room; over time the individual reflections disappear and the reverb tail develops.

The time between when the direct signal is heard by the listener and the start of the reverb is called pre-delay. This is a parameter in many digital reverb effects, expressed in milliseconds; it refers to the amount of time between the original dry sound and the audible onset of the early reflections and reverb tail. Carefully adjusting the pre-delay parameter makes a huge difference to the "clarity" of a mix. For example, a longer pre-delay will move the reverb tail out of the way of the vocals, making them much more present and understandable. Here is a picture demonstrating the different reflections of the reverb.

Early Reflections
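The millisecond figures above come straight from geometry: pre-delay is the extra travel time of the first reflection relative to the direct path. A sketch assuming sound travels at roughly 343 m/s; the distances in the example are made up:

```python
SPEED_OF_SOUND = 343.0  # metres per second, in air

def predelay_ms(direct_path_m, reflected_path_m):
    """Arrival-time gap between the direct sound and a reflection,
    in milliseconds, given the lengths of the two paths."""
    extra_metres = reflected_path_m - direct_path_m
    return 1000.0 * extra_metres / SPEED_OF_SOUND

# Listener 3 m from the source; first reflection travels 10 m in total.
gap = predelay_ms(3.0, 10.0)  # roughly 20.4 ms
```

This is why larger rooms produce longer pre-delays: the reflected paths are that much longer than the direct one.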

Both my songs were recorded in a studio. My first song, by Pink Floyd, has textures that are more harmonic and natural, suggesting a homophonic texture. They would have added a more natural feel by applying reverb to certain instruments. Because sound is absorbed in a studio and doesn't bounce off the walls, the raw recording sounds flatter and less real, so adding reverb makes it more believable: the guitar, for example, would be given reverb to suggest the song was recorded in a large environment with lots of space, giving it that airy, melancholic feeling. When shaping the reverb you tweak the room size, pre-delay and damping, which control the late and early reflections, really making it sound like it was recorded in a specific room type such as a large hall or a field. The second song is an electronic-styled piece created using a DAW, so reverb is vital in giving it atmosphere, especially since it's a trance track and trance is all about atmosphere. The song suggests being either polyphonic, meaning there's more than one leading melody, or homophonic, with one leading melody and background sounds supporting it.

There are four main types of texture. The most common one found in modern music is homophonic: multiple voices, of which one, the melody, stands out prominently while the others form a background of harmonic accompaniment.

The most basic textural type is monophonic: a single melodic line with no accompaniment. Another textural type is heterophonic, in which two or more voices simultaneously perform variations of the same melody. The last main textural type is biphonic: two distinct lines, the lower sustaining a drone (constant pitch) while the other creates a more elaborate melody above it.


A disc jockey, or DJ, is a person who mixes recorded music for an audience. Today the term covers all forms of music playback, no matter the medium. There are several types of disc jockey: radio DJs, who introduce and play music broadcast on AM, FM, shortwave, digital or internet radio stations, and club DJs, who select and play music in bars, nightclubs, at raves, or even in stadiums.

Today digital DJs are very common all over the world, and many companies have developed programs to help with mixing and DJing. These programs imitate a DJ setup, with two or four decks and a mixer for EQing, and offer MIDI input so controllers can physically operate these parameters virtually. Some popular DJ programs are:

Native Instruments Traktor Pro 2 – Traktor just about comes out top thanks to its endlessly flexible performance features and its all-round intuitiveness and reliability. Traktor comes in two forms: Traktor Pro, which can be controlled using a traditional mouse and keyboard or one of an endless list of hardware DJ controllers, and Traktor Scratch, which is designed to be used in conjunction with one of Native Instruments’ digital vinyl setups.

Serato Scratch Live/DJ – Scratch Live is probably still the most popular digital vinyl system out there, but it’s only available when bought in conjunction with an official Rane audio interface. Serato DJ is the latest version of the company’s controller-orientated software, and it’s the first version of Serato that can be used with any MIDI controller.

Ableton Live – Ableton Live wasn’t initially designed as a DJ application; when the software first launched in 2001 it was pitched purely as a DAW or digital audio workstation, and marketed as a piece of music production software. It quickly proved immensely popular with DJs, however, thanks to its unique Session View – a window dedicated to launching synchronised loops and the intuitive way it handles the retiming of audio files.

Atomix VirtualDJ Pro 7 – For many years VirtualDJ had a bad reputation, which could be traced back, in part, to the fact that its earliest incarnations were relatively toy-like DJ applications. Recent versions, however, have become very impressive and able to compete with the bigger names. Notably, version 7 can support up to 99 decks and can live-sample incoming audio.

Pioneer Rekordbox – Pioneer’s free Rekordbox application doesn’t actually handle any mixing, rather the app, which Pioneer describe as an “iTunes for DJs”, is a tool for prepping and managing audio files ahead of DJ sets. The software can be used to analyse tracks to discover their BPM, view waveforms, set cue points and edit track information, all of which can be read by Pioneer’s CDJs, which are the industry standard in clubs around the world.

Here is a list of DJ controllers used to work with the DJing software:

  • Numark Mixtrack 2
  • Numark Mixtrack Pro 2
  • Native Instruments Traktor Kontrol F1
  • Native Instruments Traktor Kontrol Z1
  • Native Instruments Traktor Kontrol X1 Mk2
  • Novation Launchpad S

There are many more DJ software packages and controllers; however, I'm more geared towards using a CDJ and mixer setup, as standard venues and parties around the world use. I will discuss the different functions as well as key aspects of each piece of equipment. For the DJ set I will be using the Pioneer CDJ-2000s, the Pioneer DJM-850 mixer and the Sennheiser HD 25 headphones. I feel it's necessary to understand these better, which is why I will be discussing them.

The Pioneer CDJ-2000s:

The main features of the CDJ-2000 are its compatibility with music CDs, CD-R, CD-RW, DVD±R, DVD±RW and DVD±R-DL, meaning it can take many different CD formats, such as MP3 or WAV CDs, as well as USB memory devices and SD memory cards, and WAV, AIFF, MP3 and AAC files on CD and DVD. For MP3 it reads MPEG-1 at 32–320 kbps and MPEG-2 at 16–160 kbps, so file quality can range from 32 kilobits per second up to 320 kilobits per second. It has MIDI control and Pro DJ Link to connect to virtual software like Traktor, and it comes with Pioneer's Rekordbox, which helps you prepare for DJ sets. A big win for all DJs is its anti-vibration design, meaning the loudness of the club speakers will not affect the CDJs or the mix.

The specifications for the CDJ-2000 are very good, and they show why these are the industry standard. The frequency response is 4 Hz – 20 kHz, meaning the machine reproduces frequencies ranging from as low as 4 Hz (Hz being the measurement of frequency) to as high as 20 kHz. The signal-to-noise ratio is 117 dB, and distortion is very low at 0.0018%. The audio output level is 2.0 V RMS at 1 kHz, 0 dB. Power consumption is a modest 28 W, which helps with long sessions, and the power requirements of 220–240 V, 50/60 Hz suit standard mains connections. The dimensions are 320 x 106.4 x 405.7 mm, and the net weight of 4.6 kg is fairly light, which is good for mobility.

The Mixer I will be using is the Pioneer DJM850.  This mixer combines incredible technology with the DJ’s favourite features and effects.

It has four channels, one hundred effect combinations and endless creative possibilities for a DJ to use during a set. The DJM850 is ready for plug and play, with preconfigured, studio-quality effects accessible at the touch of a button, which is convenient while DJing. The mixer rivals the creative possibilities of software, converting DJs from their laptops and allowing a more natural DJing experience. I prefer this approach as, in my opinion, it's the "real" way to DJ; by real I mean the more classical way, whereas using a controller and laptop frees up a bit of time for the DJ to focus on other things like mashups and adding more effects. A mixer is basically a sound card receiving signals from the CDJ inputs, a platform for mixing the tracks being played. There are very basic mixers designed to do that job and only that, excluding the effects and other features the DJM850 has. If DJs want to be more creative with their music, it is better to have a mixer with more features, allowing for versatility during their sets.

The DJM850 boasts an integrated 4-channel high-performance sound card, enabling simultaneous input and output with 24-bit/96 kHz processing, so there is no deterioration of sound quality as the signal passes through the mixer.

With three sampling rates (96 kHz/48 kHz/44.1 kHz), the DJM850 can be used for music recording and production as well as expert DJing. Connection to PCs or laptops requires just one USB cable, so DJs can start mixing their stored music immediately. Software devotees can make the most of scratch control thanks to the mixer's compatibility with the timecode feature of digital vinyl systems such as Traktor or Serato. Pioneer's handy utility tool launches as soon as the DJM850 is connected to a computer, allowing DJs to configure the mixer, sound card and audio routing according to their personal preferences. This is great for creating that personal feel; it's always better and easier for DJs when they feel comfortable on the equipment, so this is a really good feature of the DJM850.

An incredible, industry-first feature is that the DJM850 harnesses the power of the highly popular studio technique of sidechaining to add a new element to Pioneer's wonderfully simple Colour Effects, without making them any more complicated to use. By engaging the Beat button, another dimension of control is automatically added when you apply the Colour Effects. The Beat Colour Effect 'listens' to the audio input of each channel and directly connects the rhythmical changes in volume to another parameter: resonance for Filter and Crush, beat repeat for Cutter, and ducking volume for the Noise effect. This extra layer of control has become a staple of dance music production because of the way it blends new sounds and effects perfectly into the mix.

The mixer also offers DJs 13 enhanced Beat Effects. The DJM850 inherits the incredibly high-quality Reverb and tripped-out sound of the tape-echo-inspired Spiral effect, and introduces Up Echo, which produces a more reserved and controllable mix. The 'FX Boost' function allows the Level/Depth knob to work more proactively than a standard wet/dry control: turn the dial to the 12 o'clock mark for a fully wet effect, and turn it further to add Pitch Up to Up Echo and a high-pass filter to the Reverb. Each channel is home to a three-band equalizer (+6 dB to -26 dB) or three-band isolator (+6 dB to -∞ dB).

The incredible sound reproduction with reduced noise interference is due to the DJM850's top-of-the-range components: a 32-bit output D/A converter; a 32-bit digital signal processor; the separation of analogue and digital circuitry; and the shortest possible transmission path. Here 32-bit refers to the resolution of each audio sample, which is very high and gives really good quality audio. An excellent extra feature is that the mixer is fully MIDI-assignable, so the DJM850 also serves as a MIDI controller. The USB port is conveniently located on top of the mixer so DJs can easily switch between connected devices.

Now that the main equipment has been discussed and reviewed, and given that I've had experience on these machines, I can definitely say they live up to their reputation. They are incredible for sound quality, quick at transferring information, extremely fun to play on, and have amazing features that any DJ would enjoy using.

However, if you do not have a good set of headphones your mix will not be as strong as it should be. Bad headphones mean bad-quality cueing: a poor frequency response means some of the frequencies in the tracks will not be heard, and beatmatching could be a struggle. Long exposure to loud sound, especially through bad headphones, is also never healthy.

I will be discussing the Sennheiser HD 25-SP IIs, because they are said to be one of the best headphone sets a DJ can use, and many of the top DJs around the world have praised them.

The HD 25-SP II are closed, dynamic headphones; they are extremely comfortable, featuring a minimalistic headband and a unique capsule design. The main features include being very lightweight and comfortable, even when used for long periods. They have a high maximum sound pressure level and a good frequency response (30 Hz – 16 kHz), with a 65-ohm nominal impedance for universal compatibility. The physically tough, detachable OFC cable is good for durability, especially with DJs constantly on the move. They are specially designed to sit on top of your ears rather than just cover them, which helps the ear hear a crystal-clear sound for cueing while DJing.

Musical Analysis – Sonic Components

When speaking about the sonic components of music, you are referring to the concepts of sound spectrum, frequency range, equalization, dynamics, sound contrast, mix, panning, and effects processing. These will be the things discussed in the following blog post; I will explain the concepts and underline the main aspects of each while discussing contrasting ideas between them.

There are different types of sonic components. Natural components concern the analysis of sound in nature and urban environments: the different ways sounds are heard depending on the environment you are in. The placement of objects in and around a room will have a different effect on the sound, manipulating it as it bounces off or is absorbed by the objects in the room; a great example is a drummer playing in different environments, creating various reverbs within each. Frequency range and the sound spectrum are part of natural components, as these elements are found in natural sounds and are ways of measuring their frequencies.

Frequency Range

The audible frequency range covers periodic vibrations whose frequencies can be heard by the average human. Frequency is the property of sound that most determines pitch and is measured in hertz (Hz). The standard range of audible frequencies is 20 to 20,000 Hz, although the range individuals hear is greatly influenced by environmental factors. Frequencies below 20 Hz are generally felt rather than heard, while frequencies above 20,000 Hz can sometimes be sensed by young people. High frequencies are the first to be affected by hearing loss due to age or prolonged exposure to very loud noises.

Sound Spectrum

A sound spectrum displays the different frequencies present in a sound. Most sounds are made up of a complicated mixture of vibrations, and a sound spectrum represents a sound in terms of the amount of vibration at each individual frequency. It is usually presented as a graph of either power or pressure as a function of frequency, with the power or pressure measured in decibels and the frequency measured in vibrations per second (hertz), or thousands of vibrations per second (kilohertz).
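To make the idea concrete, here is a slow, textbook discrete Fourier transform that measures how much of each frequency a short block of samples contains. This is an illustrative sketch, not how real analysers work (they use the much faster FFT); the names and sizes are my own:

```python
import cmath
import math

def dft_magnitudes(samples):
    """Magnitude of each frequency bin (up to half the sample count)
    via a direct discrete Fourier transform."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n // 2)]

# A pure tone completing 8 cycles across 64 samples should show a
# single spectral peak at bin 8.
tone = [math.sin(2 * math.pi * 8 * t / 64) for t in range(64)]
mags = dft_magnitudes(tone)
peak_bin = max(range(len(mags)), key=mags.__getitem__)  # 8
```

A real instrument note would show energy in many bins at once — the fundamental plus its harmonics — which is exactly the "complicated mixture of vibrations" a spectrum display reveals.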

Sonic Field

In music there is something called the sonic field: an invisible plane described only in time and space with regard to the audio being heard. The sonic field includes components such as the mix, which means creating a balanced blend of the instruments in a song and sculpting the song to have a certain feel; this involves careful adjustment of instrument levels and EQ. Balancing the stereo field means placing instruments so as to create depth within the sonic field, either by panning them or lowering their volume to create a distant feel; sometimes reverb is used to make an instrument sound distant, as if it is being played in the background. This also includes blending certain instruments for clarity, generally to emphasize a specific point in the mix. Some mixes sound flat and boring; others give the illusion that you've been enveloped in a vast 3D universe where every sound and every track exists and flourishes in its very own sonic space.

Effects and Sound Processors

Effects are electronic devices that alter how a musical instrument or audio source sounds. Some effects subtly “colour” a sound, while others transform it dramatically. Effects are used during live performances or in the studio, typically with electric guitar, keyboard and bass. While most frequently used with electric or electronic instruments, effects can also be used with acoustic instruments, drums and vocals. Examples of common effects units include wah-wah pedals, fuzzboxes and reverb effects. These effects are all hardware effects and are generally used as sends within the mix, meaning the sound or effect is only added when programmed to.

There are also many types of virtual effects used in a DAW, including distortion, overdrive, flanger, echo, delay, reverb and chorus; there are many more, all used to manipulate or "colour" the sound. Delay, echo, reverb, flanger and chorus are considered time-based effects, while dynamics-based effects such as compressors, gates and limiters control level, and filters shape frequencies. Effects are a great way of adding character to a mix, helping to create a range of dynamics and to bring out certain frequencies so that elements stand out rather than blending together.

I use a mix of different dynamics-based effects, mainly compressors and gates. The gates are used to create that melodic, uplifting feel in mixes, and I use sidechain compression to create unique blends between different instruments or sounds. The time-based effects give off that spacey feel within the stereo field; things like ping-pong delays and reverbs are used in my mixes to help create space.


SONG 1: The instruments used are guitar, bass, piano, violin, chimes, hi hats, kick drum, synths, trumpets and vocals.

The hi-hats have been panned left and right at different times, creating a sweeping beat from left to right; the vocal has been panned slightly to the left to create focus on it while freeing up space to hear the other instruments. The electric guitar has been made stereo to create that rounded space, and the trumpet has been placed in the centre as a focal point for the mix, giving it a centred lead sound the ears can follow easily. Kick and bass have been placed in the usual spot as the middle low end of the mix, and various synth sounds and effects have been placed far left or right to add character to the song. The instruments are blended together to give off that airy feel, making the mood very melancholic and slow, and the differently panned instruments add to the ambient surrounding space within the track.

The frequencies of each instrument:

Piano – 27Hz to 4.2kHz

Electric Guitar – 82Hz to 1.2kHz

Violin – 200Hz to 1.3kHz

Chimes – 200Hz to 12kHz

Trumpet – 165Hz to 1.2kHz

Male Vocal – 120Hz to 16kHz

The different instruments have been EQ'd, meaning frequencies have been removed from one instrument to free up space on the sound spectrum for other instruments occupying those frequencies. This is done because when one instrument is dynamically louder than another, the frequencies get squashed together; by EQing an instrument you make way for the other frequencies to come through.

Song 2: This song was made using a DAW, and therefore not many acoustic instruments are present in the mix. The producer uses synths to create the various sounds; the only traditional instruments present are the drums and percussion. This is a trance track and has been mixed accordingly, making certain elements appear further away than others and having the bassline play right in your face to help distinguish which sounds to "follow". The different sounds have also been EQ'd to fit the frequency ranges of each sound, making the mix clearer.

Music Analysis – Musical Components

Elements of Music:

Sounds may be perceived as pleasant or unpleasant. To understand this better we need to ask: what is "sound"? What are these sounds that we hear? What causes them, and how do we hear them?

Sound begins with the vibration of an object, such as a table that is pounded or a string that is plucked. The vibrations are transmitted to our ears by a medium, which is usually air. As a result of the vibrations, our eardrums start vibrating too, and signals are transmitted to the brain, where they are selected, organized, and interpreted.

The 11 musical components I'll be discussing and explaining are: arrangement, structure, tonality, harmony, timbre, dynamics, rhythm, melody, texture, tempo, and instrumentation.


An arrangement is the adaptation of a previously written musical composition for presentation. It may differ from the original form by re-harmonization, paraphrasing or development of the melodic, harmonic, and rhythmic structure. Arranging is the art of giving an existing melody musical variety.

In popular music an arrangement is a setting of a piece of music, which may have been composed by the arranger or by someone else. Most commonly, this is a matter of providing instrumentation for the songwriter or composer’s basic melody and harmony. It may add details omitted by the composer, or it may replace those originally given and be merely based on the original work.

In classical music an arrangement is a setting of any composition for a different medium other than the one for which it was created: e.g. a piano piece may be arranged for full orchestra, or an orchestral composition may be arranged for solo piano. Often arrangement involves considerable reworking of the original material, in conformance with the resources of the final medium. An arrangement may specify or vary some or all of:

  • Harmonies, including parts
  • Instrumentation
  • Style, dynamics and other instructions to the players
  • Sequence, including the order and number of repeated sections such as verses and choruses, and provision of sections to be improvised by instrumentalists
  • Introduction, coda, modulations and other variations


Structure is the musical form of a musical composition. The term is used in two senses: to denote a standard type, or genre, and to denote the procedures in a specific work. The terminology for the various musical types may be determined by the medium of performance, the technique of composition, or by function. The proper perception of a musical work depends on the ability to associate what is happening in the present with what has happened in the past and with what one expects will happen in the future. The fulfilment of such expectations and the resulting tensions and releases are basic to most musical works.
Musical form depends on the disposition of certain structural units successively in time. The basic principles can be learned from a brief consideration of melody, which may be defined as an organized succession of musical tones. This succession of tones consists of component parts, the principal of which is the phrase: a complete musical sequence roughly corresponding to what can be sung in one breath or played with a single stroke of the bow. The relation between these component phrases is important for form. There may, for instance, be a complementary grouping of phrases as antecedent and consequent, or “question and answer”.
Tonality, in music, is the principle of organizing musical compositions around a central note, the “tonic”. Generally, any music periodically returning to a central, or focal, tone exhibits tonality. More specifically, tonality refers to the particular system of relationships between notes, chords, and keys (sets of notes and chords) that mostly dominated Western music from 1650 to 1900 and that continues to regulate music heard around the world today. Tonality is sometimes used as a synonym for the related concept of key. Sometimes called major–minor tonality, this system uses the notes of the major and minor scales, which comprise five whole tones and two semitones. Within each key there is a specific hierarchy of strong and weak relationships of notes and chords both to the keynote, or tonic note, and to the chord built on that note, the tonic chord. Different keys are also closely related to the principal, or tonic, key. In this system of tonal relations, the notes and chords within a given key can create tension or resolve it as they move away from or toward the tonic note and chord. Likewise, any modulation or movement away from the tonic key creates tensions that may then be resolved by modulation back to the tonic. The potential for contrast and tension seen in the chord and key relationships of tonality became the basis for 18th-century musical forms such as the sonata.
In music, harmony is the use of simultaneous pitches, tones, notes, or chords. The study of harmony involves chords and their construction and chord progressions and the principles of connection that govern them. Harmony is often said to refer to the “vertical” aspect of music, as distinguished from melodic line. Counterpoint, which refers to the interweaving of melodic lines, and polyphony, which refers to the relationship of separate independent voices, are sometimes distinguished from harmony.
Most harmony comes from two or more notes sounding simultaneously. However, a piece can imply harmony with only one melodic line by using arpeggios. Many pieces from the Baroque period for solo string instruments, such as Bach's sonatas and partitas for solo violin and his suites for solo cello, convey a subtle sense of harmony through inference rather than full chordal structures. These works create a sense of harmony by using arpeggiated chords and implied basslines. The implied basslines are created with low notes of short duration that many listeners perceive as being the bass note of a chord.
In music, timbre, also known as tone colour or tone quality, is the quality of a musical note, sound or tone that distinguishes different types of sound production, such as voices and musical instruments: string instruments, wind instruments, and percussion instruments. The physical characteristics of sound that determine the perception of timbre include spectrum and envelope. Timbre is what makes a particular musical sound different from another, even when they have the same pitch and loudness. For example, it is the difference between a guitar and a piano playing the same note at the same loudness. Experienced musicians are able to distinguish between different instruments based on their varied timbres, even if those instruments are playing notes at the same pitch and loudness.

Taken from Wikipedia: “Timbre has been called ‘…the psychoacoustician's multidimensional waste-basket category for everything that cannot be labeled pitch or loudness.’” Many commentators have attempted to decompose timbre into component attributes. For example, J. F. Schouten (1968, 42) describes the “elusive attributes of timbre” as “determined by at least five major acoustic parameters”, which Robert Erickson (1975) finds “scaled to the concerns of much contemporary music”:

  1. The range between tonal and noiselike character
  2. The spectral envelope
  3. The time envelope in terms of rise, duration, and decay (ADSR—attack, decay, sustain, release)
  4. The changes both of spectral envelope (formant-glide) and fundamental frequency (micro-intonation)
  5. The prefix, or onset of a sound, quite dissimilar to the ensuing lasting vibration

Erickson (1975, 6) gives a table of subjective experiences and related physical phenomena based on Schouten's five attributes:

Subjective – Objective
Tonal character, usually pitched – Periodic sound
Noisy, with or without some tonal character, including rustle noise – Noise, including random pulses characterized by the rustle time (the mean interval between pulses)
Coloration – Spectral envelope
Beginning/ending – Physical rise and decay time
Coloration glide or formant glide – Change of spectral envelope
Microintonation – Small change (one up and down) in frequency
Vibrato – Frequency modulation
Tremolo – Amplitude modulation
Attack – Prefix
Final sound – Suffix


In music, dynamics normally refers to the volume of a sound or note, but can also refer to every aspect of the execution of a given piece, either stylistic (staccato, legato etc.) or functional (velocity). The term is also applied to the written or printed musical notation used to indicate dynamics. Dynamics are relative and do not refer to specific volume levels.

The two basic dynamic indications in music are:

  • p or piano, meaning “soft”.
  • f or forte, meaning “loud”.

More subtle degrees of loudness or softness are indicated by:

  • mp, standing for mezzo-piano, meaning “moderately soft”.
  • mf, standing for mezzo-forte, meaning “moderately loud”.

Beyond f and p, there are also

  • pp, standing for “pianissimo” and meaning “very soft”.
  • ff, standing for “fortissimo” and meaning “very loud”.

To indicate an even softer dynamic than pianissimo, ppp is marked, read as “piano pianissimo” or pianissimo possibile (“softest possible”). The same is done on the loud side of the scale, with fff being “forte fortissimo” or fortissimo possibile (“loudest possible”).
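
Since dynamics are relative, there is no universal conversion between these markings and actual levels, but sequencers often map them onto MIDI velocities (0–127). The values below are one illustrative convention, not a standard; every program uses a slightly different table:

```python
# One common (but not standardized) mapping of dynamic marks to MIDI velocity.
DYNAMICS = {
    "ppp": 16, "pp": 33, "p": 49, "mp": 64,
    "mf": 80, "f": 96, "ff": 112, "fff": 127,
}

def velocity(mark: str) -> int:
    """Look up an approximate MIDI velocity for a dynamic marking."""
    return DYNAMICS[mark.lower()]

print(velocity("mf"))   # 80 — moderately loud
```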


Rhythm, in music, is the placement of sounds in time. In its most general sense, rhythm is an ordered alternation of contrasting elements.
Unlike a painting or a piece of sculpture, which are compositions in space, a musical work is a composition dependent upon time. Rhythm is music's pattern in time. Whatever other elements a given piece of music may have, such as patterns in pitch or timbre, rhythm is the one indispensable element of all music. Rhythm can exist without melody, as in a solo drumbeat, but melody cannot exist without rhythm. In music that has both harmony and melody, the rhythmic structure cannot be separated from them. Plato's observation that rhythm is “an order of movement” provides a convenient analytical starting point.
A melody is a linear succession of musical tones that the listener perceives as a single entity. In its most literal sense, a melody is a combination of pitch and rhythm, while more figuratively, the term can include successions of other musical elements such as tonal colour. It may be considered the foreground to the background accompaniment. A line or part need not be a foreground melody. Melodies often consist of one or more musical phrases or motifs, and are usually repeated throughout a composition in various forms. Melodies may also be described by their melodic motion or the intervals between pitches (predominantly conjunct or disjunct, or with further restrictions), pitch range, tension and release, continuity and coherence, cadence, and shape.


Texture is the way the melodic, rhythmic, and harmonic materials are combined in a composition, therefore determining the overall quality of the sound in a piece. Texture is often described in regard to the density, or thickness, and range, or width between lowest and highest pitches, in relative terms as well as more specifically distinguished according to the number of voices, or parts, and the relationship between these voices. A piece’s texture may be affected by the number and character of parts playing at once, the timbre of the instruments or voices playing these parts and the harmony, tempo, and rhythms used. In music, some common terms for different types of texture are:

  • Monophonic – Monophonic texture includes a single melodic line with no accompaniment.
  • Biphonic – Two distinct lines, the lower sustaining a drone (constant pitch) while the other line creates a more elaborate melody above it.
  • Polyphonic or Counterpoint – Multiple melodic voices which are to a considerable extent independent from or in imitation with one another.
  • Homophonic – The most common texture in Western music: melody and accompaniment. Multiple voices of which one, the melody, stands out prominently and the others form a background of harmonic accompaniment. If all the parts have much the same rhythm, the homophonic texture can also be described as homorhythmic.
  • Homorhythmic – Multiple voices with similar rhythmic material in all parts. Also known as “chordal”. May be considered a condition of homophony or distinguished from it.
  • Heterophonic – Two or more voices simultaneously performing variations of the same melody.
  • Additive – A texture most commonly found in rock music that starts off mono or homophonic, and gradually changes and builds up to polyphonic. This also refers to the volume of a song.


In musical terminology, tempo is the speed or pace of a given piece. Tempo is a crucial element of most musical compositions, as it can affect the mood and difficulty of a piece. The tempo of a piece will typically be written at the start of a piece of music, and in modern Western music is usually indicated in beats per minute. This means that a particular note value is specified as the beat, and the marking indicates that a certain number of these beats must be played per minute. The greater the tempo, the larger the number of beats that must be played in a minute, and therefore the faster a piece must be played. Tempo is as crucial in contemporary music as it is in classical. In electronic dance music, accurate knowledge of a tune's BPM is important to DJs for the purposes of beatmatching.
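
Beats per minute converts directly into time: one beat lasts 60,000 ms divided by the BPM. A quick sketch of that arithmetic (the tempos are just example values); delay times and DJ beatmatching calculations both start from this figure:

```python
def beat_ms(bpm: float) -> float:
    """Duration of one beat in milliseconds at a given tempo."""
    return 60_000 / bpm

print(beat_ms(120))  # 500.0 — half a second per beat
print(beat_ms(140))  # ≈ 428.6 ms, a typical trance tempo
```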
Instrumentation refers to the particular combination of musical instruments employed in a composition, and to the properties of those instruments individually. Instrumentation is sometimes used as a synonym for orchestration, which more properly refers to an orchestrator's, composer's or arranger's craft of employing instruments in varying combinations. Writing for any instrument requires a composer or arranger to know the instrument's properties, such as:

  • the instrument’s particular timbre, or range of timbres
  • the range of pitches available on the instrument, as well as its dynamic range
  • the constraints of playing technique, such as length of breath, possible fingerings, or the average player’s stamina
  • the relative difficulty of particular music on that instrument (for example, repeated notes are much easier to play on the violin than on the piano, while trills are relatively easy on the flute but extremely difficult on the trombone);
  • the availability of special effects or extended techniques, such as col legno playing, flutter tongue, or glissando;
  • the notation conventions for the instrument.

The two songs whose musical components I've compared are Pink Floyd – Comfortably Numb and Neelix – Expect What: two completely different types of songs. Pink Floyd is psychedelic rock and Neelix is psychedelic trance. Both have a psychedelic influence, but each has different components.

1st Song: Pink Floyd – Comfortably Numb

Arrangement: The instruments are arranged in such a way as to create a progressive journey, starting off slowly with the emotional pads and slow drums and percussion and building up to the loud and moody guitars. The song builds on a 7/8 time signature, building in energy, but a slow, melancholic kind of energy, one that keeps the depressive mood throughout the song.

Structure: The structure is based around a 7/8 time signature and has a lot of progression throughout, building in melancholic energy and rising to the main guitar solo. The guitars are structurally well thought out, creating the feel of being led through this dark journey. The drums are well placed, keeping that slow tempo throughout the song while also helping create the progression. The synth sounds and pads are structurally placed in such a way as to enclose the piece in this emotional atmosphere.

Dynamics: The loudness of the sounds increases ever so progressively, giving off a feeling of anticipation of some sort of high-energy lead while staying within the melancholic tone.

Tonality: This piece was written in a minor key, giving off that dark emotional atmosphere. The tonality helps to create the mood of the song, and the tone of each instrument gives off an individual dark emotion, helping to contribute to the emotionally progressive piece.

Harmony: The instruments work in perfect harmony, creating the beautifully structured melancholic progression. The synth and guitar sounds create an emotionally moving atmosphere leading up to the high-energy electric guitar lead. The male vocal works with all the instruments, guiding them through the dark emotions they create.

Melody: There is a slow, sorrowful melody evoking a lot of emotion, cleverly carried by different instruments: slow, high-pitched synth pads create that airy atmosphere while the loud, melancholic guitar puts emphasis on the emotional journey.

Timbre: There is a dark kind of timbre to this song, with clear vocals and synth pads creating a quality atmosphere within the song. It gives off the impression of a slow journey of melancholic ideas and emotions, and sounds like it was made in a minor scale.

Rhythm: It's got a slow, progressive rhythm, with high-energy leads adding to the melancholic feel of the song, and beautiful rhythmic guitar leads and synth sounds.

Tempo: The tempo is slow to progressive. It's used to create a sense of intimacy within the piece of music. The tempo of this song, along with the other elements, creates an emotional atmosphere and gives an idea of importance.

Instrumentation: There is a guitar, male vocal, drums, bass guitar and synthesis within the piece. The way these instruments have all been arranged allows the piece to breathe, and they create progression within the piece of music. It's really clever how the instruments work together to create a melancholic atmosphere.

Texture: To me it sounds like a homophonic texture, using one main guitar lead surrounded by dark atmospheric pads and slow drums and percussion to add to the feel of progression throughout the piece of music.

2nd Song: Neelix – Expect What

Arrangement: The arrangement is a typical Trance arrangement and structure: lots of psychedelic-sounding pads and effects to create atmosphere, with a driving lead to create direction and energy. Since it's progressive trance there is a lack of high-intensity energy, but it still has that hands-up bouncy feel to it.

Structure: The time signature is a simple 4/4 Trance-dance structure, giving that full-on bouncy feel. The structure is typical of a trance track: Intro–Break–Middle–Break–End. It has a smooth intro full of pads that leads up to the first break, then a middle part of just pads, followed by a build-up to the main break, finishing off with a closing break for DJs to mix into another track.

Dynamics: The dynamics are loud and spacious, creating a spacey, large atmosphere. The leads and effects set the boundaries of this space by giving it direction and movement. There is very creative use of colour within the track, giving it a light, happy kind of feel and removing any dark emotion from the sounds, while still having a very emotionally influenced sound.

Tonality: This track to me sounds like it was produced in a minor scale, however it gives off the impression of being in major by being so happy and uplifting. It takes a good sense and understanding of tones to produce sounds like this and give off such a specific harmonious tone.

Timbre: There is a very light and melodic timbre to this piece of music. You could say the quality of the sounds is very pure and bright, creating the uplifting sort of feel to the music.

Harmony: The musical instruments and sounds work nicely to create the light harmonic feel of the song. The sounds and pitches are produced in the same key, with variations in each instrument giving it that harmonious feel.

Melody: The melody is bright and uplifting, creating a sense of good feeling and happiness; the melody tells you to get up and dance. Really uplifting sounds and driving effects create a beautiful trance melody, with progressive elements giving off a relaxed party atmosphere.

Rhythm: It has a nice upbeat, bouncy rhythm to it. The use of drums and leads makes it so, and creates a bouncy sort of energy throughout the song.

Instrumentation: This is electronic music, so synths and MIDI inputs have mainly been used, including drums, bass synths, lead synths, guitar sounds and hi-hat percussion. These have been used in an interesting way to give off the happy kind of bounce in the song.

Tempo: The tempo is set at 140 BPM, which creates the driving feel of the bass and lead synths. Trance is generally created at speeds ranging from 128 to 190+ BPM. The tempo gives the track the high-energy bounce that comes from the drums, and the tempo of the synths gives that uplifting kind of feel to the track.

Texture: I would say this track is also homophonic, as there is a main lead synth surrounded by various effects and pads. This creates beautiful texture, as all of the instruments work in harmony together to create a rich colour of sounds and a great sense of euphoria.

Computer Production – Reason Blog

I started out with a simple drum pattern using the Kong Drum Designer. I used Kong because I enjoy how you can modulate the parameters of the device to get different-sounding drums and kicks that flow more nicely with the mix, creating a crisp, punchy kick. I usually find it best to layer the drum patterns, including snares and hi-hats, to get a more organic sound in the drums. Increasing the drum pitch creates a punchier attack, but not too high, otherwise all the sub will be taken out of the kick. My kicks tend to gear towards a more full-on psychedelic trance kind of feel, so they have a shorter decay and a punchier pitch and tone. What I did was layer one more kick to give it more room: I added a reverb on this kick but made the size of the room just big enough to feel the “spaciness” of the kick.

To make the bass sound I used the Subtractor synthesizer because it produces a very bass-like sound, and the parameters of the device are great for tweaking the bass to sound the way I wanted it to. Generally a “psy” bass is made of a sawtooth wave transposed one or two octaves down to give it more of a sub feeling. I generally layer my bass with a square waveform to give it a lighter, funkier feel and a more driving groove. To build my bass I had a saw and a square wave at low octaves, the square a bit higher than the saw. I added a low-pass filter and lowered the cutoff frequency to cut away all the high frequencies, getting the fuller, deeper sound of the bass. Sidechain compression is important in making the kick and bass sit tightly together without any low frequencies overpowering one another. Sidechaining is when you take the signal from one input and squash it whenever a sound from a different input hits, meaning every time the kick hits, the compressor will compress the volume of the bass. This helps create that driving bassline feel and eliminates all the messiness.
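
As a sketch of the sidechain idea (a real compressor also has attack and release envelopes, a threshold in dB, and so on), the bass gain can simply be reduced whenever the kick signal is loud; the threshold and ratio values here are made-up examples:

```python
def sidechain_duck(bass, kick, threshold=0.5, ratio=4.0):
    """Crude sidechain: when the kick's level exceeds the threshold,
    reduce the bass by the given ratio."""
    out = []
    for b, k in zip(bass, kick):
        gain = 1.0 / ratio if abs(k) > threshold else 1.0
        out.append(b * gain)
    return out

bass = [0.8] * 8
kick = [0.0, 0.9, 0.9, 0.1, 0.0, 0.9, 0.9, 0.0]   # kick hits on the beat
print(sidechain_duck(bass, kick))
# The bass drops to a quarter of its level exactly where the kick hits,
# leaving room for the kick's low end.
```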

I like Reason for one main reason, and that's the devices they provide you with, because you can come up with extremely creative and intricate sounds. However, no third-party plugins are allowed, which makes it all the more important to learn how to use the provided instruments. I generally enjoy using Thor because it has many interesting parameters that can be used to create quality analog or digital sounds. I create my melodies and sound FX using Thor and Malström; Malström produces very atmospheric sounds, which I appreciate using to give the mix more fullness or openness, giving the track more breathing room. Using reverbs is a great way of creating airiness in a specific sound. Dr. Octo Rex is a great way of chopping up samples of different sounds to create a whole new sound. I usually adjust the LFO settings; LFO means low frequency oscillator, and the effect is the sweeping sound you can create by routing it to the filter, making it move from low frequencies to high frequencies, almost like automation but not as controlled. LFOs are great for creating “wobble”-like sounds, as heard in dubstep basslines for example. But an LFO on certain sounds and destinations can add beautiful effects, and could be a great way of creating a unique-sounding melody.
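
An LFO is just a slow oscillator whose output is added to some parameter, such as a filter cutoff. A minimal sketch, with made-up rate and depth values:

```python
import math

def lfo(rate_hz, depth, centre, t):
    """Sine LFO value at time t (seconds): oscillates around `centre` by ±depth."""
    return centre + depth * math.sin(2 * math.pi * rate_hz * t)

# A 0.5 Hz LFO sweeping a filter cutoff between 200 Hz and 1800 Hz:
for t in [0.0, 0.5, 1.0, 1.5]:
    print(round(lfo(0.5, 800, 1000, t)))   # 1000, 1800, 1000, 200
```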

Creating sweeping noises is a great way of signalling transitions in the mix and of channelling the sounds into certain patterns. These could be white-noise rises and sweeps or even pitch-modulation sweeps, which sound like rising electricity: a great way of boosting the energy of the mix in a certain area. “Psy” has a lot of those, very intense and dramatic build-ups to a simple yet effective drop of bass and kick. Song structure is a very sketchy thing that I'm still trying to learn, especially with “psy”, as it has minimal or no fixed song structure compared to genres like techno, where the structure is not so free-form and certain sounds or drops need to be done in a certain way to make the track more exciting. “Psy” has a basic beginning, middle and end: there will be a long intro filled with atmospheric sounds, then the first drop, followed by a breakdown into more atmospheric sounds, usually followed by samples of voices talking about life, and then the build-up and the drop. This usually happens about three times in the space of about eight minutes; the key is to be extremely creative with the sounds and their placing. I've learnt through producing this style of music that it works in 1/16 notes on a basic 4×4 beat structure, and the more innovative you are in the 1/16-note space, the better and more interesting the song will sound. I find this genre very fun and creative to make because there are literally no boundaries to what you can come up with, from very slow to extremely fast-paced songs.


Automation is a vital element in creating interesting movement within the mix, as it is used to modulate a certain parameter of a synth sound during the mix. For example, I automate the filter cutoff of a synth sound to move it from a lower frequency to a higher frequency, creating a rising sweep effect. Mapping the synth to a keyboard controller is an easier way of getting the automation to work more in your favour. You can automate all sorts of parameters, including the panning of a certain channel; for example, the hi-hats could be automated to pan left and right, creating that sort of ping-pong effect.
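
As a sketch of what an automation lane stores, a linear ramp is just evenly spaced parameter values between two points; the cutoff figures here are made-up examples, and a pan ping-pong would store alternating left/right values the same way:

```python
def automation_ramp(start, end, steps):
    """Linear automation: evenly spaced parameter values from start to end."""
    return [start + (end - start) * i / (steps - 1) for i in range(steps)]

# A filter-cutoff sweep from 200 Hz up to 2000 Hz over 5 automation points:
print(automation_ramp(200, 2000, 5))   # [200.0, 650.0, 1100.0, 1550.0, 2000.0]
```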

Adding effects is a great way of getting a sound to seem fuller, such as using delay. I use a lot of delay on my melodic sounds to create a very trance-like atmosphere; I think with psytrance it's important to create a lot of atmosphere in your mix, but also to pay close attention to controlling that atmosphere so it doesn't lose control and seem all over the place. Quantising beats is an interesting way of making the beats sound ordered. Quantising means snapping each note to a certain grid, such as a 16th-note grid, so the note snaps to each 16th note. Quantising does, however, make the sound more robotic and less organic, because a human can very rarely hit a note at exactly the same time with each hit. Groove templates do pretty much the opposite of quantising and add a bit more groove to the notes, making them hit at slightly different times while still feeling ordered but not robotic.
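
Quantising and groove templates are both small arithmetic operations on note start times. A sketch, assuming times are measured in beats so that a 16th note in 4/4 is 0.25 of a beat (the groove offsets are made-up example values):

```python
def quantize(times, grid):
    """Snap each note-on time (in beats) to the nearest grid division,
    e.g. grid=0.25 for 16th notes in 4/4."""
    return [round(t / grid) * grid for t in times]

played = [0.02, 0.27, 0.49, 0.76]     # slightly loose human timing
print(quantize(played, 0.25))         # [0.0, 0.25, 0.5, 0.75]

# A groove template does roughly the reverse: it nudges the snapped notes
# off the grid by small, repeating offsets.
groove = [0.0, 0.02, 0.0, -0.01]
print([t + groove[i % 4] for i, t in enumerate(quantize(played, 0.25))])
```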

I struggle a lot with sequencing and I'm still learning the correct way to sequence and layer my melodies. In Reason you have the edit mode, which opens the piano roll of the selected sound. This allows you to sequence the notes the way you want to, and to adjust the length of the MIDI clip as well as the velocity of each note. Audio, however, cannot be edited like this unless you run it through a slicing tool, which chops up the audio and breaks it down into MIDI. This is great at times for audio such as arpeggiated riffs or melodies that you would like to recreate to your own specific taste.


Studio Recording Techniques – Recording Devices

Sound recording and reproduction is an electrical or mechanical inscription and re-creation of sound waves, such as spoken voice, singing, instrumental music, or sound effects. The two main classes of sound recording technology are analog recording and digital recording.

Acoustic analog recording is achieved by a small microphone diaphragm that can detect changes in atmospheric pressure such as acoustic sound waves and record them as a graphic representation of the sound waves on a medium such as a phonograph in which a stylus senses grooves on a record.

In magnetic tape recording, the sound waves vibrate the microphone diaphragm and are converted into a varying electric current, which is then converted to a varying magnetic field by an electromagnet, which makes a representation of the sound as magnetized areas on a plastic tape with a magnetic coating on it.

Analog sound reproduction is the reverse process, with a bigger loudspeaker diaphragm causing changes to atmospheric pressure to form acoustic sound waves. Electronically generated sound waves may also be recorded directly from devices such as an electric guitar pickup or a synthesizer, without the use of acoustics in the recording process other than the need for musicians to hear how well they are playing during recording sessions.

Digital recording and reproduction converts the analog sound signal picked up by the microphone to digital form by a process of digitisation, allowing it to be stored and transmitted by a wider variety of media. Digital recording stores audio as a series of binary numbers representing samples of the amplitude of the audio signal at equal time intervals, at a sample rate high enough to convey all sounds capable of being heard. Digital recordings are considered higher quality than analog recordings, not necessarily because they have a wider frequency response or dynamic range, but because the digital format can prevent much of the loss of quality found in analog recording due to noise and electromagnetic interference in playback, and mechanical deterioration or damage to the storage medium. A digital audio signal must be reconverted to analog form during playback before it is applied to a loudspeaker or earphones.
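
The sampling described here can be sketched in a few lines: take the amplitude of a signal at equal time intervals and store each sample as a signed integer. The 440 Hz tone, 1 ms duration and 16-bit depth below are arbitrary example values (16-bit at 44.1 kHz happens to be the CD format):

```python
import math

def digitize(freq_hz, sample_rate=44100, duration_s=0.001, bits=16):
    """Sample a sine wave at fixed intervals and quantize each sample
    to a signed integer, as a sketch of digital recording."""
    max_int = 2 ** (bits - 1) - 1        # 32767 for 16-bit audio
    n = int(sample_rate * duration_s)
    return [round(max_int * math.sin(2 * math.pi * freq_hz * i / sample_rate))
            for i in range(n)]

samples = digitize(440)                  # 1 ms of a 440 Hz tone: 44 samples
print(len(samples), samples[:4])
```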

The first device that could record actual sounds as they passed through the air could not play them back; its purpose was only visual study, and it was called the phonautograph. The earliest known recordings of the human voice are phonautograph recordings, called “phonautograms”, made in 1857. They consist of sheets of paper with sound-wave-modulated white lines created by a vibrating stylus that cut through a coating of soot as the paper was passed under it.

The next major technical development was the invention of the gramophone disc. Discs were easier to manufacture, transport and store, and they had the additional benefit of being slightly louder than cylinders, which, by necessity, were single-sided. Discs were made of shellac or similar brittle plastic-like materials, played with needles made from a variety of materials including mild steel, thorn and even sapphire. Discs had a distinctly limited playing life, which was heavily dependent on how they were reproduced.

The earlier, purely acoustic methods of recording had limited sensitivity and frequency range. Mid-frequency range notes could be recorded but very low and very high frequencies could not. Instruments such as the violin transferred poorly to disc; however this was partially solved by retrofitting a conical horn to the sound box of the violin. The horn was no longer required once electrical recording was developed.

The long-playing 33⅓ rpm microgroove vinyl record, or “LP”, was developed at Columbia Records and introduced in 1948. The short-playing but convenient 7-inch 45 rpm microgroove vinyl single was introduced by RCA Victor in 1949. In the US and most developed countries, the two new vinyl formats completely replaced 78 rpm shellac discs by the end of the 1950s. Vinyl was much more expensive than shellac, one of several factors that made its use for 78 rpm records very unusual, but with a long-playing disc the added cost was acceptable, and the compact “45” format required very little material. Vinyl offered improved performance, both in stamping and in playback. If played with a good diamond stylus mounted in a lightweight pickup on a well-adjusted tonearm, it was long-lasting.

Electrical recording

Between the invention of the phonograph in 1877 and the advent of digital media, arguably the most important milestone in the history of sound recording was the introduction of what was then called “electrical recording”, in which a microphone was used to convert the sound into an electrical signal that was amplified and used to actuate the recording stylus. This innovation eliminated the “horn sound” resonances characteristic of the acoustical process, produced clearer and more full-bodied recordings by greatly extending the useful range of audio frequencies, and allowed previously unrecordable distant and feeble sounds to be captured.

Magnetic tape

Other important inventions of this period were magnetic tape and the tape recorder. Paper-based tape was used first but was soon replaced by polyester- and acetate-backed tape because of dropouts and hiss. Acetate was more brittle than polyester and snapped easily. This technology, the basis for almost all commercial recording from the 1950s to the 1980s, was invented by German audio engineers in the 1930s, who also discovered the technique of AC biasing, which dramatically improved the frequency response of tape recordings.

Magnetic tape allowed the radio industry for the first time to pre-record many sections of program content, such as advertising, which formerly had to be presented live, and it also enabled the creation and duplication of complex, high-fidelity, long-duration recordings of entire programs. Also, for the first time, broadcasters, regulators and other interested parties were able to undertake comprehensive logging of radio broadcasts. Innovations like multitracking and tape echo enabled radio programs and advertisements to be pre-produced to a level of complexity and sophistication that was previously unattainable, and the combined impact of these new techniques, along with innovations such as the endless-loop broadcast cartridge, led to significant changes in the pacing and production style of program content.

Stereo and hi-fi

In 1931 Alan Blumlein, a British electronics engineer working for EMI, designed a way to make the sound of an actor in a film follow his movement across the screen. In December 1931 he submitted a patent including the idea, and in 1933 this became UK patent number 394,325. Over the next two years, Blumlein developed stereo microphones and a stereo disc-cutting head, and recorded a number of short films with stereo soundtracks.

Magnetic tape enabled the development of the first practical commercial sound systems that could record and reproduce high-fidelity stereophonic sound. The experiments with stereo during the 1930s and 1940s were hampered by problems with synchronization. A major breakthrough in practical stereo sound was made by Bell Laboratories, who in 1937 demonstrated a practical system of two-channel stereo, using dual optical sound tracks on film. The first company to release commercial stereophonic tapes was EMI (UK); its first “Stereosonic” tape was issued in 1954. Others quickly followed, under both the His Master’s Voice and Columbia labels. In all, 161 Stereosonic tapes were released, most of them classical music or lyric recordings. These tapes were also imported into the USA by RCA.

Most pop singles were mixed into monophonic sound until the mid-1960s, and it was common for major pop releases to be issued in both mono and stereo until the early 1970s. Many Sixties pop albums now available only in stereo were originally intended to be released only in mono, and the so-called “stereo” versions of these albums were created by simply separating the two tracks of the master tape. In the mid-Sixties, as stereo became more popular, many mono recordings were remastered using the so-called “fake stereo” method, which spread the sound across the stereo field by directing higher-frequency sound into one channel and lower-frequency sounds into the other.

Digital recording

[Figure: graphical representation of a sound wave in analog (red) and 4-bit digital (black).]
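The 4-bit digital representation the figure refers to can be sketched in code as rounding each analog sample to one of 2⁴ = 16 discrete levels. This is an illustrative simplification (real converters add dithering and anti-aliasing); the function name and level mapping are assumptions for illustration:

```python
import math

def quantize(sample, bits=4):
    """Map an analog sample in [-1.0, 1.0] to the nearest of 2**bits levels."""
    levels = 2 ** bits                                  # 16 levels for 4-bit audio
    step = 2.0 / levels                                 # width of one quantization step
    index = min(levels - 1, max(0, int((sample + 1.0) / step)))
    return -1.0 + (index + 0.5) * step                  # centre of the chosen step

# Sample one cycle of a sine wave at 16 points and quantize it to 4 bits.
analog = [math.sin(2 * math.pi * n / 16) for n in range(16)]
digital = [quantize(s) for s in analog]
```

The staircase-shaped `digital` list is the black curve of the figure: the finer the bit depth, the closer it hugs the red analog curve.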

The advent of digital sound recording and, later, the compact disc in 1982 brought significant improvements in the durability of consumer recordings. The CD initiated another massive wave of change in the consumer music industry, with vinyl records effectively relegated to a small niche market by the mid-1990s. The introduction of digital systems was initially fiercely resisted by the record industry, which feared wholesale piracy on a medium able to produce perfect copies of original released recordings.

[Figure: a digital sound recorder from Sony.]

The most recent and revolutionary developments have been in digital recording, with the development of various uncompressed and compressed digital audio file formats, processors fast and capable enough to convert the digital data to sound in real time, and inexpensive mass storage. This generated a new type of portable digital audio player. As technologies that increase the amount of data that can be stored on a single medium, such as Super Audio CD, DVD-Audio, Blu-ray Disc and HD DVD, became available, longer programs of higher quality could fit onto a single disc. Sound files are readily downloaded from the Internet and other sources, and copied onto computers and digital audio players. Digital audio technology is used in all areas of audio, from casual use of music files of moderate quality to the most demanding professional applications. New applications such as internet radio and podcasting have appeared.

Technological developments in recording and editing have transformed the record, movie and television industries in recent decades. Audio editing became practicable with the invention of magnetic tape recording, but digital audio and cheap mass storage allow computers to edit audio files quickly, easily, and cheaply. Today, the process of making a recording is separated into tracking, mixing and mastering. Multitrack recording makes it possible to capture signals from several microphones, or from different ‘takes’, to tape or disc with maximized headroom and quality, allowing previously unavailable flexibility in the mixing and mastering stages for editing, level balancing, compressing and limiting, and adding effects such as reverb, equalisation and flanging, to name a few. There are many different digital audio recording and processing programs running under computer operating systems for all purposes, from professional through serious amateur to casual user.

Digital audio workstation (DAW)

This is an electronic system designed solely or primarily for recording, editing and playing back digital audio. DAWs were originally tapeless, microprocessor-based systems. Modern DAWs are software running on computers with audio interface hardware.

Integrated DAW

An integrated DAW consists of a mixing console, control surface, audio converter, and data storage in one device. Integrated DAWs were more popular before personal computers became powerful enough to run DAW software. As computer power increased and price decreased, the popularity of the costly integrated systems with console automation dropped. Today, some systems still offer computer-less arranging and recording features with a full graphical user interface (GUI).

Software DAW

A computer-based DAW has four basic components: a computer, a sound card (also called a sound converter or audio interface), digital audio editor software, and at least one input device for adding or modifying musical note data. This could be as simple as a mouse or as sophisticated as a MIDI controller keyboard or an automated fader board for mixing track volumes. The computer acts as a host for the sound card and software and provides processing power for audio editing. The sound card or external audio interface typically converts analog audio signals into digital form, and converts digital back to analog for playback; it may also assist in further processing the audio. The software controls all related hardware components and provides a user interface to allow for recording, editing, and playback. Most computer-based DAWs have extensive MIDI recording, editing, and playback capabilities, and some even have minor video-related features.

As software systems, DAWs could be designed with any user interface, but generally they are based on a multitrack tape recorder metaphor, making it easier for recording engineers and musicians already familiar with tape recorders to become familiar with the new systems. Therefore, computer-based DAWs tend to have a standard layout which includes transport controls (play, rewind, record and so on), track controls and/or a mixer, and a waveform display. In single-track DAWs, only one mono or stereo sound is displayed at a time.

Multitrack DAWs support operations on multiple tracks at once. Like a mixing console, each track typically has controls that allow the user to adjust the overall volume and stereo balance (pan) of the sound on each track. In a traditional recording studio, additional processing is physically plugged into the audio signal path; a DAW, however, can also route the signal in software or use software plugins (such as VSTs) to process the sound on a track.

Perhaps the most significant feature available from a DAW that is not available in analogue recording is the ability to ‘undo’ a previous action. Undo makes it much easier to avoid accidentally and permanently erasing or recording over a previous recording. If a mistake is made, the undo command is used to conveniently revert the changed data to a previous state. Cut, Copy, Paste, and Undo are familiar and common computer commands and are usually available in DAWs in some form. Other common functions include modifying several properties of a sound, including wave shape, pitch, tempo, and filtering.
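The undo idea can be sketched as a stack of state snapshots: each destructive edit pushes the previous state, and undo pops it back. This is a minimal illustration (real DAWs typically use more memory-efficient schemes such as command histories); the class and names here are hypothetical:

```python
class UndoHistory:
    """Minimal undo stack: each edit saves a snapshot of the previous state."""

    def __init__(self, state):
        self.state = state
        self._past = []          # stack of earlier states

    def edit(self, new_state):
        self._past.append(self.state)   # remember what we are about to overwrite
        self.state = new_state

    def undo(self):
        if self._past:
            self.state = self._past.pop()

h = UndoHistory([0.0, 0.5, 1.0])   # imagine these are audio samples
h.edit([0.0, 0.0, 0.0])            # a destructive "erase" operation
h.undo()                           # the original samples come back
```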

DAWs commonly feature some form of automation, often performed through “envelopes”. Envelopes are procedural line-segment-based or curve-based interactive graphs. The lines and curves of the automation graph are joined by, or comprise, adjustable points. By creating and adjusting multiple points along a waveform or control events, the user can specify parameters of the output over time. MIDI recording, editing, and playback are increasingly incorporated into modern DAWs of all types, as is synchronization with other audio and/or video tools.
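A line-segment envelope of this kind can be evaluated by linearly interpolating between adjacent points. The function name and point format below are assumptions for illustration, not any particular DAW's API:

```python
def envelope_value(points, t):
    """Linearly interpolate an automation envelope at time t.

    points: list of (time, value) pairs sorted by time.
    """
    if t <= points[0][0]:
        return points[0][1]          # before the first point: hold its value
    if t >= points[-1][0]:
        return points[-1][1]         # after the last point: hold its value
    for (t0, v0), (t1, v1) in zip(points, points[1:]):
        if t0 <= t <= t1:
            frac = (t - t0) / (t1 - t0)
            return v0 + frac * (v1 - v0)

# A volume fade-out: full level until 2.0 s, silent by 4.0 s.
fade = [(0.0, 1.0), (2.0, 1.0), (4.0, 0.0)]
```

Sampling this envelope once per audio block (or per sample) yields the smoothly changing gain the user drew on screen.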

There are countless plugins for modern DAW software, each with its own unique functionality, expanding the overall variety of sounds and manipulations that are possible. Their functions include distortion, resonators, equalizers, synthesizers, compressors, chorus, virtual amps, limiters, phasers, and flangers. Each has its own way of manipulating the waveform, tone, pitch, or speed of a simple sound and transforming it into something different. To achieve an even more distinctive sound, multiple plugins can be used in layers, and further automated, to manipulate the original sound and mould it into a completely new sample.

Sound Cards

A sound card, also known as an audio card, is an internal computer expansion card that facilitates the input and output of audio signals to and from a computer under the control of computer programs. The term sound card is also applied to external audio interfaces that use software to generate sound, as opposed to using hardware inside the PC. Typical uses of sound cards include providing the audio component for multimedia applications such as music composition, editing video or audio, presentation, education and entertainment, and video projection.

Most sound cards use a digital-to-analog converter (DAC), which converts recorded or generated digital data into an analog format. The output signal is connected to an amplifier, headphones, or external device using standard interconnects, such as an RCA connector. More advanced cards usually include more than one sound chip to support higher data rates and multiple simultaneous functionality, for example digital production of synthesized sounds, usually for real-time generation of music and sound effects using minimal data and CPU time.

Digital sound reproduction is usually done with multichannel DACs, which are capable of playing multiple digital samples simultaneously at different pitches and volumes, and of applying real-time effects such as filtering or deliberate distortion. Multichannel digital sound playback can also be used for music synthesis when used with a compliant driver, and even for multiple-channel emulation. Most sound cards have a line-in connector for an input signal from a cassette tape or other sound source that has higher voltage levels than a microphone. The sound card digitizes this signal. The ADC transfers the samples to main memory, from where recording software may write them to the hard disk for storage, editing, or further processing. Another common external connector is the microphone connector, for signals from a microphone or other low-level input device. Input through a microphone jack can be used, for example, by speech recognition or voice-over-IP applications.

An important sound card characteristic is polyphony, which refers to its ability to process and output multiple independent voices or sounds simultaneously. These distinct channels are seen as the number of audio outputs, which may correspond to a speaker configuration. Sometimes, the terms voice and channel are used interchangeably to indicate the degree of polyphony, not the output speaker configuration.

For some years, most PC sound cards have had multiple FM synthesis voices, typically 9 or 16, which were usually used for MIDI music. The full capabilities of advanced cards are often not fully used; only one (mono) or two (stereo) voices and channels are usually dedicated to playback of digital sound samples, and playing back more than one digital sound sample usually requires a software downmix at a fixed sampling rate. Modern low-cost integrated sound cards (audio codecs such as those meeting the AC’97 standard) and even some lower-cost expansion sound cards still work this way. These devices may provide more than two sound output channels, typically 5.1 or 7.1 surround sound, but they usually have no actual hardware polyphony for either sound effects or MIDI reproduction – these tasks are performed entirely in software.
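The software downmix mentioned above amounts to summing the sample streams and clipping the result to the legal range. The following is an illustrative simplification (real mixers also resample, dither, and manage headroom); the function name and gains are assumptions:

```python
def downmix(tracks, gains=None):
    """Sum several equal-length sample lists into one, clipping to [-1.0, 1.0]."""
    if gains is None:
        gains = [1.0] * len(tracks)
    mixed = []
    for frame in zip(*tracks):                      # one sample from each track
        s = sum(g * x for g, x in zip(gains, frame))
        mixed.append(max(-1.0, min(1.0, s)))        # hard-clip the summed sample
    return mixed

voice_a = [0.5, 0.5, -0.5]
voice_b = [0.5, 0.8, -0.9]
mix = downmix([voice_a, voice_b])   # second and third samples get clipped
```

Clipping is why mixing many full-scale voices in software needs gain reduction: without it the summed signal distorts.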

Since digital sound playback has become available and provides better performance than synthesis, modern sound cards with hardware polyphony do not actually use DACs with as many channels as voices; instead, they perform voice mixing and effects processing in hardware, sometimes performing digital filtering and conversions to and from the frequency domain for certain effects, inside a dedicated DSP. The final playback stage is performed by an external DAC with significantly fewer channels than voices.

Professional sound cards are sound cards optimized for low-latency multichannel sound recording and playback, including studio-grade fidelity. Their drivers usually follow the Audio Stream Input/Output (ASIO) protocol for use with professional sound engineering and music software, although ASIO drivers are also available for a range of consumer-grade sound cards.

Professional soundcards are usually described as “audio interfaces”, and sometimes have the form of external rack-mountable units using USB, FireWire, or an optical interface, to offer sufficient data rates. The emphasis in these products is, in general, on multiple input and output connectors, direct hardware support for multiple input and output sound channels, as well as higher sampling rates and fidelity as compared to the usual consumer soundcard. In that respect, their role and intended purpose is more similar to a specialized multi-channel data recorder and real-time audio mixer and processor, roles which are possible only to a limited degree with typical consumer soundcards.

In general, consumer-grade sound cards impose several restrictions and inconveniences that would be unacceptable to an audio professional. One of a modern sound card’s purposes is to provide an AD/DA (analog-to-digital/digital-to-analog) converter. However, in professional applications there is usually a need for enhanced recording (analog-to-digital) capabilities. One of the limitations of consumer sound cards is their comparatively large sampling latency: the time it takes for the AD converter to complete conversion of a sound sample and transfer it to the computer’s main memory.

USB sound cards

USB sound “cards”, sometimes called “audio interfaces”, are usually external boxes that plug into the computer via USB. A USB audio interface may also describe a device that allows a computer which has a sound card but lacks a standard audio socket to be connected, via its USB socket, to an external device that requires such a socket.

The USB specification defines a standard interface, the USB audio device class, allowing a single driver to work with the various USB sound devices and interfaces on the market. Even cards meeting the older, slower USB 1.1 specification are capable of high-quality sound, albeit with a limited number of channels or limited sampling frequency or bit depth, but USB 2.0 or later is more capable.

The main function of a sound card is to play audio, usually music, in varying formats: monophonic, stereophonic, or various multiple-speaker setups, with differing degrees of control. The source may be a CD or DVD, a file, streamed audio, or any external source connected to a sound card input. Audio may also be recorded, although sometimes sound card hardware and drivers do not support recording a source that is being played. A card can also be used, in conjunction with software, to generate arbitrary waveforms, acting as an audio-frequency function generator.
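As a sketch of the function-generator idea, the following uses Python's standard wave module to render a sine tone to a WAV file that a sound card can then play; the file name and parameters are arbitrary choices for illustration:

```python
import math
import struct
import wave

def write_sine(path, freq=440.0, seconds=1.0, rate=44100):
    """Write a 16-bit mono WAV file containing a sine tone."""
    n = int(seconds * rate)
    frames = b"".join(
        struct.pack("<h", int(32767 * math.sin(2 * math.pi * freq * i / rate)))
        for i in range(n))
    with wave.open(path, "wb") as w:
        w.setnchannels(1)        # mono
        w.setsampwidth(2)        # 16-bit samples
        w.setframerate(rate)
        w.writeframes(frames)

write_sine("tone_440hz.wav")     # one second of concert A (440 Hz)
```

Swapping the sine expression for a square, sawtooth, or swept-frequency formula turns the same sketch into other test waveforms.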

A card can be used, again in conjunction with free or commercial software, to analyse input waveforms. For example, a very-low-distortion sinewave oscillator can be used as input to equipment under test; the output is sent to a sound card’s line input and run through Fourier transform software to find the amplitude of each harmonic of the added distortion. Alternatively, a less pure signal source may be used, with circuitry to subtract the input from the output, attenuated and phase-corrected; the result is distortion and noise only, which can be analysed. There are programs which allow a sound card to be used as an audio-frequency oscilloscope.
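The harmonic-measurement idea can be sketched with a single-bin discrete Fourier transform: correlate the captured samples against a complex exponential at the frequency of interest and read off the amplitude. This is a simplified stand-in for the Fourier-transform software mentioned above; names and signal values are illustrative:

```python
import cmath
import math

def harmonic_amplitude(samples, rate, freq):
    """Amplitude of one frequency component, via a single-bin DFT."""
    n = len(samples)
    acc = sum(s * cmath.exp(-2j * math.pi * freq * i / rate)
              for i, s in enumerate(samples))
    return 2 * abs(acc) / n

rate = 8000
# A 100 Hz fundamental plus a small 3rd-harmonic (300 Hz) distortion product,
# standing in for the output of equipment under test.
sig = [math.sin(2 * math.pi * 100 * i / rate) +
       0.1 * math.sin(2 * math.pi * 300 * i / rate)
       for i in range(rate)]
```

Evaluating `harmonic_amplitude` at each harmonic of the test frequency reveals the distortion the equipment added; in practice an FFT computes all bins at once.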