Thursday, 12 February 2015

Surprising Success With The Waldorf

I'll be honest, today I tried and failed to synthesise Well Tempered Clavier part 2. But, as a consolation prize, I got a lovely deep 'cello'.


Now, instruments of the violin family cannot be synthesised in any realistic way; they are just too complex (if someone does do it, they are probably using samples directly - convolution with a sample also works). However, one can (I believe) capture their essence using first-principles synthesis. The key ingredients are:

  1. Body resonance - if you think you have too much - you probably don't.
  2. Pitch variation due to bowing (as the string is stretched by the bow, its pitch varies).
  3. A shocking rattle at the top end which somehow works because it is unstable.
  4. A shimmering stereo field caused by the sound being sent in all directions by the body of the instrument.
  5. A strong variation of the notes as they move in and out of resonance with the body.
  6. A very carefully tailored envelope.
  7. Vibrato and tremolo which are not just slapped on but move and are subtle (see the sketch after this list).
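
As a sketch of that last ingredient, here is roughly how a 'moving' vibrato and tremolo might be patched using the Sonic Field operators which appear in the posts below (the rates and depths are illustrative guesses, not the values used in the piece):

def movingVibratoTremolo(signal, length):
    # Two slightly detuned LFOs mixed together so the vibrato rate itself
    # drifts rather than being a fixed, 'slapped on' sine.
    lfo = sf.Mix(
        sf.PhasedSineWave(length, 5.1, random.random()),
        sf.PhasedSineWave(length, 4.9, random.random())
    )
    # Around a millisecond of peak time shift gives a gentle pitch wobble.
    vib = sf.TimeShift(signal, sf.NumericVolume(lfo, 0.5))
    # Tremolo: a shallow (5%) amplitude wobble around unity gain.
    trm = sf.PhasedSineWave(length, 5.0, random.random())
    trm = sf.DirectMix(1, sf.NumericVolume(trm, 0.05))
    return sf.Multiply(vib, trm)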

This is the Sonic Field blog so which of these came from the Waldorf Pulse 2 and which from Sonic Field?

2 and 3 were definitely aided by Sonic Field. The reverb had a chunk of excitation to it which built on top of the chorusing. This, along with the alive nature of the synth (being analogue), helped create this piece. It is not a cello, but people tell me it sounds nice :)


Tuesday, 10 February 2015

Waldorf Pulse 2: A Bit Of Random - A Lot Of Fun

Right now I am completely loving working with the Waldorf - the harpsichord sound it can make is beyond belief. 

A regular, old-fashioned ring-modulated synth-harpsichord is all very well; but the pulse width modulated effect which is possible with the Waldorf is something else again. The raw sound from the little synth is a bit rough and a bit electronic, but when passed through a touch of harmonic excitation and reverb in Sonic Field the result is stunning. Now, I might be blowing my own trumpet, but I can honestly say everyone who has listened to this live has praised the sound:


To be completely honest, I was a bit lucky. I just tried adding a bit of spring reverberation (using a spring impulse response) and I think that was the final trick to make the sound come to life. The reverb you can hear is a mixture of a few room/hall impulse-response reverbs mixed with a bit of spring.
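
In patch terms, the trick is just to mix a spring impulse response into the room impulse responses before convolving; a minimal sketch (the full reverb patch appears in the analogue synth post below):

(convoll, convolr) = sf.ReadFile("temp/Vocal-Chamber-L.wav")
(convorl, convorr) = sf.ReadFile("temp/Vocal-Chamber-R.wav")
spring = sf.ReadFile("temp/classic-fs2a.wav")[0]
# Blend the spring into the outer channels, inverted on one side to widen it.
convoll = sf.Mix(convoll, +spring)
convorr = sf.Mix(convorr, sf.Invert(spring))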

But the true secret is the way the signal path of a true analogue synth works. The sounds are all coupled and constantly changing. An electronic audio circuit 'wants' to make audio because the values of the components are set up that way. The circuits in an analogue synth then interact with one another in musical ways. This is distinctly different from pure digital synthesis, where nice sounding audio is something one has to force from the algorithms. I am enjoying the mix where the analogue makes amazing source material for digital post processing.

Sunday, 8 February 2015

Spatial Chorus

I have been working on the chorus effect.

By introducing a new operator into Sonic Field, it has been possible to produce a very stable chorus effect. One could consider chorus to be an FM effect. However, the numeric stability of FM is very poor for sampled sounds (or so I have found). What I have come up with instead is a time shift. Rather than altering the sample rate based on a signal, I shift the position from which each sample is read. Thus, minute errors do not accumulate as they do in FM.

The pitch shift then becomes the first differential of the time shift. In chorusing I make the time shift a sine wave, so the pitch shift is also a sine wave (or rather a cosine, if you want to be pedantic).
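
To see that relationship concretely, here is a tiny stand-alone check in plain Python (the depth and rate values are purely illustrative):

import math

d, f = 0.005, 2.0        # 5 ms depth, 2 Hz LFO - illustrative values only

def pos(t):
    # read position for a sinusoidal time shift s(t) = d*sin(2*pi*f*t)
    return t + d * math.sin(2.0 * math.pi * f * t)

dt = 1e-6
for t in (0.0, 0.125, 0.25):
    numeric = (pos(t + dt) - pos(t)) / dt                         # d(pos)/dt
    exact   = 1.0 + 2.0 * math.pi * f * d * math.cos(2.0 * math.pi * f * t)
    print("t=%5.3f  numeric=%.6f  exact=%.6f" % (t, numeric, exact))

The numeric derivative of the read position matches 1 + 2*pi*f*d*cos(2*pi*f*t): the pitch ratio wobbles as a cosine around unity, exactly as described.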

Here is the new operator:


/* For Copyright and License see LICENSE.txt and COPYING.txt in the root directory */
package com.nerdscentral.audio.time;

import java.util.List;

import com.nerdscentral.audio.SFConstants;
import com.nerdscentral.audio.SFSignal;
import com.nerdscentral.sython.Caster;
import com.nerdscentral.sython.SFPL_Context;
import com.nerdscentral.sython.SFPL_Operator;
import com.nerdscentral.sython.SFPL_RuntimeException;

public class SF_TimeShift implements SFPL_Operator
{

    private static final long serialVersionUID = 1L;

    @Override
    public Object Interpret(final Object input, final SFPL_Context context) throws SFPL_RuntimeException
    {
        List<Object> lin = Caster.makeBunch(input);
        try (SFSignal in = Caster.makeSFSignal(lin.get(0)); SFSignal shift = Caster.makeSFSignal(lin.get(1)))
        {
            try (SFSignal y = in.replicateEmpty())
            {

                int length = y.getLength();
                if (shift.getLength() < length) length = shift.getLength();
                // Offset each read position by the shift signal (in milliseconds)
                // and read the input with cubic interpolation.
                for (int index = 0; index < length; ++index)
                {
                    double pos = index + SFConstants.SAMPLE_RATE_MS * shift.getSample(index);
                    y.setSample(index, in.getSampleCubic(pos));
                }
                // Beyond the end of the shift signal, pass the input through unshifted.
                length = y.getLength();
                for (int index = shift.getLength(); index < length; ++index)
                {
                    y.setSample(index, in.getSample(index));
                }
                return Caster.prep4Ret(y);
            }
        }
    }

    @Override
    public String Word()
    {
        return Messages.getString("SF_TimeShift.0"); //$NON-NLS-1$
    }

}
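
The interesting call above is in.getSampleCubic, which reads between sample positions. I have not shown that code; as an illustration only (not necessarily the exact curve SFSignal uses), a classic four-point Catmull-Rom cubic interpolator looks like this in Python:

def cubicInterpolate(samples, pos):
    # Interpolate at a fractional position using the four surrounding samples.
    i  = int(pos)
    mu = pos - i
    def s(n):
        # clamp so reads off either end repeat the edge sample
        return samples[min(max(n, 0), len(samples) - 1)]
    y0, y1, y2, y3 = s(i - 1), s(i), s(i + 1), s(i + 2)
    a0 = -0.5 * y0 + 1.5 * y1 - 1.5 * y2 + 0.5 * y3
    a1 =        y0 - 2.5 * y1 + 2.0 * y2 - 0.5 * y3
    a2 = -0.5 * y0             + 0.5 * y2
    return ((a0 * mu + a1) * mu + a2) * mu + y1

Any smooth interpolator would do; the point is that reading 'between' samples is what turns a fractional time shift into a clean signal.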


Here is a piece demonstrating the effect:

And here is a chorus patch which was used:


sf.SetSampleRate(96000)
if not 'random' in dir():
    from java.lang import System
    import random
    random.seed(System.currentTimeMillis())

def bandRand(min,max):
    # Average two uniform randoms: a triangular distribution which favours
    # the middle of the [min,max] band.
    min=float(min)
    max=float(max)
    r1=random.random()
    r2=random.random()
    r=float(r1+r2)*0.5
    r=r*(max-min)
    r=r+min
    return r

def chorus(
    left,
    right,
    minDepth = 10.0,
    maxDepth = 50.0,
    maxRate  =  0.1,
    minRate  =  0.05,
    nChorus  =  4.0,
    minVol   =  0.7,
    maxVol   =  1.0):    
    def inner(signal_):
        def inner_():
            signal=sf.Clean(signal_)
            sigs=[]
            l=sf.Length(+signal)
            for inst in range(0,int(nChorus)):
                def in_inner():
                    print "Do"
                    lfo=sf.PhasedSineWave(l,bandRand(minRate,maxRate),random.random())
                    lfo=sf.NumericVolume(lfo,bandRand(minDepth,maxDepth))
                    nsg=sf.TimeShift(+signal,lfo)
                    lfo=sf.PhasedSineWave(l,bandRand(minRate,maxRate),random.random())
                    lfo=sf.NumericVolume(lfo,bandRand(minVol,maxVol))
                    lfo=sf.DirectMix(1,lfo)
                    nsg=sf.Multiply(lfo,nsg)
                    print "Done"
                    return sf.Finalise(nsg)
                sigs.append(sf_do(in_inner))
            ret=sf.Finalise(sf.Mix(sigs))
            -signal
            return ret
        return sf_do(inner_)
    
    return inner(left),inner(right)
    
(left,right)=sf.ReadFile("temp/a.wav")
left,right=chorus(
    left,
    right,
    minDepth =  0.0,
    maxDepth = 10.0,
    minVol   =  1.0,
    maxVol   =  1.0,
    nChorus  =  9.0)

left,right=chorus(left,right)
sf.WriteFile32((left,right),"temp/c.wav")

Tuesday, 3 February 2015

Working With An Analogue Synth

The Waldorf Pulse 2 analogue synthesiser

Most of my work in Sonic Field has been using the built-in synth abilities of the program. But there is no reason it should not drive an external synth and post process the signal.

I recently bought a Pulse 2 and it is quite amazing. However, it is also a mono synth. I am completely spoilt generating sounds with Sonic Field as it has no upper limit to the number of notes which can be generated at once. Whilst the mono synth sound has its place, it is also rather limited. So, I needed a solution to give multi-tracking.

The existing midi implementation in SF was just pathetic. I completely ripped it out and pretty much started over. The only piece remaining is the code which maps midi on/off messages into notes and disambiguates overlapping messages on the same track/channel/key combination.
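
The pairing logic is worth a sketch. In outline (hypothetical event dictionaries, not the real Sonic Field structures) it amounts to matching each note-off with the oldest open note-on for the same track/channel/key:

def pairNotes(events):
    # events are assumed pre-filtered to note on/off messages
    open_notes = {}   # (track, channel, key) -> pending note-ons, oldest first
    notes = []
    for ev in events:
        k = (ev['track'], ev['channel'], ev['key'])
        if ev['on'] and ev['velocity'] > 0:
            open_notes.setdefault(k, []).append(ev)
        elif open_notes.get(k):
            start = open_notes[k].pop(0)   # oldest first disambiguates overlaps
            notes.append({'key': ev['key'],
                          'start': start['tick'],
                          'end': ev['tick']})
    return notes

(A note-on with velocity zero counts as a note-off, hence the velocity test.)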

I should go into great detail about how it all works, but I am exhausted after a long day working and an evening making music, so here is a dump of the patch I used to drive the synthesiser over midi. Yes - I drove the synth from Sonic Field directly!



from com.nerdscentral.audio.midi import MidiFunctions

class Midi(MidiFunctions):
    metaTypes={
            0x00:'SequenceNumber',
            0x01:'text',
            0x02:'copyright',
            0x03:'track_name',
            0x04:'instrument',
            0x05:'lyrics',
            0x06:'marker',
            0x07:'cue',
            0x20:'channel',
            0x2F:'end',
            0x51:'tempo',
            0x54:'smpte_offset',
            0x58:'time_signature',
            0x59:'key_signature',
            0x7f:'sequencer_specific'
        }
        
    timeTypes={
        0.0:  'PPQ',
        24.0: 'SMPTE_24',
        25.0: 'SMPTE_25',
        29.97:'SMPTE_30DROP',
        30.0: 'SMPTE_30'
    }
     
    @staticmethod
    def timeType(sequence):
        return Midi.timeTypes[sequence.getDivisionType()]

    @staticmethod
    def isNote(event):
        return event['command']=='note'

    @staticmethod
    def isMeta(event):
        return event['command']=='meta'

    @staticmethod
    def isCommand(event):
        return event['command']=='command'
        
    @staticmethod
    def isTempo(event):
        Midi.checkMeta(event)
        return event['type']==0x51

    @staticmethod
    def isTimeSignature(event):
        Midi.checkMeta(event)
        return event['type']==0x58

    @staticmethod
    def metaType(event):
        t=event['type']
        if t in Midi.metaTypes:
            return Midi.metaTypes[t]
        return 'unknown'

    @staticmethod
    def checkMeta(event):
        if not event['command']=='meta':
            raise Exception('Not meta message')

    @staticmethod
    def tempo(event):
        Midi.checkMeta(event)
        if event['type']!=0x51:
            raise Exception('not tempo message')
        data=event['data']
        if len(data)==0:
            raise Exception('no data')
        t=0
        for i in range(0,len(data)):
            if not i==0:
                t <<= 8
            t+=data[i]
        return t

    @staticmethod
    def timeSignature(event):
        Midi.checkMeta(event)
        if event['type']!=0x58:
            raise Exception('not time signature message')
        data=event['data']
        if not len(data)==4:
            raise Exception('wrong data')
        return {
            'numerator'  :data[0],
            'denominator':2**data[1],
            'metronome'  :data[2],
            '32nds/beat' :data[3]
        }
        
    @staticmethod
    def tickLength(denominator,microPerQuater,sequence):
        # if denom = 4 then 1 beat per quarter note
        # if denom = 8 then 2 beats per quarter note
        # therefore beats per quarter note = denom/4
        beatsPerQuaterNote = denominator/4.0
        ticksPerBeat       = float(sequence.getResolution())
        microsPerBeat      = float(microPerQuater)/beatsPerQuaterNote
        return microsPerBeat/float(ticksPerBeat)

sequence=Midi.readMidiFile("temp/passac.mid")

print 'Sequence Time  Type:', Midi.timeType(sequence)
print 'Sequence Resolution:', sequence.getResolution()
print 'Initial tick length:',Midi.tickLength(4,500000,sequence)
otl=Midi.tickLength(4,500000,sequence)

midis=Midi.processSequence(sequence)

sout=Midi.blankSequence(sequence)

# Create the timing information track
tout=sout.createTrack()
for event in midis[0]:
    if Midi.isMeta(event):
        if Midi.isTempo(event) or Midi.isTimeSignature(event):
            tout.add(event['event'])

tout1=sout.createTrack()
tout2=sout.createTrack()
midi1=[]
midi2=[]
flip=True
minKey=999
maxKey=0

# Use 499 for 1 Done
# Use 496 for 2
# Use 497 for 3
# Use 497 for 4
# Use 001 for 5 Done
# Use 002 for 6
midiNo=6

# First pass: find the range of keys; this is used below to spread the pan.
for event in midis[midiNo]:
    if Midi.isNote(event):
        if event['key']>maxKey:
            maxKey=event['key']
        if event['key']<minKey:
            minKey=event['key']

for event in midis[midiNo]:
    if Midi.isNote(event):
        ev1=event['event']
        ev2=event['event-off']
        ev1.setTick(ev1.getTick()+600)
        ev2.setTick(ev2.getTick()+600)
        key=event['key']
        pan=127.0*float(key-minKey)/float(maxKey-minKey)
        pan=31+pan/2
        pan=int(pan)
        pan=Midi.makePan(1,ev1.getTick()-1,pan)
        if flip:
            midi1.append(pan)
            midi1.append(event['event'])
            midi1.append(event['event-off'])
            flip=False
        else:
            midi2.append(pan)
            midi2.append(event['event'])
            midi2.append(event['event-off'])
            flip=True

Midi.addPan(tout1,1,100,64)
Midi.addPan(tout2,2,100,64)

Midi.addNote(tout1,1,100,120,50,100)
Midi.addNote(tout2,2,100,120,50,100)
        
midi1=sorted(midi1,key=lambda event: event.getTick())
midi2=sorted(midi2,key=lambda event: event.getTick())

for event in midi1:
    Midi.setChannel(event,1)
    tout1.add(event)
#for event in midi2:
#    Midi.setChannel(event,2)
#    tout2.add(event)

Midi.writeMidiFile("temp/temp.midi",sout)

for dev in Midi.getMidiDeviceNames():
    print dev

player=Midi.getPlayer(3,2)
player.manual(sout)
player.waitFor()

And here is the post processing patch. I took each separately recorded voice from the synth and mixed them together in Audacity, using the note I injected at a known point at the start of each to line them up. Once the mix sounded OK, I post processed with this patch:

def reverbInner(signal,convol,grainLength):
    def rii():
        mag=sf.Magnitude(+signal)
        if mag>0:
            signal_=sf.Concatenate(signal,sf.Silence(grainLength))
            signal_=sf.FrequencyDomain(signal_)
            signal_=sf.CrossMultiply(convol,signal_)
            signal_=sf.TimeDomain(signal_)
            newMag=sf.Magnitude(+signal_)
            if newMag>0:
                signal_=sf.NumericVolume(signal_,mag/newMag)        
                # tail out clicks due to amplitude at end of signal 
                return sf.Realise(signal_)
            else:
                return sf.Silence(sf.Length(signal_))
        else:
            -convol
            return signal
    return sf_do(rii)
            
def reverberate(signal,convol):
    def revi():
        grainLength = sf.Length(+convol)
        convol_=sf.FrequencyDomain(sf.Concatenate(convol,sf.Silence(grainLength)))
        signal_=sf.Concatenate(signal,sf.Silence(grainLength))
        out=[]
        for grain in sf.Granulate(signal_,grainLength):
            (signal_i,at)=grain
            out.append((reverbInner(signal_i,+convol_,grainLength),at))
        -convol_
        return sf.Clean(sf.FixSize(sf.MixAt(out)))
    return sf_do(revi)

def excite(sig_,mix,power):
    def exciteInner():
        sig=sig_
        m=sf.Magnitude(+sig)
        sigh=sf.BesselHighPass(+sig,500,2)
        mh=sf.Magnitude(+sigh)
        sigh=sf.Power(sigh,power)
        sigh=sf.Clean(sigh)
        sigh=sf.BesselHighPass(sigh,1000,2)
        nh=sf.Magnitude(+sigh)
        sigh=sf.NumericVolume(sigh,mh/nh)
        sig=sf.Mix(sf.NumericVolume(sigh,mix),sf.NumericVolume(sig,1.0-mix))
        n=sf.Magnitude(+sig)
        return sf.Realise(sf.NumericVolume(sig,m/n))
    return sf_do(exciteInner)

####################################
#
# Load the file and clean
#
####################################

(left,right)=sf.ReadFile("temp/pulse-passa-2.wav")

left =sf.Multiply(sf.NumericShape((0,0),(64,1),(sf.Length(+left ),1)),left )
right=sf.Multiply(sf.NumericShape((0,0),(64,1),(sf.Length(+right),1)),right)

left =sf.Concatenate(sf.Silence(1024),left)
right=sf.Concatenate(sf.Silence(1024),right)


####################################
#
# Room Size And Nature Controls
#
####################################

bright  = True
vBright = False
church  = False
ambient = False
post    = True
spring  = False
bboost  = False
  
if ambient:  
    (convoll,convolr)=sf.ReadFile("temp/v-grand-l.wav")
    (convorl,convorr)=sf.ReadFile("temp/v-grand-r.wav")
elif church:    
    (convoll,convolr)=sf.ReadFile("temp/bh-l.wav")
    (convorl,convorr)=sf.ReadFile("temp/bh-r.wav")
else:
    (convoll,convolr)=sf.ReadFile("temp/Vocal-Chamber-L.wav")
    (convorl,convorr)=sf.ReadFile("temp/Vocal-Chamber-R.wav")

if spring:
    spring=sf.ReadFile("temp/classic-fs2a.wav")[0]
    convoll=sf.Mix(
        convoll,
        +spring
    )
    
    convorr=sf.Mix(
        convorr,
        sf.Invert(spring)
    )

if bboost:
    left =sf.RBJLowShelf(left,256,1,6)
    right=sf.RBJLowShelf(right,256,1,6)
    
convoll=excite(convoll,0.75,2.0)
convolr=excite(convolr,0.75,2.0)
convorl=excite(convorl,0.75,2.0)
convorr=excite(convorr,0.75,2.0)

ll  = reverberate(+left ,convoll)
lr  = reverberate(+left ,convolr)
rl  = reverberate(+right,convorl)
rr  = reverberate(+right,convorr)
wleft =sf.FixSize(sf.Mix(ll,rl))
wright=sf.FixSize(sf.Mix(rr,lr))

wright = excite(wright,0.15,1.11)
wleft  = excite(wleft ,0.15,1.11)

if bright:
    right  = excite(right,0.15,1.05)
    left   = excite(left ,0.15,1.05)
if vBright:
    right  = excite(right,0.25,1.15)
    left   = excite(left ,0.25,1.15)

sf.WriteFile32((sf.FixSize(+wleft),sf.FixSize(+wright)),"temp/wet.wav")

wleft =sf.FixSize(sf.Mix(sf.Pcnt15(+left),sf.Pcnt85(wleft)))
wright =sf.FixSize(sf.Mix(sf.Pcnt15(+right),sf.Pcnt85(wright)))

sf.WriteFile32((+wleft,+wright),"temp/mix.wav")

if ambient:
    (convoll,convolr)=sf.ReadFile("temp/ultra-l.wav")
    (convorl,convorr)=sf.ReadFile("temp/ultra-r.wav")
elif church:
    (convoll,convolr)=sf.ReadFile("temp/v-grand-l.wav")
    (convorl,convorr)=sf.ReadFile("temp/v-grand-r.wav")
else:
    (convoll,convolr)=sf.ReadFile("temp/bh-l.wav")
    (convorl,convorr)=sf.ReadFile("temp/bh-r.wav")

left  = sf.BesselLowPass(left  ,392,1)
right = sf.BesselLowPass(right,392,1)
ll  = reverberate(+left ,convoll)
lr  = reverberate( left ,convolr)
rl  = reverberate(+right,convorl)
rr  = reverberate( right,convorr)
vwleft =sf.FixSize(sf.Mix(ll,rl))
vwright=sf.FixSize(sf.Mix(rr,lr))
sf.WriteFile32((sf.FixSize(+vwleft),sf.FixSize(+vwright)),"temp/vwet.wav")
wleft =sf.FixSize(sf.Mix(wleft ,sf.Pcnt20(vwleft )))
wright=sf.FixSize(sf.Mix(wright,sf.Pcnt20(vwright)))
sf.WriteSignal(+wleft ,"temp/grand-l.sig")
sf.WriteSignal(+wright,"temp/grand-r.sig")
wleft  = sf.Normalise(wleft)
wright = sf.Normalise(wright)
sf.WriteFile32((wleft,wright),"temp/grand.wav")

if post:
    print "Warming"
    
    left  = sf.ReadSignal("temp/grand-l.sig")
    right = sf.ReadSignal("temp/grand-r.sig")
    
    def highDamp(sig,freq,fact):
        hfq=sf.BesselHighPass(+sig,freq,4)
        ctr=sf.FixSize(sf.Follow(sf.FixSize(+hfq),0.25,0.5))
        ctr=sf.Clean(ctr)
        ctr=sf.RBJLowPass(ctr,8,1)
        ctr=sf.DirectMix(
            1,
            sf.NumericVolume(
                sf.FixSize(sf.Invert(ctr)),
                fact
            )
        )
        hfq=sf.Multiply(hfq,ctr)
        return sf.Mix(hfq,sf.BesselLowPass(sig,freq,4))
    
    def filter(sig_):
        def filterInner():
            sig=sig_
            q=0.5
            sig=sf.Mix(
                sf.Pcnt10(sf.FixSize(sf.WaveShaper(-0.03*q,0.2*q,0,-1.0*q,0.2*q,2.0*q,+sig))),
                sig
            )
            sig=sf.RBJPeaking(sig,64,2,2)
            damp=sf.BesselLowPass(+sig,2000,1)
            sig=sf.FixSize(sf.Mix(damp,sig))
            low=sf.BesselLowPass(+sig,256,4)
            m1=sf.Magnitude(+low)
            low=sf.FixSize(low)
            low=sf.Saturate(low)
            m2=sf.Magnitude(+low)
            low=sf.NumericVolume(low,m1/m2)
            sig=sf.BesselHighPass(sig,256,4)
            sig=sf.Mix(low,sig)
            sig=highDamp(sig,5000,0.66)
            return sf.FixSize(sf.Clean(sig))
        return sf_do(filterInner)
    
    left  = filter(left)
    right = filter(right)
    sf.WriteFile32((left,right),"temp/proc.wav")


Monday, 19 January 2015

Ion Drive

I have started a completely new project - Space Craft Sounds!

Synthesising Bach has been a huge learning curve and very enjoyable indeed; what could possibly follow such a musical experience? Something completely non-musical. Music makes an amazing background to work, concentration and relaxation; nevertheless, other sounds can be very effective and maybe less tiring. I remember the sound of a small stream outside one of the houses I lived in as a child was very relaxing. A few months ago, my wife and I were travelling across Belgium and I noticed how the throb of the Mercedes V8 and the low rumble of road noise helped her sleep over the motorway sections of our journey.

I have taken these ideas and scaled them up quite a bit! 'Ion Drive' is a super-sized version of ambient car/road noise. With a nod to my faithful old Mercedes, here is the 'back story':

"
Ion Drive:

At full power, slight fluctuations in the magnetic ion acceleration coils cause slow, ever changing throbbing to emerge from each bank of huge 1.6 terawatt engines. The battle rages outside in the silence of space as the flagship ‘Europa’ yields burst after burst of withering anti-neutron fire on the retreating force. At these speeds, pulling against Epsilon Major’s gravity, the engines constantly boil off helium coolant which whistles on its supersonic journey to the refrigerator plant. Built on Epsilon IV with engines by Daimler Benz Space AG and neutron cannons by Advanced Particle Weapons Inc, her prey knows the fight is already lost; no Thor class heavy cruiser has ever been defeated in battle.
"

So, how was it done? How does one make a never repeating sound which whistles and rumbles? The key to the whole effect is in makeEngine. I guess I started off thinking of a regular car engine, hence the parameter 'rpm', which gives a bass frequency to work with (at the 2200 rpm used in the final patch, pitch = 2 x 2200 / 60, about 73 Hz). As I fiddled with ideas (this was an evolution, not a design) my imagination got carried away, going from a few hundred horsepower to a few thousand million!

So, the characteristic rumble sound is created not directly from a sine wave but from resonance with the upper harmonics of a distorted sine wave.

        sig=sf.SineWave(length,pitch*0.1+random()*0.05)
        mod=sf.SineWave(length,0.1+random()*0.05)
        mod=sf.DirectMix(1.0,sf.Pcnt50(mod))
        sig=sf.Multiply(
            sig,
            mod
        )
        sig=sf.Power(sig,10)
        sig=sf.RBJPeaking(sig,pitch,1,99)
        sig=sf.RBJPeaking(sig,pitch,1,99)

I take a sine wave, ring modulate it with a very low frequency sine wave and then distort the result using the Power function. This produces a modulated set of harmonics (raising a sine to the tenth power yields only even harmonics, up to ten times the original frequency). The two RBJPeaking filters then resonate at around 10 times the frequency of the initial sine wave. This produces an unstable set of ringing sounds at around the resonance frequency. An approach like this gives a much more real-world, changing and unstable sound than trying to create the rumble directly.
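
Ignoring the slow ring modulation for a moment, you can check the harmonic content of the Power step directly. This little stand-alone Python snippet takes a DFT of sin(x)**10 and shows energy only at the even harmonics up to the 10th - exactly what the peaking filters, sitting at ten times the oscillator frequency, latch onto:

import math

N = 1024
x = [math.sin(2.0 * math.pi * t / N) ** 10 for t in range(N)]
for h in range(13):
    re  = sum(x[t] * math.cos(2.0 * math.pi * h * t / N) for t in range(N))
    im  = sum(x[t] * math.sin(2.0 * math.pi * h * t / N) for t in range(N))
    mag = math.hypot(re, im) / N
    print("harmonic %2d  magnitude %.5f" % (h, mag))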

However, this still was not unstable enough. When it comes to engine noise, it seems the more complex the sound, the more believable it is; after all, engines are made of a lot of parts: pipes, panels and so on. All these individual things add together to make the real sound, and if we do not add enough complexity to the synthesis it will sound thin and unreal.

So, the next trick was to frequency modulate by a random but very slow signal. This is the thing which gives the sound its unstable, longitudinal nature.

        sig=sf.SineWave(length,pitch*0.1+random()*0.05)
        mod=sf.WhiteNoise(sLen)
        mod=sf.RBJLowPass(mod,8,1.0)
        mod=sf.RBJLowPass(mod,8,1.0)
        mod=sf.RBJLowPass(mod,8,1.0)
        mod=sf.DirectRelength(mod,0.01)
        mod=sf.Finalise(mod)
        mod=sf.Pcnt50(mod)
        mod=sf.DirectMix(1.0,mod)
        mod=sf.Pcnt49(mod)
        mod=sf.Cut(0,sf.Length(+sig),mod)
        print sf.Length(+mod),sf.Length(+sig)
        sig=sf.FrequencyModulate(sig,mod)

The rest of the patch is a combination of excitation and convolution reverb (the reverberation is performed grain by grain in the frequency domain, as the reverberate function shows). Here is the complete thing.

sf.SetSampleRate(64000)
from random import random

def excite(sig_,mix,power):
    def exciteInner():
        sig=sig_
        m=sf.Magnitude(+sig)
        sigh=sf.BesselHighPass(+sig,500,2)
        mh=sf.Magnitude(+sigh)
        sigh=sf.Power(sigh,power)
        sigh=sf.Clean(sigh)
        sigh=sf.BesselHighPass(sigh,1000,2)
        nh=sf.Magnitude(+sigh)
        sigh=sf.NumericVolume(sigh,mh/nh)
        sig=sf.Mix(sf.NumericVolume(sigh,mix),sf.NumericVolume(sig,1.0-mix))
        n=sf.Magnitude(+sig)
        return sf.Realise(sf.NumericVolume(sig,m/n))
    return sf_do(exciteInner)

def reverbInner(signal,convol,grainLength):
    def rii():
        mag=sf.Magnitude(+signal)
        if mag>0:
            signal_=sf.Concatenate(signal,sf.Silence(grainLength))
            signal_=sf.FrequencyDomain(signal_)
            signal_=sf.CrossMultiply(convol,signal_)
            signal_=sf.TimeDomain(signal_)
            newMag=sf.Magnitude(+signal_)
            if newMag>0:
                signal_=sf.NumericVolume(signal_,mag/newMag)        
                # tail out clicks due to amplitude at end of signal 
                return sf.Realise(signal_)
            else:
                return sf.Silence(sf.Length(signal_))
        else:
            -convol
            return signal
    return sf_do(rii)
            
def reverberate(signal,convol):
    def revi():
        grainLength = sf.Length(+convol)
        convol_=sf.FrequencyDomain(sf.Concatenate(convol,sf.Silence(grainLength)))
        signal_=sf.Concatenate(signal,sf.Silence(grainLength))
        out=[]
        for grain in sf.Granulate(signal_,grainLength):
            (signal_i,at)=grain
            out.append((reverbInner(signal_i,+convol_,grainLength),at))
        -convol_
        return sf.Clean(sf.FixSize(sf.MixAt(out)))
    return sf_do(revi)

def makeEngine(length_,rpm):
    def inner():
        pitch=2.0*float(rpm)/60.0
        length=float(length_)
        sig=sf.SineWave(length,pitch*0.1+random()*0.05)
        mod=sf.SineWave(length,0.1+random()*0.05)
        mod=sf.DirectMix(1.0,sf.Pcnt50(mod))
        sig=sf.Multiply(
            sig,
            mod
        )
        sig=sf.Power(sig,10)
        sig=sf.RBJPeaking(sig,pitch,1,99)
        sig=sf.RBJPeaking(sig,pitch,1,99)
        sig=sf.Finalise(sig)
        noise=sf.WhiteNoise(length)
        noise=sf.Power(noise,5)
        noise=sf.FixSize(noise)
        noise=sf.ButterworthLowPass(noise,32,2)
        noise=sf.Finalise(noise)
        sig=sf.Mix(
            sf.Pcnt98(sig),
            sf.Pcnt2(noise)
        )
        sig=sf.Finalise(sig)
        sig2=sf.RBJPeaking(+sig,pitch*32,4,99)
        sig=sf.Mix(
            sf.Pcnt10(sf.FixSize(sig2)),
            sig
        )
        sig=sf.Cut(1,sf.Length(+sig)-1,sig)       
        sLen=sf.Length(+sig)*0.011
        mod=sf.WhiteNoise(sLen)
        mod=sf.RBJLowPass(mod,8,1.0)
        mod=sf.RBJLowPass(mod,8,1.0)
        mod=sf.RBJLowPass(mod,8,1.0)
        mod=sf.DirectRelength(mod,0.01)
        mod=sf.Finalise(mod)
        mod=sf.Pcnt50(mod)
        mod=sf.DirectMix(1.0,mod)
        mod=sf.Pcnt49(mod)
        mod=sf.Cut(0,sf.Length(+sig),mod)
        print sf.Length(+mod),sf.Length(+sig)
        sig=sf.FrequencyModulate(sig,mod)
        return sf.Realise(sig)
    return sf_do(inner)

length =  60*60000
chans  =        16
rpm    =      2200
sigs  = []
for x in range(0,chans):
    l=float(x)/float(chans)
    r=1.0-l
    sig=makeEngine(length,rpm)
    sigs.append(((l,r),sig))

def mix(sigs,pos,keep=True):
    def inner():
        toMix=[]
        for lr,sig in sigs:
            v=lr[pos]
            if keep:
                +sig
            p=30.0*v
            toMix.append((sf.NumericVolume(sig,v),p))
        sig=sf.Realise(sf.Finalise(sf.MixAt(toMix)))
        sig=sf.Power(sig,1.1)
        sig=sf.Cut(1,sf.Length(+sig)-1,sig)
        return sf.Finalise(sig)
    return sf_do(inner)

left  = mix(sigs,0)
right = mix(sigs,1,False)

print "Entering reverb"
(convoll,convolr)=sf.ReadFile("temp/bh-l.wav")
(convorl,convorr)=sf.ReadFile("temp/bh-r.wav")

convoll=excite(convoll,0.75,2.0)
convolr=excite(convolr,0.75,2.0)
convorl=excite(convorl,0.75,2.0)
convorr=excite(convorr,0.75,2.0)

ll  = reverberate(+left ,convoll)
lr  = reverberate(+left ,convolr)
rl  = reverberate(+right,convorl)
rr  = reverberate(+right,convorr)

wleft =sf.FixSize(sf.Mix(ll,rl))
wright=sf.FixSize(sf.Mix(rr,lr))

wright = excite(wright,0.15,1.11)
wleft  = excite(wleft ,0.15,1.11)

right  = excite(right,0.15,1.05)
left   = excite(left ,0.15,1.05)

wleft =sf.FixSize(sf.Mix(sf.Pcnt15(left),sf.Pcnt85(wleft)))
wright =sf.FixSize(sf.Mix(sf.Pcnt15(right),sf.Pcnt85(wright)))

sf.WriteFile32((wleft,wright),"temp/mix.wav")


Tuesday, 30 December 2014

Exploring Voicing

The opening to the 'Romantic' version.

Voicing is the act of matching sounds to music. Not to be confused with arrangement.

In an arrangement we choose which instruments will play which notes; often some notes are added, accenting is changed and the tempo tuned. Arrangement is a complex and beautiful art, but it leads to different pieces of music with a common root. Voicing is the choice of sounds to match to exactly the same piece of music. Its use is most obvious with organs, where the music is divided into voices already; one can choose which manual to play each voice on and then which stops to apply to each manual.

I love Bach's Passacaglia and Fugue in C minor. I find it an amazingly powerful and ensnaring piece. If there is an upper limit to the number of times I can listen to it, I have not found that limit yet. Anyhow, my favourite style for this piece is MASSIVE. I want a huge bass, so loud it is almost a fog-horn, which should blow your feet from under you; and the finale must break your speaker cones. I love the power this piece can take; is there an upper limit?

Oh, but Bach regretted never having played an organ with good reeds; in which case, was the Passacaglia not written to be played with such power? What secrets does it hold which are hidden by a massive performance? Can it be successfully recreated in a gentle or romantic form?

Rather than rearrange the music, I have attempted to answer this question using just voicing. I have created three absolutely identical performances. Well, they are identical in everything but voicing. Each is in Werckmeister III temperament. Each contains absolutely the same notes in length and position (computers are good at that sort of thing). Each is generated in the same space (a mathematically created small cathedral and/or large church). My voicings:

Massive: 
  • Reed in the lead - a trumpet-like sound with majesty. I call it (incorrectly) a clarion.
  • Strings (string organ pipes) and diapasons for the bulk of the other manual work.
  • Trombones on the pedals. These are a huge sound, 32 foot with a hint of 64. They do have something of the fog horn to them.
  • Orchestral Oboes coming in as a separate voice later in the piece [at 6:40 minutes]. 

This is a full beans, socks blown off version.

Delicate:
  • Flute sounds almost too pure through most of the manuals.
  • Pedals on a brighter but windy flute.
  • Vox Humana (really, voice synthesis) coming in as a separate voice later in the piece [at 6:40, they replace the Orchestral Oboes]. 

This attempts to envision an ancient organ and singers quietly performing this piece in an intimate and beautiful way.

Romantic:
  • Diapasons throughout; mainly quite a soft diapason.
  • At the 6:40 moment a slightly brighter Diapason sound enters.

Really, it is that simple. This version is as though it were played in a completely traditional style using nothing but standard flue pipes. Such a performance has something of the Italian to it.

I did find secrets in there I did not expect. The 'romantic' version has a passion of its own and the 'delicate' shows an inner beauty of which I was not aware. I admit though, massive still 'does it for me'!



Saturday, 8 November 2014

Well Temperament

Most pre-20th-century music we hear is out of tune!

Temperament (in music) is the moving of pitch away from Just Intonation to allow more flexible harmonies and/or a greater range of chromatic expression on a keyboard. Nowadays we think of tuning mainly in terms of equal temperament (et). However, this tuning system is really quite badly out of tune. The thing is that it is exactly the same amount out of tune for the same interval (say a major third) in every key. This makes et very flexible. It allows the complex forms of 12 tone music which became popular amongst composers (not sure about audiences) in the 20th century.

Sadly, et is not how organs, harpsichords and even pianos were tuned in earlier centuries. Therefore, the music composed then was not written to be played in et. Indeed, much of the subtlety of that music is destroyed by et.

Earlier music had different temperaments. In this write up I am going to look at Werckmeister III, a 'Well Temperament' which has sweeter, more harmonic sounding intervals for most of the intervals popular in Bach's music. It is probably very close to the temperament he would have had his organs tuned to.

Aria from the Goldberg Variations in 
Well Temperament (Werckmeister III)

Midi represents notes as numbers starting from 0: 0 is the lowest C, 1 is C# and so on. This is usually considered to be an equal temperament form where each note is exactly a twelfth root of two higher than the previous. However, there are other ways of interpreting these numbers; it is quite straightforward to map them to an alternative tuning. Here is some code to do that:

import math

# base is the frequency of midi key 0. 8.1757989156 Hz matches standard
# A4=440 tuning (an assumption here - any reference frequency will do).
base=8.1757989156

def WerckmeisterIII(key):
    key=float(key)
    cent=2.0**(1.0/1200.0)
    #Pitch:  C   C#     D       Eb      E       F       F#      G       G#      A       A#      B       C
    cents=[  0,  90.225,192.18, 294.135,390.225,498.045,588.27, 696.09, 792.18, 888.27, 996.09, 1092.18,1200]
    octave=math.floor(key/12.0)
    pitch=base*2.0**octave
    note=int(key-octave*12)
    pitch*=cent**cents[note]
    return pitch
    
def Equal(key):
    key=float(key)

    return(sf.Semitone(0)**key) * base
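
A quick sanity check of the mapping (plain Python; the 2**(key/12) formula stands in for sf.Semitone):

for key in (57, 60, 64, 67):                  # A3, C4, E4, G4
    wIII = WerckmeisterIII(key)
    et   = base * 2.0 ** (key / 12.0)
    off  = 1200.0 * math.log(wIII / et) / math.log(2.0)   # cents difference
    print("key %2d  WIII %8.3f Hz  et %8.3f Hz  %+7.3f cents" % (key, wIII, et, off))

Middle C (key 60) lands on exactly the same frequency in both systems; the A below it comes out 11.73 cents flat of et, as the table further down shows.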

So a word on 'cents'. A cent is the 1200th root of 2. 
  1. An octave is the frequency ratio of 2:1.
  2. We can say a semitone is a 1/12 of that. Therefore each semitone is 2**1/12 i.e. the twelfth root of two.
  3. A cent is 1/100 of a semitone, so it becomes 2**1/1200 (see the check below).
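
As a quick check of that arithmetic:

cent = 2.0 ** (1.0 / 1200.0)
print(cent ** 100)     # one semitone: 1.05946... = 2**(1.0/12.0)
print(cent ** 1200)    # one octave: 2.0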

Now that we have a handy measure of very small pitch ratios, we can use it to define different tunings. In et every semitone is 100 cents, so the scale looks like this:
C     0
C#  100
D   200
Eb  300
E   400
F   500
F#  600
G   700
G#  800
A   900
A# 1000
B  1100

In Werckmeister III we have this (I have added the et values in the third column for comparison):
C     0.000     0
C#   90.225   100
D   192.180   200
Eb  294.135   300
E   390.225   400
F   498.045   500
F#  588.270   600
G   696.090   700
G#  792.180   800
A   888.270   900
A#  996.090  1000
B  1092.180  1100

As we can see, the differences are tiny. Even the larger differences, for example F# (588.27 against 600), are only around 12 cents; that represents about 1/8th of a semitone, a frequency change of under 0.7%. However, the overall impact of the temperament is enormous. Here I have rendered BWV 478 (Come Sweet Death) in Well Temperament (first) and then Equal Temperament (second). Only the temperament is different; everything else about the two is identical:

Well Temperament

Equal Temperament