Friday 3 April 2015

MIDI processing for fretted strings and keyboards

Just because Sonic Field has not been advancing its synthesis systems recently does not mean it is standing still. My current efforts have (just about) all been in MIDI.

Working with the Waldorf Pulse 2 has been altogether too much fun! However, it is a mono synth and I want to make polyphonic, multi-instrument sounds. The simple solution to this is overdubbing: we play each part separately through the synth and then mix the results together. That is a very well known and understood technique. However, there is more to it than we might think.
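
The mixing itself is the easy part. Just to illustrate the idea, here is a minimal, library-agnostic sketch of summing the recorded passes with a little headroom; Sonic Field does this with its own processors, so none of the names below come from its API.

# Minimal sketch of an overdub mix: sum the sample lists from each recorded
# pass and scale so the loudest peak leaves some headroom. Purely illustrative.
def mixOverdubs(recordings, headroom=0.8):
    length = max(len(r) for r in recordings)
    mixed = [0.0] * length
    for rec in recordings:
        for i in range(len(rec)):
            mixed[i] += rec[i]
    peak = max(abs(s) for s in mixed)
    if peak == 0:
        return mixed
    return [s * headroom / peak for s in mixed]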

Consider just one instrument which we might think of as monophonic. The flute is a good example. We might believe that the flute plays one note at a time. The truth is that the sounds it makes are more intermixed: as the player changes notes on a flute, the pitch does not move smoothly from one note to the next like glide on a synthesiser. Even if we are not going for a perfect flute synthesis, playing flute music can still require overdubbing. So far I have found that playing alternate notes and overdubbing the result works very nicely.
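
The splitting can be as simple as dealing the notes out alternately into two passes; each pass is then rendered on its own so the tail of one note can ring over the start of the next. This is only a hypothetical helper, not part of Sonic Field, though the note dictionaries mirror the events used in the script later on.

# Deal a monophonic line into two overdub passes.
def splitAlternate(notes):
    passA = [n for i, n in enumerate(notes) if i % 2 == 0]
    passB = [n for i, n in enumerate(notes) if i % 2 == 1]
    return passA, passB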


This piece synthesises flute and guitar using the MIDI techniques discussed here.

Then we have those instruments which can indeed play more than one note at once. The guitar and harpsichord (and piano, etc.) are interesting examples. To keep things simple, let us imagine we have a single-manual harpsichord. If we play this sequence:

A4 G4 C5 G3 A4

Each note consists, in a simple-minded way, of an ADSR envelope where A is almost infinitely short (actually, a tiny bit of attack slop can sound a little better, as a quill pluck is not really a click).

Something quite piano-like:

|   /\ 
|   | \
|   |  ------------------
|   |                    \
|   |                     \
+---------------------------
    ADDSSSSSSSSSSSSSSSSSSRR

More of a harpsichord or classical guitar:

|   /\ 
|   | -
|   |  \
|   |   -   
|   |    --\
+------------
    ADDDDDDR
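
As a rough illustration of the difference, the two shapes might be written as (time, level) break points something like this. The numbers are made up purely for illustration; the real envelopes live in the synthesiser patch, not in the MIDI script.

# Illustrative break points only: time in milliseconds, level 0..1.
def pianoLike(noteLength):
    return [(0, 0.0), (2, 1.0),          # near-click attack
            (40, 0.7),                   # short decay onto a long sustain
            (noteLength, 0.5),           # slow fall while the key is held
            (noteLength + 150, 0.0)]     # release after the key comes up

def pluckedLike(noteLength):
    return [(0, 0.0), (2, 1.0),          # the same near-click attack
            (noteLength, 0.05),          # but the whole note is one long decay
            (noteLength + 50, 0.0)]      # with only a short release tail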

Even with the best efforts to damp a string, there is still some release. The body, sounding board and reverberation of the rest of the instrument will sound the string a little after release. So we have two different ways in which notes will overlap. These instruments (with the exception of the guitar, which I am still working on) can play any notes overlapping apart from the same note. However, we only have so many fingers or strings, so on the harpsichord or piano we can play a maximum of 10 notes at once, and on the guitar 6.

Now we have the basis for our overdub approach: we can rotate through up to 10 different recordings. If two notes do not overlap, we do not need to rotate the second note to a new recording. But what the MIDI says is overlapping is not quite true: because of the release phase, notes overlap more than the note-off events suggest, so we may need more recordings than we might think. However, we can reuse recordings more efficiently when the same note is played twice in close succession.
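
Stripped of the same-key handling, the rotation idea is just this. It is a simplified sketch, not the actual script (which follows below):

# Simplified voice rotation: give each note the next recording whose previous
# note (padded by the release time) has already finished; if every recording
# is still ringing, fall back to plain round robin.
def allocateVoices(notes, voices=10, release=120):
    lastOff = [None] * voices
    rota = 0
    plan = []
    for note in notes:                       # note: {'tick', 'tick-off', 'key'}
        chosen = None
        for i in range(voices):
            v = (rota + i) % voices
            if lastOff[v] is None or lastOff[v] + release < note['tick']:
                chosen = v
                break
        if chosen is None:
            chosen = rota % voices           # steal a voice round robin
        rota = chosen + 1
        lastOff[chosen] = note['tick-off']
        plan.append(chosen)
    return plan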

This is what this piece of Sonic Field code attempts to do. I used it with 8 recordings to perform the guitar in the Albinoni Sonata For Guitar and Flute above. As I said, I am still working on the guitar because both the sound and the overdub technique are still somewhat closer to a harpsichord than a guitar.

# Full polyphony
# Release sets the minimum time between notes on a
# particular voice, which allows the sound to go
# through its release phase; sameRelease is used
# instead when the two notes are the same key.
# Voices sets the absolute maximum number of voices
# which will be used before falling back to round
# robin. 12 is a popular 'as big as you need' value
# for polyphonic synths so I have picked that as the
# default. Release is in ticks.
#
# At the moment it is not clever enough to take account
# of tempo change events.
def playNotes(release=120,sameRelease=60,voices=12,pan=True):
    chans=[]
    for v in range(0,voices):
        chans.append([])
    
    voice=0
    vs=[]
    rota=0
    for event in midis[midiNo]:
        if Midi.isNote(event):
            notFound=True
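            # Scan the voices twice, starting from the current rotation point.
            # On the first pass a voice is reused only if it is still empty or
            # if its last note was the same key and has cleared sameRelease;
            # on the second pass any voice whose last note has cleared the
            # full release time will also do.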
            for v in range(0,2*(voices+1)):
                second=v>voices                
                v+=rota
                v=v%voices
                chan=chans[v]
                if chan:
                    top=chan[-1]['tick-off']
                    next=event['tick']
                    if chan[-1]['key']==event['key']:
                        if top+sameRelease<next:
                            notFound=False
                            rota+=1                        
                            break
                    if top+release<next and second:
                        rota+=1
                        notFound=False
                        break
                else:
                    notFound=False
                    break

            if notFound:
                # Nothing suitable was free: steal the next voice in the
                # rotation even though its last note is still ringing.
                v=rota
                v=v%voices
                rota+=1
            vs.append(v)
            chans[v].append(event)
    print "Voicing:",vs

    for chan in chans:
        print chan    

    sout=Midi.blankSequence(sequence)  
    # Create the timing information track
    tout=sout.createTrack()
    for event in midis[0]:
        if Midi.isMeta(event):
            if Midi.isTempo(event) or Midi.isTimeSignature(event):
                tout.add(event['event'])
    for chan in chans:
        if chan:

            events=[]
            for event in chan:
                ev1=event['event']
                ev2=event['event-off']
                key=event['key']
                # Spread the pan position across the key range so higher
                # notes sit further to the right.
                panev=127.0*float(key-minKey)/float(maxKey-minKey)
                panev=31+panev/2
                panev=int(panev)
                panev=Midi.makePan(1,ev1.getTick()-1,panev)
                if pan:
                    events.append(panev)
                events.append(event['event'])
                events.append(event['event-off'])

            events=sorted(events,key=lambda event: event.getTick())

            # Create note track
            tout=sout.createTrack()
            Midi.addPan(tout,1,100,64)
            Midi.addNote(tout,1,offset/2,(offset/2)+2000,50,100)

            for event in events:
                Midi.setChannel(event,1)
                tout.add(event)
    Midi.writeMidiFile("temp/temp.midi",sout)


    nChan=0
    for chan in chans:
        if chan:
            nChan+=1
            print "Performing Channel: :",chan
            events=[]
            for event in chan:
                ev1=event['event']
                ev2=event['event-off']
                key=event['key']
                # Same per-key pan spread as above; use a separate name so the
                # pan parameter is not shadowed.
                panev=127.0*float(key-minKey)/float(maxKey-minKey)
                panev=31+panev/2
                panev=int(panev)
                panev=Midi.makePan(1,ev1.getTick()-1,panev)
                events.append(panev)
                events.append(event['event'])
                events.append(event['event-off'])

            events=sorted(events,key=lambda event: event.getTick())
                    
            sout=Midi.blankSequence(sequence)  

            # Create the timing information track
            tout=sout.createTrack()
            for event in midis[0]:
                if Midi.isMeta(event):
                    if Midi.isTempo(event) or Midi.isTimeSignature(event):
                        tout.add(event['event'])

            # Create note track
            tout=sout.createTrack()
            Midi.addPan(tout,1,100,64)
            Midi.addNote(tout,1,offset/2,(offset/2)+20,50,100)

            for event in events:
                Midi.setChannel(event,1)
                tout.add(event)
            player=Midi.getPlayer(sequencer,synthesiser)
            player.play(sout)
            player.waitFor()
    

playNotes(release=60,sameRelease=0,voices=8,pan=False)