Why Smart Delay?
The truth is: there is literally no way for the sound of a particular note's sample to be idiomatically correct unless you know what happens after that note finishes. Does it lead to a rest (silence)? Is it in the middle of a long string of fast notes? Does it leap up or down? As you can see from the image here, taken from a random stream of connected legato notes, any given pitch contains transitions both into and out of the note.
Example of a full-length sample with transitions at both the beginning and end of the note. Notice all the interesting, almost random dynamic changes throughout the body of the note.
We think that sample libraries can sound better than they traditionally have (and presently do from other developers). We also think that composers want to write music and hear it played as if live musicians had performed it. This results in both greater creative fulfillment and economic advantage for music makers of all types and levels. We are not on a mission to replace live players; rather, we want to enable more composers to create the music they want and need to create, without the barrier to entry that would otherwise be imposed. Any sample library company could make this claim, though. So what makes us any different?
​
The reason the level of realism we aim for is so difficult to achieve, and what most other developers miss, can be boiled down to the countless tiny variations and changes that happen as an instrumentalist moves from one note to another. These changes differ depending on things like:
- the tempo
- the tessitura
- the interval played before
- the interval played after
- the dynamic
- where the note falls in the phrase (downbeat or upbeat?)
- the length of the note
- and much more...
​
Why Modeling Isn't The Answer
We'll be the first to admit that the burgeoning world of instrument modeling is very exciting and a very cool area of development, and we're looking forward to seeing where it goes. But why don't we go down that road?
​
The other truth is: if a musician is reading music, their eyes are always at least a note or two ahead of where they're currently playing. Because what they're about to play affects the way they're playing the notes right now. And even if they're not reading music, they are thinking or subconsciously aware of what's coming next. They don't conceive of the note at the exact instant they play it.
​
So, while modeling is very interesting, yields some cool sounds, and is a lot of fun for musicians who enjoy playing those instruments, it will never capture all the idiosyncratic things that actual players do in the heat of the moment, or with as much variety.
​
The only way to get all those intuitive details is to record a real player performing in real time, in the heat of the moment, extract the samples from that performance, and then re-assemble them later when the end user plays a new phrase.
How Do We Record For Smart Delay?
1. We record many different lengths of notes. Players actually articulate and sustain notes very differently depending on how long they play them. Moreover, standard sustain samples, where the player performs an isolated long note, are only realistic and useful once the notes reach a certain length. If these standard long sustains are used for shorter legato notes, the result is a flat, lifeless, and altogether fake-sounding representation.
2. We record legato in and out of every note. So much of what gives each note of a phrase its unique sound, and why it's so hard to replicate, is the preparation for the next note. Players make many small changes as the next note approaches. Not only that, but it's mostly involuntary and happens without them thinking (especially as the rhythm gets faster), so asking a player to "play an 8th note" in a sampling session won't come close to capturing the breadth of possibilities of what that same player might play in any given situation.
- To illustrate more clearly: for every pitch there is a sample approached from a whole step and leading to a whole step, one from a whole step to a 3rd, and one from a whole step to a 6th. Conversely, there's a sample from a 3rd to a whole step, from a 6th to a whole step, and so on.
3. We record different samples for where notes fall in the phrase. To use the 8th-note example again: if you had a musician play three 8th notes in a row, even if they're identical in length, pitch, and dynamic, they will be performed totally differently. None of the three samples would suffice or sound realistic as a replacement for either of the other two.
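The ideas above can be sketched in code as a context-keyed sample lookup. This is a minimal, purely illustrative sketch: the class name, fields, sample file names, and fallback rule are all assumptions for explanation, not the actual engine.

```python
# Hypothetical sketch of context-aware sample selection.
# All names and sample files here are illustrative, not real library data.
from dataclasses import dataclass

@dataclass(frozen=True)
class NoteContext:
    interval_in: int     # semitones from the previous note
    interval_out: int    # semitones to the next note
    length_beats: float  # notated length of the note
    on_downbeat: bool    # where the note falls in the phrase

# A tiny sample pool keyed by context; a real library would hold
# many of these per pitch, covering every combination recorded.
SAMPLES = {
    NoteContext(2, 2, 0.5, True):  "C4_wholestep_in_wholestep_out.wav",
    NoteContext(2, 4, 0.5, False): "C4_wholestep_in_third_out.wav",
    NoteContext(4, 2, 1.0, True):  "C4_third_in_wholestep_out.wav",
}

def pick_sample(ctx: NoteContext) -> str:
    # Prefer an exact context match; otherwise fall back to the
    # recorded context with the closest in/out intervals.
    if ctx in SAMPLES:
        return SAMPLES[ctx]
    return min(
        SAMPLES.items(),
        key=lambda kv: abs(kv[0].interval_in - ctx.interval_in)
                     + abs(kv[0].interval_out - ctx.interval_out),
    )[1]
```

The point of the sketch is the shape of the problem: the "right" sample depends on the whole surrounding context, which is exactly why the engine must see the next notes before it can choose.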
Smart Delay v1.0
This is the original version of Smart Delay, created for the jazz-centric line of brass and woodwinds we refer to as "The New Standard."
The instrument loads in "Real-Time" mode and initially functions like a standard sample library. When the Smart Delay switch is engaged, the sound is delayed by 4 beats (according to the DAW grid and tempo) so that the script can analyze the musical context. To compensate for this delay, the user must either move the MIDI region four beats earlier (to the left) or apply the "Non-Smart Delay Track Sync" plugin (VST3/AU) to any non-Smart Delay tracks.

Smart Delay v2.0
This is the second generation of Smart Delay. It initially loads in "Real-Time" mode and functions like a traditional sample library. When Smart Delay is engaged, the user inserts the included "Smart Delay Offset" plugin, and the DAW then makes the necessary playback-timing adjustment behind the scenes. The MIDI region may remain in its original position, and the track both sounds and appears in sync with other tracks. When project playback is stopped, the instrument automatically reverts to real-time mode; Smart Delay is then re-applied automatically on playback. Bypassing the plugin may be necessary when recording or punching in, but not during playback or editing.
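Because the delay is specified in beats, its length in real time depends on the project tempo. A minimal sketch of that conversion (the function name is illustrative; the 4-beat figure comes from the v1 description above):

```python
def smart_delay_offset_seconds(bpm: float, beats: float = 4.0) -> float:
    """Convert a beat-based look-ahead delay into seconds at a given tempo.

    One beat lasts 60/bpm seconds, so a 4-beat delay at 120 BPM
    works out to 2 seconds of look-ahead.
    """
    return beats * 60.0 / bpm

smart_delay_offset_seconds(120)  # 2.0 seconds at 120 BPM
smart_delay_offset_seconds(60)   # 4.0 seconds at 60 BPM
```

This is also why the compensation (shifting the MIDI region, or the offset plugin) must track the DAW tempo rather than use a fixed time value.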