Three streams of shofar sounds are recorded in real time.
Each stream is digitally processed with high-pass, low-pass, and comb filters, a series of harmonizer algorithms, and a cluster of time-stretch and multi-tap delays.
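The following is a minimal offline sketch of one such processing stream, written here in Python with NumPy/SciPy rather than in Max/MSP, where the piece actually runs. The cutoffs, delay times, and gains are illustrative assumptions, and the harmonizer and time-stretch stages are only indicated as placeholders.

```python
import numpy as np
from scipy.signal import butter, lfilter

SR = 44100  # sample rate (assumed)

def hi_pass(x, cutoff=300.0):
    b, a = butter(2, cutoff / (SR / 2), btype="high")
    return lfilter(b, a, x)

def low_pass(x, cutoff=4000.0):
    b, a = butter(2, cutoff / (SR / 2), btype="low")
    return lfilter(b, a, x)

def comb(x, delay_s=0.015, feedback=0.6):
    # Feedback comb filter: y[n] = x[n] + feedback * y[n - d]
    d = int(delay_s * SR)
    y = np.copy(x)
    for n in range(d, len(x)):
        y[n] += feedback * y[n - d]
    return y

def multi_tap_delay(x, taps=((0.25, 0.5), (0.5, 0.35), (0.9, 0.2))):
    # Sum of delayed, attenuated copies; taps are (delay_seconds, gain) pairs.
    y = np.copy(x)
    for delay_s, gain in taps:
        d = int(delay_s * SR)
        if d >= len(x):
            continue
        y[d:] += gain * x[:len(x) - d]
    return y

def process_stream(x):
    # Harmonizer and time-stretch stages are omitted here for brevity.
    return multi_tap_delay(comb(low_pass(hi_pass(x))))
```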
The discrete parameters governing the algorithms are not under the performer's direct control.
Changes in each parameter follow one of ten pre-recorded graphs of values unfolding on a timeline.
The assignment of a particular graph to a specific parameter is made by a random decision, which periodically changes, as sketched below.
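A sketch of this control scheme follows; the graph breakpoints, parameter names, and reassignment interval are illustrative assumptions, not values taken from the piece.

```python
import random
import numpy as np

# Ten graphs: each a list of (time_in_seconds, value) breakpoints.
GRAPHS = [
    [(0.0, random.random()), (30.0, random.random()), (60.0, random.random())]
    for _ in range(10)
]

# Hypothetical processor parameter names, for illustration only.
PARAMETERS = ["comb_feedback", "lowpass_cutoff", "harmonizer_interval",
              "stretch_ratio", "delay_mix"]

def graph_value(graph, t):
    """Linearly interpolate a graph at timeline position t (seconds)."""
    times, values = zip(*graph)
    return float(np.interp(t, times, values))

class ParameterTimeline:
    def __init__(self, reassign_every=20.0):
        self.reassign_every = reassign_every
        self.assignment = {}
        self.last_reassign = -float("inf")

    def update(self, t):
        # Periodically re-randomize which graph drives which parameter.
        if t - self.last_reassign >= self.reassign_every:
            self.assignment = {p: random.choice(GRAPHS) for p in PARAMETERS}
            self.last_reassign = t
        # Return the current value of every parameter at timeline position t.
        return {p: graph_value(g, t) for p, g in self.assignment.items()}
```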
The rate of change of parameter values is tied to an analysis of the noise content of the performed shofar sounds. A noise quotient is computed with an analysis algorithm by Tristan Jehan, implemented as the Max/MSP object analyzer~. More noise, i.e. breathiness, slows the pace of the timeline.
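One way to picture this coupling, assuming the noisiness value is normalized to 0..1 and mapped inverse-linearly onto the timeline's playback rate (the actual curve used in the piece is not specified):

```python
def timeline_rate(noise_quotient, min_rate=0.25, max_rate=1.0):
    """More noise (breathiness) -> slower timeline. Rates are assumed values."""
    nq = min(max(noise_quotient, 0.0), 1.0)
    return max_rate - nq * (max_rate - min_rate)

def advance_timeline(t, dt, noise_quotient):
    """Advance timeline position t by real elapsed time dt, scaled by noise."""
    return t + dt * timeline_rate(noise_quotient)
```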
A sound "intensity" rating (average pitch plus one half of a loudness factor) is randomly assigned to a discrete processor parameter or a cluster of parameters: loud, high-pitched playing reads as "intense".
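A sketch of this intensity routing; the parameter clusters and value ranges below are assumptions for illustration.

```python
import random

# Hypothetical clusters of processor parameters.
CLUSTERS = [
    ["comb_feedback"],                      # a single parameter
    ["lowpass_cutoff", "delay_mix"],        # a cluster
    ["harmonizer_interval", "stretch_ratio"],
]

def intensity(avg_pitch, loudness):
    """Average pitch plus half of a loudness factor: loud and high-pitched = intense."""
    return avg_pitch + 0.5 * loudness

def route_intensity(avg_pitch, loudness, params):
    """Write the intensity value into one randomly chosen parameter or cluster."""
    value = intensity(avg_pitch, loudness)
    for name in random.choice(CLUSTERS):
        params[name] = value
    return params
```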
Thus the performer has only limited influence over which processor responds to a given real-time performance gesture.