FrequencyShifter

FrequencyShifter can be used to shift all frequencies of a signal by a fixed amount. The amount can be changed at audio rate and the effect is applied in real time. The frequency shifting is implemented with a technique called single-sideband modulation using a ring modulator. Note: contrary to pitch shifting, all frequencies are shifted by the same amount, destroying the harmonic relationship between them. This leads to the classic ring modulator timbre distortion. The algorithm will produce some aliasing towards the high end, especially if your source material contains a lot of high frequencies. Unfortunately the Web Audio API does not support resampling buffers in real time, so it is not possible to fix this properly. Depending on the use case it might be an option to low-pass filter your input before frequency shifting it to get rid of the aliasing. You can find a very detailed description of the algorithm here: https://larzeitlin.github.io/RMFS/


const input = new Tone.Oscillator(230, "sawtooth").start();
const shift = new Tone.FrequencyShifter(42).toDestination();
input.connect(shift);
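
If aliasing from high-frequency source material is a concern, one option mentioned above is to low-pass filter the input before shifting it. A minimal sketch of that idea (the 4000 Hz cutoff is an arbitrary choice for illustration):

const input = new Tone.Oscillator(230, "sawtooth").start();
// low-pass the source first to reduce aliasing in the shifted highs
const lowpass = new Tone.Filter(4000, "lowpass");
const shift = new Tone.FrequencyShifter(42).toDestination();
input.chain(lowpass, shift);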

Constructor

new FrequencyShifter (
frequency?:Frequency

The incoming signal is shifted by this frequency value.

) => FrequencyShifter
new FrequencyShifter (
options?:Partial<FrequencyShifterOptions >
) => FrequencyShifter

Properties

blockTime #

readonly Seconds

The number of seconds of 1 processing block (128 samples)


console.log(Tone.Destination.blockTime);

channelCount #

number

channelCount is the number of channels used when up-mixing and down-mixing connections to any inputs to the node. The default value is 2 except for specific nodes where its value is specially determined.

channelCountMode #

ChannelCountMode

channelCountMode determines how channels will be counted when up-mixing and down-mixing connections to any inputs to the node. The default value is "max". This attribute has no effect for nodes with no inputs.

  • "max" - computedNumberOfChannels is the maximum of the number of channels of all connections to an input. In this mode channelCount is ignored.
  • "clamped-max" - computedNumberOfChannels is determined as for "max" and then clamped to a maximum value of the given channelCount.
  • "explicit" - computedNumberOfChannels is the exact value as specified by the channelCount.

channelInterpretation #

ChannelInterpretation

channelInterpretation determines how individual channels will be treated when up-mixing and down-mixing connections to any inputs to the node. The default value is "speakers".

context #

BaseContext

The context belonging to the node.

debug #

boolean

Set this debug flag to log all events that happen in this class.

disposed #

readonly boolean

Indicates if the instance was disposed. 'Disposing' an instance means that all of the Web Audio nodes that were created for the instance are disconnected and freed for garbage collection.

frequency #

Signal<"frequency" >

The ring modulator's carrier frequency. This frequency determines by how many Hertz the input signal will be shifted up or down. Default is 0.
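
Since frequency is a Signal, the shift amount can be automated, for example ramped over time (the values here are just for illustration):

const shifter = new Tone.FrequencyShifter(0).toDestination();
const osc = new Tone.Oscillator(440).connect(shifter).start();
// sweep the shift from 0 Hz up to 200 Hz over 2 seconds
shifter.frequency.rampTo(200, 2);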

input #

Gain

The effect input node

numberOfInputs #

readonly number

The number of inputs feeding into the AudioNode. For source nodes, this will be 0.


const node = new Tone.Gain();
console.log(node.numberOfInputs);

numberOfOutputs #

readonly number

The number of outputs of the AudioNode.


const node = new Tone.Gain();
console.log(node.numberOfOutputs);

output #

CrossFade

The effect output

sampleTime #

readonly Seconds

The duration in seconds of one sample.


console.log(Tone.Transport.sampleTime);

static version #

string

The version number as a semver string.

wet #

Signal<"normalRange" >

The wet control is how much of the effected signal will pass through to the output. 1 = 100% effected signal, 0 = 100% dry signal.
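
For example, to blend equal parts of the dry and shifted signal (0.5 is an arbitrary value for illustration):

const shifter = new Tone.FrequencyShifter(90).toDestination();
// mix 50% dry signal with 50% frequency-shifted signal
shifter.wet.value = 0.5;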

Methods

chain #

Connect the output of this node to the rest of the nodes in series.


const player = new Tone.Player("https://tonejs.github.io/audio/drum-samples/handdrum-loop.mp3");
player.autostart = true;
const filter = new Tone.AutoFilter(4).start();
const distortion = new Tone.Distortion(0.5);
// connect the player to the filter, distortion and then to the master output
player.chain(filter, distortion, Tone.Destination);
chain (
...nodes:InputNode []
) => this

connect #

Connect the output of a ToneAudioNode to an AudioParam, AudioNode, or ToneAudioNode.

connect (
destination:InputNode ,

The output to connect to

outputNum= 0:number ,

The output to connect from

inputNum= 0:number

The input to connect to

) => this
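
A short sketch of connecting a FrequencyShifter to another node before the destination (the 2-second reverb decay is just an example value):

const shifter = new Tone.FrequencyShifter(100);
const reverb = new Tone.Reverb(2).toDestination();
// route the shifted signal through the reverb
shifter.connect(reverb);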

disconnect #

Disconnect the output.

disconnect (
destination?:InputNode ,
outputNum= 0:number ,
inputNum= 0:number
) => this
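
For example, removing a previously made connection while leaving others intact (a minimal sketch):

const shifter = new Tone.FrequencyShifter(100);
const gain = new Tone.Gain().toDestination();
shifter.connect(gain);
// later, remove only the connection to this gain node
shifter.disconnect(gain);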

dispose #

Dispose and disconnect

dispose ( ) => this
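
A short sketch of disposing an instance and checking the disposed flag:

const shifter = new Tone.FrequencyShifter(100).toDestination();
// free the underlying Web Audio nodes when the effect is no longer needed
shifter.dispose();
console.log(shifter.disposed); // true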

fan #

Connect the output of this node to the rest of the nodes in parallel.


const player = new Tone.Player("https://tonejs.github.io/audio/drum-samples/conga-rhythm.mp3");
player.autostart = true;
const pitchShift = new Tone.PitchShift(4).toDestination();
const filter = new Tone.Filter("G5").toDestination();
// connect a node to the pitch shift and filter in parallel
player.fan(pitchShift, filter);
fan (
...nodes:InputNode []
) => this

get #

Get the object's attributes.


const osc = new Tone.Oscillator();
console.log(osc.get());

static getDefaults #

Returns all of the default options belonging to the class.

getDefaults ( ) => FrequencyShifterOptions
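
For example, to inspect the default options of the class:

console.log(Tone.FrequencyShifter.getDefaults());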

immediate #

Return the current time of the Context clock without any lookAhead.


setInterval(() => {
	console.log(Tone.immediate());
}, 100);
immediate ( ) => Seconds

now #

Return the current time of the Context clock plus the lookAhead.


setInterval(() => {
	console.log(Tone.now());
}, 100);
now ( ) => Seconds

set #

Set multiple properties at once with an object.


const filter = new Tone.Filter().toDestination();
// set values using an object
filter.set({
	frequency: "C6",
	type: "highpass"
});
const player = new Tone.Player("https://tonejs.github.io/audio/berklee/Analogsynth_octaves_highmid.mp3").connect(filter);
player.autostart = true;
set ( ) => this

toDestination #

Connect the output to the context's destination node.


const osc = new Tone.Oscillator("C2").start();
osc.toDestination();
toDestination ( ) => this

toFrequency #

Convert the input to a frequency number


const gain = new Tone.Gain();
console.log(gain.toFrequency("4n"));
toFrequency (
freq:Frequency
) => Hertz

toMaster # DEPRECATED

Connect the output to the context's destination node. See toDestination

toMaster ( ) => this

toSeconds #

Convert the incoming time to seconds. This is calculated against the current Tone.Transport bpm


const gain = new Tone.Gain();
setInterval(() => console.log(gain.toSeconds("4n")), 100);
// ramp the tempo to 60 bpm over 30 seconds
Tone.getTransport().bpm.rampTo(60, 30);
toSeconds (
time?:Time
) => Seconds

toString #

Convert the class to a string


const osc = new Tone.Oscillator();
console.log(osc.toString());
toString ( ) => string

toTicks #

Convert the input time into ticks


const gain = new Tone.Gain();
console.log(gain.toTicks("4n"));
toTicks (
time?:Time | TimeClass
) => Ticks