Get the current frequency data of the connected audio source using a fast Fourier transform.
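For example, a minimal sketch (assuming an oscillator as the connected source) of reading the analysis data:
const fft = new Tone.FFT(1024);
const osc = new Tone.Oscillator("C4").connect(fft).start();
// getValue() returns a Float32Array with one value per frequency bin
console.log(fft.getValue());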
The duration in seconds of one processing block (128 samples).
console.log(Tone.Destination.blockTime);
channelCount is the number of channels used when up-mixing and down-mixing connections to any inputs to the node. The default value is 2 except for specific nodes where its value is specially determined.
channelCountMode determines how channels will be counted when up-mixing and down-mixing connections to any inputs to the node. The default value is "max". This attribute has no effect for nodes with no inputs.
channelInterpretation determines how individual channels will be treated when up-mixing and down-mixing connections to any inputs to the node. The default value is "speakers".
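As a quick illustration, these attributes can be read directly off any node (the values in the comments are the usual defaults):
const gain = new Tone.Gain();
console.log(gain.channelCount); // typically 2
console.log(gain.channelCountMode); // typically "max"
console.log(gain.channelInterpretation); // typically "speakers"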
The context belonging to the node.
Set this debug flag to log all events that happen in this class.
Indicates if the instance was disposed. 'Disposing' an instance means that all of the Web Audio nodes that were created for the instance are disconnected and freed for garbage collection.
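For instance, a minimal sketch of the flag flipping after dispose() is called:
const gain = new Tone.Gain();
gain.dispose();
console.log(gain.disposed); // true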
The signal to be analysed
If the output should be in decibels or in the normal range between 0-1. If normalRange is false, the output range will be the measured decibel value; otherwise the decibel value will be converted to the range of 0-1.
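A brief sketch of switching between the two output ranges on an FFT instance:
const fft = new Tone.FFT();
fft.normalRange = false; // getValue() returns decibel values
fft.normalRange = true; // getValue() returns values normalized to 0-1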
The number of inputs feeding into the AudioNode. For source nodes, this will be 0.
const node = new Tone.Gain();
console.log(node.numberOfInputs);
The number of outputs of the AudioNode.
const node = new Tone.Gain();
console.log(node.numberOfOutputs);
The output is just a pass-through of the input.
The duration in seconds of one sample.
console.log(Tone.Transport.sampleTime);
The size of analysis. This must be a power of two in the range 16 to 16384. Determines the size of the array returned by getValue (i.e. the number of frequency bins). Large FFT sizes may be costly to compute.
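For example, a sketch of trading frequency resolution for lower cost by shrinking the size:
const fft = new Tone.FFT();
fft.size = 64; // getValue() will now return 64 frequency bins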
A value from 0-1 where 0 represents no time averaging with the last analysis frame.
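A small sketch of adjusting this value on an FFT (exposed as the smoothing attribute):
const fft = new Tone.FFT();
fft.smoothing = 0.9; // heavier averaging between successive frames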
The version number as a semver string.
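For example, the version string can be logged directly:
console.log(Tone.version);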
Connect the output of this node to the rest of the nodes in series.
const player = new Tone.Player("https://tonejs.github.io/examples/audio/FWDL.mp3");
player.autostart = true;
const filter = new Tone.AutoFilter(4).start();
const distortion = new Tone.Distortion(0.5);
// connect the player to the filter, distortion and then to the master output
player.chain(filter, distortion, Tone.Destination);
Connect the output of a ToneAudioNode to an AudioParam, AudioNode, or ToneAudioNode.
The output to connect from
The input to connect to
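As a sketch of the possible destination types, an LFO's output can be connected to an oscillator's frequency parameter (an AudioParam-like signal) just as it can to another node:
const osc = new Tone.Oscillator().toDestination().start();
const lfo = new Tone.LFO("4n", 400, 800).start();
// modulate the oscillator's frequency with the LFO
lfo.connect(osc.frequency);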
Disconnect the output.
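For example, continuing the sketch above, the connection can be removed later:
// disconnect from one destination; calling disconnect() with no arguments removes all connections
lfo.disconnect(osc.frequency);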
Dispose and disconnect.
Connect the output of this node to the rest of the nodes in parallel.
const player = new Tone.Player("https://tonejs.github.io/examples/audio/FWDL.mp3");
player.autostart = true;
const pitchShift = new Tone.PitchShift(4).toDestination();
const filter = new Tone.Filter("G5").toDestination();
// connect a node to the pitch shift and filter in parallel
player.fan(pitchShift, filter);
Get the object's attributes.
const osc = new Tone.Oscillator();
console.log(osc.get());
Returns all of the default options belonging to the class.
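For instance, a sketch of inspecting the static defaults of a class:
console.log(Tone.FFT.getDefaults());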
Returns the frequency value in hertz of each of the indices of the FFT's getValue response.
const fft = new Tone.FFT(32);
console.log([0, 1, 2, 3, 4].map(index => fft.getFrequencyOfIndex(index)));
Gets the current frequency data from the connected audio source. Returns the frequency data of length size as a Float32Array of decibel values.
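A small sketch of polling the data periodically, assuming a source is already connected to the FFT:
const fft = new Tone.FFT(256);
setInterval(() => {
    // a Float32Array of 256 values (decibels, or 0-1 when normalRange is true)
    console.log(fft.getValue());
}, 100);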
Return the current time of the Context clock without any lookAhead.
setInterval(() => {
    console.log(Tone.immediate());
}, 100);
Return the current time of the Context clock plus the lookAhead.
setInterval(() => {
    console.log(Tone.now());
}, 100);
Set multiple properties at once with an object.
const filter = new Tone.Filter();
// set values using an object
filter.set({
    frequency: 300,
    type: "highpass"
});
Connect the output to the context's destination node.
const osc = new Tone.Oscillator("C2").start();
osc.toDestination();
Convert the input to a frequency number
const gain = new Tone.Gain();
console.log(gain.toFrequency("4n"));
Connect the output to the context's destination node. See toDestination.
Convert the incoming time to seconds
const gain = new Tone.Gain();
console.log(gain.toSeconds("4n"));
Convert the class to a string
const osc = new Tone.Oscillator();
console.log(osc.toString());
Convert the input time into ticks
const gain = new Tone.Gain();
console.log(gain.toTicks("4n"));