Optional type: AnalyserType
The return type of the analysis, either "fft" or "waveform".
Optional size: number
The size of the FFT. This must be a power of two in the range 16 to 16384.
Optional options: Partial<AnalyserOptions>
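A minimal construction sketch using these options (the Tone.Oscillator here is only an assumed signal source):
const analyser = new Tone.Analyser("waveform", 256);
const osc = new Tone.Oscillator().connect(analyser).start();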
Readonly context
The context belonging to the node.
debug
Set this debug flag to log all events that happen in this class.
Readonly input
The input node or nodes. If the object is a source, it does not have any input and this.input is undefined.
Readonly name
The name of the class.
Readonly output
The output nodes. If the object is a sink, it does not have any output and this.output is undefined.
Static version
The version number (semver).
blockTime
The number of seconds of 1 processing block (128 samples).
console.log(Tone.Destination.blockTime);
channelCount is the number of channels used when up-mixing and down-mixing connections to any inputs to the node. The default value is 2 except for specific nodes where its value is specially determined.
channelCountMode determines how channels will be counted when up-mixing and down-mixing connections to any inputs to the node. The default value is "max". This attribute has no effect for nodes with no inputs.
channelInterpretation determines how individual channels will be treated when up-mixing and down-mixing connections to any inputs to the node. The default value is "speakers".
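As a quick sketch, these attributes can be read from any node; the values in the comments are the defaults described above:
const gain = new Tone.Gain();
console.log(gain.channelCount); // 2
console.log(gain.channelCountMode); // "max"
console.log(gain.channelInterpretation); // "speakers"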
channels
The number of channels the analyser does the analysis on. Channel separation is done using a Split node.
disposed
Indicates if the instance was disposed. 'Disposing' an instance means that all of the Web Audio nodes that were created for the instance are disconnected and freed for garbage collection.
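A minimal sketch of the dispose pattern:
const analyser = new Tone.Analyser();
analyser.dispose(); // disconnects and frees the underlying Web Audio nodes
console.log(analyser.disposed); // true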
numberOfInputs
The number of inputs feeding into the AudioNode. For source nodes, this will be 0.
const node = new Tone.Gain();
console.log(node.numberOfInputs);
numberOfOutputs
The number of outputs of the AudioNode.
const node = new Tone.Gain();
console.log(node.numberOfOutputs);
sampleTime
The duration in seconds of one sample.
size
The size of the analysis. This must be a power of two in the range 16 to 16384.
smoothing
The amount of smoothing applied between analysis frames; 0 represents no time averaging with the last analysis frame.
type
The analysis function returned by analyser.getValue(), either "fft" or "waveform".
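A short sketch of adjusting these analysis settings after construction (the specific values are only illustrative):
const analyser = new Tone.Analyser("fft", 1024);
analyser.smoothing = 0.8; // more averaging between frames
analyser.size = 512; // must be a power of two
console.log(analyser.type); // "fft"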
chain
Connect the output of this node to the rest of the nodes in series.
Rest ...nodes: InputNode[]
const player = new Tone.Player("https://tonejs.github.io/audio/drum-samples/handdrum-loop.mp3");
player.autostart = true;
const filter = new Tone.AutoFilter(4).start();
const distortion = new Tone.Distortion(0.5);
// connect the player to the filter, distortion and then to the master output
player.chain(filter, distortion, Tone.Destination);
connect
Connect the output of a ToneAudioNode to an AudioParam, AudioNode, or ToneAudioNode.
destination: InputNode
The output to connect to.
outputNum: number
The output to connect from.
inputNum: number
The input to connect to.
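A sketch using the output and input indices; Tone.Split is assumed here only as an example of a node with multiple outputs:
const split = new Tone.Split();
const gain = new Tone.Gain();
// connect the split's second output (index 1) to the gain's first input (index 0)
split.connect(gain, 1, 0);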
disconnect
Disconnect the output.
Optional destination: InputNode
fan
Connect the output of this node to the rest of the nodes in parallel.
Rest ...nodes: InputNode[]
const player = new Tone.Player("https://tonejs.github.io/audio/drum-samples/conga-rhythm.mp3");
player.autostart = true;
const pitchShift = new Tone.PitchShift(4).toDestination();
const filter = new Tone.Filter("G5").toDestination();
// connect a node to the pitch shift and filter in parallel
player.fan(pitchShift, filter);
get
Get the object's attributes.
const osc = new Tone.Oscillator();
console.log(osc.get());
getValue
Run the analysis given the current settings. If channels = 1, it will return a Float32Array. If channels > 1, it will return an array of Float32Arrays where each index in the array represents the analysis done on a channel.
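A sketch of multi-channel analysis, constructing from an options object (which the constructor above accepts):
const analyser = new Tone.Analyser({ type: "waveform", size: 256, channels: 2 });
const values = analyser.getValue(); // an array of two Float32Arrays, one per channel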
set
Set multiple properties at once with an object.
const filter = new Tone.Filter().toDestination();
// set values using an object
filter.set({
frequency: "C6",
type: "highpass"
});
const player = new Tone.Player("https://tonejs.github.io/audio/berklee/Analogsynth_octaves_highmid.mp3").connect(filter);
player.autostart = true;
toSeconds
Convert the incoming time to seconds. This is calculated against the current TransportClass bpm.
const gain = new Tone.Gain();
setInterval(() => console.log(gain.toSeconds("4n")), 100);
// ramp the tempo to 60 bpm over 30 seconds
Tone.getTransport().bpm.rampTo(60, 30);
Wrapper around the native Web Audio AnalyserNode. Extracts FFT or waveform data from the incoming signal.
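A sketch of typical usage, polling FFT frames from a connected source (the oscillator is just an assumed input):
const analyser = new Tone.Analyser("fft", 32);
const osc = new Tone.Oscillator().connect(analyser).start();
// log a new frame of FFT values every 100ms
setInterval(() => console.log(analyser.getValue()), 100);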