Optional options: Partial<PolySynthOptions<Voice>>
Readonly context
The context belonging to the node.
Set this debug flag to log all events that happen in this class.
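For example, a minimal sketch of enabling the flag on an instance (assuming the flag is the debug property, as in current Tone.js builds):
const poly = new Tone.PolySynth(Tone.Synth).toDestination();
poly.debug = true; // events on this instance are now logged to the console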
The instrument only has an output.
The polyphony limit.
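A short sketch of adjusting the limit (assuming the property is named maxPolyphony, as in recent Tone.js versions):
const poly = new Tone.PolySynth(Tone.Synth).toDestination();
poly.maxPolyphony = 8; // no more than 8 voices will sound at once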
Readonly name
volume
The volume of the output in decibels.
const amSynth = new Tone.AMSynth().toDestination();
amSynth.volume.value = -6;
amSynth.triggerAttackRelease("G#3", 0.2);
Static version
The version number (semver).
The number of active voices.
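For instance (a sketch assuming the accessor is named activeVoices):
const poly = new Tone.PolySynth(Tone.Synth).toDestination();
poly.triggerAttack(["C4", "E4", "G4"]);
console.log(poly.activeVoices); // 3 while the chord is held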
The number of seconds of one processing block (128 samples).
console.log(Tone.Destination.blockTime);
channelCount is the number of channels used when up-mixing and down-mixing connections to any inputs to the node. The default value is 2 except for specific nodes where its value is specially determined.
channelCountMode determines how channels will be counted when up-mixing and down-mixing connections to any inputs to the node. The default value is "max". This attribute has no effect for nodes with no inputs.
channelInterpretation determines how individual channels will be treated when up-mixing and down-mixing connections to any inputs to the node. The default value is "speakers".
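A quick sketch of reading these attributes off a node (the default values are shown in the comments):
const node = new Tone.Gain();
console.log(node.channelCount); // 2
console.log(node.channelCountMode); // "max"
console.log(node.channelInterpretation); // "speakers"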
Indicates if the instance was disposed. 'Disposing' an instance means that all of the Web Audio nodes that were created for the instance are disconnected and freed for garbage collection.
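For example (a minimal sketch using dispose()):
const node = new Tone.Gain();
console.log(node.disposed); // false
node.dispose();
console.log(node.disposed); // true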
The number of inputs feeding into the AudioNode. For source nodes, this will be 0.
const node = new Tone.Gain();
console.log(node.numberOfInputs);
The number of outputs of the AudioNode.
const node = new Tone.Gain();
console.log(node.numberOfOutputs);
The duration in seconds of one sample.
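For example (a sketch; the exact value depends on the context's sample rate):
// roughly 0.0000227 at a 44.1 kHz sample rate
console.log(Tone.Destination.sampleTime);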
Connect the output of this node to the rest of the nodes in series.
Rest ...nodes: InputNode[]
const player = new Tone.Player("https://tonejs.github.io/audio/drum-samples/handdrum-loop.mp3");
player.autostart = true;
const filter = new Tone.AutoFilter(4).start();
const distortion = new Tone.Distortion(0.5);
// connect the player to the filter, distortion and then to the master output
player.chain(filter, distortion, Tone.Destination);
Connect the output of a ToneAudioNode to an AudioParam, AudioNode, or ToneAudioNode.
The node, signal, or parameter to connect to.
The index of the output to connect from.
The index of the input to connect to.
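A short sketch of both cases, connecting to another ToneAudioNode and to a signal/AudioParam (the node and parameter choices here are just for illustration):
const poly = new Tone.PolySynth(Tone.Synth);
const filter = new Tone.Filter(800, "lowpass").toDestination();
// ToneAudioNode -> ToneAudioNode
poly.connect(filter);
// ToneAudioNode -> AudioParam: sweep the filter cutoff with an LFO
const lfo = new Tone.LFO("4n", 400, 1200).start();
lfo.connect(filter.frequency);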
Disconnect the output.
Optional destination: InputNode
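For example (a brief sketch):
const poly = new Tone.PolySynth(Tone.Synth).toDestination();
// remove just the connection to the destination
poly.disconnect(Tone.Destination);
// or drop all outgoing connections at once
poly.disconnect();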
Connect the output of this node to the rest of the nodes in parallel.
Rest ...nodes: InputNode[]
const player = new Tone.Player("https://tonejs.github.io/audio/drum-samples/conga-rhythm.mp3");
player.autostart = true;
const pitchShift = new Tone.PitchShift(4).toDestination();
const filter = new Tone.Filter("G5").toDestination();
// connect a node to the pitch shift and filter in parallel
player.fan(pitchShift, filter);
Set a member/attribute of the voices.
const poly = new Tone.PolySynth().toDestination();
// set all of the voices using an options object for the synth type
poly.set({
    envelope: {
        attack: 0.25
    }
});
poly.triggerAttackRelease("Bb3", 0.2);
Convert the incoming time to seconds. This is calculated against the current Transport bpm.
const gain = new Tone.Gain();
setInterval(() => console.log(gain.toSeconds("4n")), 100);
// ramp the tempo to 60 bpm over 30 seconds
Tone.getTransport().bpm.rampTo(60, 30);
Trigger the attack and release after the specified duration.
const poly = new Tone.PolySynth(Tone.AMSynth).toDestination();
// can pass in an array of durations as well
poly.triggerAttackRelease(["Eb3", "G4", "Bb4", "D5"], [4, 3, 2, 1]);
Trigger the release of the note. Unlike monophonic instruments, a note (or array of notes) needs to be passed in as the first argument.
const poly = new Tone.PolySynth(Tone.AMSynth).toDestination();
poly.triggerAttack(["Ab3", "C4", "F5"]);
// trigger the release of the given notes.
poly.triggerRelease(["Ab3", "C4"], "+1");
poly.triggerRelease("F5", "+3");
PolySynth handles voice creation and allocation for any instruments passed in as the second parameter. PolySynth is not a synthesizer by itself, it merely manages voices of one of the other types of synths, allowing any of the monophonic synthesizers to be polyphonic.
Example
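For instance, a minimal sketch of wrapping Tone.Synth in a PolySynth and playing a chord:
const synth = new Tone.PolySynth(Tone.Synth).toDestination();
// set an option across every voice
synth.set({ detune: -1200 });
// play a three-note chord for half a second
synth.triggerAttackRelease(["C4", "E4", "A4"], 0.5);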