PolySynth<Voice>

PolySynth handles voice creation and allocation for any instrument passed in as the first argument. PolySynth is not a synthesizer by itself; it merely manages voices of one of the other types of synths, allowing any of the monophonic synthesizers to be polyphonic.


import { PolySynth } from "tone";
const synth = new PolySynth().toDestination();
// set the attributes across all the voices using 'set'
synth.set({ detune: -1200 });
// play a chord
synth.triggerAttackRelease(["C4", "E4", "A4"], 1);

Constructor

new PolySynth(
voice?: VoiceConstructor<Voice>,

The constructor of the voices

options?: PartialVoiceOptions<Voice>

The options object to set the synth voice

) => PolySynth
new PolySynth(
options?: Partial<PolySynthOptions<Voice>>

The options object to set the synth voice

) => PolySynth

Properties

activeVoices #

readonly number

The number of active voices.

blockTime #

readonly Seconds

The number of seconds of one processing block (128 samples).
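As a sketch of where this number comes from (the 44.1 kHz sample rate is an assumed example; the real value comes from the audio context):

```javascript
// one Web Audio render quantum is 128 samples, so its duration
// in seconds is 128 / sampleRate
const sampleRate = 44100; // assumed example rate
const blockTime = 128 / sampleRate; // ≈ 0.0029 seconds
```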

channelCount #

number

channelCount is the number of channels used when up-mixing and down-mixing connections to any inputs to the node. The default value is 2 except for specific nodes where its value is specially determined.

channelCountMode #

ChannelCountMode

channelCountMode determines how channels will be counted when up-mixing and down-mixing connections to any inputs to the node. The default value is "max". This attribute has no effect for nodes with no inputs.

  • "max" - computedNumberOfChannels is the maximum of the number of channels of all connections to an input. In this mode channelCount is ignored.
  • "clamped-max" - computedNumberOfChannels is determined as for "max" and then clamped to a maximum value of the given channelCount.
  • "explicit" - computedNumberOfChannels is the exact value as specified by the channelCount.

channelInterpretation #

ChannelInterpretation

channelInterpretation determines how individual channels will be treated when up-mixing and down-mixing connections to any inputs to the node. The default value is "speakers".

context #

BaseContext

The context belonging to the node.

debug #

boolean

Set this debug flag to log all events that happen in this class.

disposed #

readonly boolean

Indicates if the instance was disposed. 'Disposing' an instance means that all of the Web Audio nodes that were created for the instance are disconnected and freed for garbage collection.

input #

undefined

The instrument only has an output.

maxPolyphony #

number

The polyphony limit.
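The limit's effect can be sketched with a simplified voice-pool model (illustrative only, not Tone.js's actual implementation):

```javascript
// simplified model of PolySynth voice allocation:
// reuse a free voice if one exists, create a new one while under
// the maxPolyphony limit, otherwise drop the note
function makePool(maxPolyphony) {
  const voices = [];
  return {
    acquire() {
      const free = voices.find((v) => !v.busy);
      if (free) { free.busy = true; return free; }
      if (voices.length < maxPolyphony) {
        const voice = { busy: true };
        voices.push(voice);
        return voice;
      }
      return null; // over the limit: the note is dropped
    },
    release(voice) { voice.busy = false; },
  };
}
```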

name #

string

numberOfInputs #

readonly number

The number of inputs feeding into the AudioNode. For source nodes, this will be 0.

numberOfOutputs #

readonly number

The number of outputs of the AudioNode.

sampleTime #

readonly Seconds

The duration in seconds of one sample.

static version #

string

The version number (semver).

volume #

Param<"decibels">

The volume of the output in decibels.


import { AMSynth } from "tone";
const amSynth = new AMSynth().toDestination();
amSynth.volume.value = -6;
amSynth.triggerAttackRelease("G#3", 0.2);

Methods

chain #

Connect the output of this node to the rest of the nodes in series.


import { Destination, Filter, Oscillator, Volume } from "tone";
const osc = new Oscillator().start();
const filter = new Filter();
const volume = new Volume(-8);
// connect a node to the filter, volume and then to the master output
osc.chain(filter, volume, Destination);
chain(
...nodes: InputNode[]
) => this

connect #

Connect the output of a ToneAudioNode to an AudioParam, AudioNode, or ToneAudioNode.

connect(
destination: InputNode,

The destination to connect to

outputNum = 0: number,

The output channel to connect from

inputNum = 0: number

The input channel to connect to

) => this

disconnect #

Disconnect the output.

disconnect(
destination?: InputNode,
outputNum = 0: number,
inputNum = 0: number
) => this

dispose #

Clean up.

dispose() => this

fan #

Connect the output of this node to the rest of the nodes in parallel.

fan(
...nodes: InputNode[]
) => this
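Conceptually, fan just connects the source to each argument in turn; a simplified model (the `fanOut` helper below is hypothetical, not part of Tone.js):

```javascript
// simplified model of fan: connect the source node to every
// destination in parallel and return the source for chaining
function fanOut(source, ...nodes) {
  nodes.forEach((node) => source.connect(node));
  return source;
}
```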

get #

Get the object's attributes.


import { Oscillator } from "tone";
const osc = new Oscillator();
console.log(osc.get());
// returns {"type" : "sine", "frequency" : 440, ...etc}
get() => VoiceOptions<Voice>

static getDefaults #

Returns all of the default options belonging to the class.

getDefaults() => PolySynthOptions<Synth>

immediate #

Return the current time of the Context clock without any lookAhead.

immediate() => Seconds

now #

Return the current time of the Context clock plus the lookAhead.

now() => Seconds
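The difference between now and immediate can be modeled with a toy clock (a sketch only; a lookAhead of 0.1 seconds is a typical default, but check your context):

```javascript
// toy model: now() is the clock time plus the context's lookAhead,
// immediate() is the clock time with no lookAhead applied
function makeClock(lookAhead) {
  let currentTime = 0;
  return {
    advance(seconds) { currentTime += seconds; },
    immediate() { return currentTime; },
    now() { return currentTime + lookAhead; },
  };
}
const clock = makeClock(0.1);
clock.advance(1);
// clock.now() is 0.1 seconds ahead of clock.immediate()
```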

releaseAll #

Trigger the release portion of all the currently active voices immediately. Useful for silencing the synth.

releaseAll() => this

set #

Set a member/attribute of the voices.


import { PolySynth } from "tone";
const poly = new PolySynth().toDestination();
// set all of the voices using an options object for the synth type
poly.set({
	envelope: {
		attack: 0.25
	}
});
poly.triggerAttackRelease("Bb3", 0.2);
set(
options: RecursivePartial<VoiceOptions<Voice>>
) => this

sync #

Sync the instrument to the Transport. All subsequent calls of triggerAttack and triggerRelease will be scheduled along the transport.


import { FMSynth, Transport } from "tone";
const fmSynth = new FMSynth().toDestination();
fmSynth.volume.value = -6;
fmSynth.sync();
// schedule 3 notes when the transport first starts
fmSynth.triggerAttackRelease("C4", "8n", 0);
fmSynth.triggerAttackRelease("E4", "8n", "8n");
fmSynth.triggerAttackRelease("G4", "8n", "4n");
// start the transport to hear the notes
Transport.start();
sync() => this

toDestination #

Connect the output to the context's destination node.

toDestination() => this

toFrequency #

Convert the input to a frequency number.

toFrequency(
freq: Frequency
) => Hertz
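The conversion can be sketched with 12-tone equal temperament math (a simplified model assuming A4 = 440 Hz; Tone's Frequency type also accepts numbers and notation that this sketch does not handle):

```javascript
// simplified note-name-to-frequency conversion using 12-tone
// equal temperament with A4 = 440 Hz (MIDI note 69)
const SEMITONES = { C: 0, D: 2, E: 4, F: 5, G: 7, A: 9, B: 11 };
function noteToHz(note) {
  const match = note.match(/^([A-G])(#|b)?(-?\d+)$/);
  if (!match) throw new Error(`unparseable note: ${note}`);
  const [, letter, accidental, octave] = match;
  const semis = SEMITONES[letter] +
    (accidental === "#" ? 1 : accidental === "b" ? -1 : 0);
  const midi = semis + (Number(octave) + 1) * 12; // C4 = MIDI 60
  return 440 * 2 ** ((midi - 69) / 12);
}
```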

toMaster # DEPRECATED

Connect the output to the context's destination node. See toDestination

toMaster() => this

toSeconds #

Convert the incoming time to seconds.

toSeconds(
time?: Time
) => Seconds
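For note-value strings the arithmetic works out as below (a simplified sketch that handles only plain "Nn" subdivisions at an assumed 120 BPM default; Tone's Time type accepts many more formats):

```javascript
// convert a subdivision like "4n" (quarter note) or "8n"
// (eighth note) to seconds at a given tempo
function subdivisionToSeconds(notation, bpm = 120) {
  const match = notation.match(/^(\d+)n$/);
  if (!match) throw new Error(`unsupported notation: ${notation}`);
  const subdivision = Number(match[1]);
  // a whole note lasts 4 beats; one beat is 60 / bpm seconds
  return (4 / subdivision) * (60 / bpm);
}
```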

toString #

Convert the class to a string.


import { Oscillator } from "tone";
const osc = new Oscillator();
console.log(osc.toString());
toString() => string

toTicks #

Convert the input time into ticks.

toTicks(
time?: Time | TimeClass
) => Ticks

triggerAttack #

Trigger the attack portion of the note.


import { FMSynth, now, PolySynth } from "tone";
const synth = new PolySynth(FMSynth).toDestination();
// trigger a chord immediately with a velocity of 0.2
synth.triggerAttack(["Ab3", "C4", "F5"], now(), 0.2);
triggerAttack(
notes: Frequency | Frequency[],

The notes to play. Accepts a single Frequency or an array of frequencies.

time?: Time,

The start time of the note.

velocity?: NormalRange

The velocity of the note.

) => this

triggerAttackRelease #

Trigger the attack and release after the specified duration.


import { AMSynth, PolySynth } from "tone";
const poly = new PolySynth(AMSynth).toDestination();
// can pass in an array of durations as well
poly.triggerAttackRelease(["Eb3", "G4", "Bb4", "D5"], [4, 3, 2, 1]);
triggerAttackRelease(
notes: Frequency | Frequency[],

The notes to play. Accepts a single Frequency or an array of frequencies.

duration: Time | Time[],

The duration of the note.

time?: Time,

If no time is given, defaults to now.

velocity?: NormalRange

The velocity of the attack (0-1).

) => this

triggerRelease #

Trigger the release of the note. Unlike monophonic instruments, a note (or array of notes) needs to be passed in as the first argument.


import { AMSynth, PolySynth } from "tone";
const poly = new PolySynth(AMSynth).toDestination();
poly.triggerAttack(["Ab3", "C4", "F5"]);
// trigger the release of the given notes.
poly.triggerRelease(["Ab3", "C4"], "+1");
poly.triggerRelease("F5", "+3");
triggerRelease(
notes: Frequency | Frequency[],

The notes to play. Accepts a single Frequency or an array of frequencies.

time?: Time

When the release will be triggered.

) => this

unsync #

Unsync the instrument from the Transport.

unsync() => this