StereoFeedbackEffect<Options>

Just like a stereo feedback effect, but the feedback is routed from left to right and right to left instead of on the same channel.

+--------------------------------+ feedbackL <-----------------------------------+
|                                                                                |
+-->                          +----->            +---->                      +---+
     feedbackMerge +--> split        (EFFECT)          merge +--> feedbackSplit
+-->                          +----->            +---->                      +---+
|                                                                                |
+--------------------------------+ feedbackR <-----------------------------------+
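
For illustration, a minimal sketch of this cross-channel feedback in use, assuming Tone.PingPongDelay as a concrete effect built on this routing:

const delay = new Tone.PingPongDelay("8n").toDestination();
delay.feedback.value = 0.4; // amount fed from left to right and right to left
delay.wet.value = 0.5;      // blend of effected and dry signal
const synth = new Tone.Synth().connect(delay);
synth.triggerAttackRelease("C4", "8n");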

Constructor

new StereoFeedbackEffect ( ) => StereoFeedbackEffect

Properties

blockTime #

readonly Seconds

The number of seconds of 1 processing block (128 samples)


console.log(Tone.Destination.blockTime);

channelCount #

number

channelCount is the number of channels used when up-mixing and down-mixing connections to any inputs to the node. The default value is 2 except for specific nodes where its value is specially determined.
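
For example (a sketch using a plain Tone.Gain node):

const gain = new Tone.Gain();
console.log(gain.channelCount); // 2 by default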

channelCountMode #

ChannelCountMode

channelCountMode determines how channels will be counted when up-mixing and down-mixing connections to any inputs to the node. The default value is "max". This attribute has no effect for nodes with no inputs.

  • "max" - computedNumberOfChannels is the maximum of the number of channels of all connections to an input. In this mode channelCount is ignored.
  • "clamped-max" - computedNumberOfChannels is determined as for "max" and then clamped to a maximum value of the given channelCount.
  • "explicit" - computedNumberOfChannels is the exact value as specified by the channelCount.

channelInterpretation #

ChannelInterpretation

channelInterpretation determines how individual channels will be treated when up-mixing and down-mixing connections to any inputs to the node. The default value is "speakers".
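
A sketch of switching a node to discrete channel handling (assuming a plain Tone.Gain node):

const gain = new Tone.Gain();
gain.channelInterpretation = "discrete"; // match channels by index instead of by speaker layout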

context #

BaseContext

The context belonging to the node.

debug #

boolean

Set this debug flag to log all events that happen in this class.
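
For example (works on any Tone.js instance, here an Oscillator):

const osc = new Tone.Oscillator();
osc.debug = true; // events from this instance are now logged to the console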

disposed #

readonly boolean

Indicates if the instance was disposed. 'Disposing' an instance means that all of the Web Audio nodes that were created for the instance are disconnected and freed for garbage collection.

feedback #

Signal<"normalRange" >

The amount of feedback from the output back into the input of the effect (routed across left and right channels).
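
Since feedback is a Signal, it can be set or ramped; a sketch assuming Tone.PingPongDelay as a concrete subclass:

const effect = new Tone.PingPongDelay().toDestination();
effect.feedback.value = 0.3;      // set immediately
effect.feedback.rampTo(0.6, 0.5); // or ramp over half a second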

name #

string

numberOfInputs #

readonly number

The number of inputs feeding into the AudioNode. For source nodes, this will be 0.


const node = new Tone.Gain();
console.log(node.numberOfInputs);

numberOfOutputs #

readonly number

The number of outputs of the AudioNode.


const node = new Tone.Gain();
console.log(node.numberOfOutputs);

sampleTime #

readonly Seconds

The duration in seconds of one sample.


console.log(Tone.Transport.sampleTime);

static version #

string

The version number (semver).

wet #

Signal<"normalRange" >

The wet control, i.e. how much of the effected signal will pass through to the output.
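
For example (a sketch, again assuming Tone.PingPongDelay):

const effect = new Tone.PingPongDelay().toDestination();
effect.wet.value = 0.25; // 0 = fully dry, 1 = fully effected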

Methods

chain #

Connect the output of this node to the rest of the nodes in series.


const player = new Tone.Player("https://tonejs.github.io/examples/audio/FWDL.mp3");
player.autostart = true;
const filter = new Tone.AutoFilter(4).start();
const distortion = new Tone.Distortion(0.5);
// connect the player to the filter, distortion and then to the master output
player.chain(filter, distortion, Tone.Destination);
chain (
...nodes:InputNode []
) => this

connect #

Connect the output of a ToneAudioNode to an AudioParam, AudioNode, or ToneAudioNode.

connect (
destination:InputNode ,

The output to connect to

outputNum= 0:number ,

The output to connect from

inputNum= 0:number

The input to connect to

) => this
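
For example (a sketch; both a node and a Signal/AudioParam can be the destination):

const filter = new Tone.Filter(800).toDestination();
const osc = new Tone.Oscillator().start();
osc.connect(filter);
// a Signal can also be the destination, e.g. modulating the filter's frequency
const lfo = new Tone.LFO("4n", 200, 2000).start();
lfo.connect(filter.frequency);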

disconnect #

Disconnect the output.

disconnect (
destination?:InputNode ,
outputNum= 0:number ,
inputNum= 0:number
) => this
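
For example (a sketch):

const osc = new Tone.Oscillator().toDestination().start();
// remove only the connection to the destination
osc.disconnect(Tone.Destination);
// or, with no arguments, disconnect all outputs
osc.disconnect();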

dispose #

Dispose and disconnect

dispose ( ) => this
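
For example (a sketch, assuming Tone.PingPongDelay as a concrete subclass):

const effect = new Tone.PingPongDelay();
effect.dispose();
console.log(effect.disposed); // true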

fan #

Connect the output of this node to the rest of the nodes in parallel.


const player = new Tone.Player("https://tonejs.github.io/examples/audio/FWDL.mp3");
player.autostart = true;
const pitchShift = new Tone.PitchShift(4).toDestination();
const filter = new Tone.Filter("G5").toDestination();
// connect a node to the pitch shift and filter in parallel
player.fan(pitchShift, filter);
fan (
...nodes:InputNode []
) => this

get #

Get the object's attributes.


const osc = new Tone.Oscillator();
console.log(osc.get());
get ( ) => Options

static getDefaults #

Returns all of the default options belonging to the class.

getDefaults ( ) => StereoFeedbackEffectOptions
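
For example (a sketch, assuming Tone.PingPongDelay as a concrete subclass):

console.log(Tone.PingPongDelay.getDefaults());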

immediate #

Return the current time of the Context clock without any lookAhead.


setInterval(() => {
	console.log(Tone.immediate());
}, 100);
immediate ( ) => Seconds

now #

Return the current time of the Context clock plus the lookAhead.


setInterval(() => {
	console.log(Tone.now());
}, 100);
now ( ) => Seconds

set #

Set multiple properties at once with an object.


const filter = new Tone.Filter();
// set values using an object
filter.set({
	frequency: 300,
	type: "highpass"
});
set (
props:RecursivePartial<Options >
) => this

toDestination #

Connect the output to the context's destination node.


const osc = new Tone.Oscillator("C2").start();
osc.toDestination();
toDestination ( ) => this

toFrequency #

Convert the input to a frequency number


const gain = new Tone.Gain();
console.log(gain.toFrequency("4n"));
toFrequency (
freq:Frequency
) => Hertz

toMaster # DEPRECATED

Connect the output to the context's destination node. See toDestination

toMaster ( ) => this

toSeconds #

Convert the incoming time to seconds


const gain = new Tone.Gain();
console.log(gain.toSeconds("4n"));
toSeconds (
time?:Time
) => Seconds

toString #

Convert the class to a string


const osc = new Tone.Oscillator();
console.log(osc.toString());
toString ( ) => string

toTicks #

Convert the input time into ticks


const gain = new Tone.Gain();
console.log(gain.toTicks("4n"));
toTicks (
time?:Time | TimeClass
) => Ticks