Class AudioToGain

AudioToGain converts an input in AudioRange [-1,1] to NormalRange [0,1].
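
A minimal usage sketch (the source, modulation rate, and routing here are illustrative, not part of this class's API): an oscillator's -1..1 output is rescaled to 0..1 and used to modulate a gain amount.

    const gain = new Tone.Gain(0).toDestination();
    // a noise source whose loudness will be modulated
    new Tone.Noise().start().connect(gain);
    // a 2 Hz oscillator outputs an AudioRange (-1 to 1) signal
    const lfo = new Tone.Oscillator(2).start();
    const audioToGain = new Tone.AudioToGain();
    // AudioToGain rescales -1..1 to 0..1, which then drives the gain parameter
    lfo.connect(audioToGain);
    audioToGain.connect(gain.gain);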

Hierarchy

Constructors

Properties

context: BaseContext

The context belonging to the node.

debug: boolean = false

Set this debug flag to log all events that happen in this class.
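
For example (a minimal sketch):

    const audioToGain = new Tone.AudioToGain();
    // subsequent events from this instance are logged to the console
    audioToGain.debug = true;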

input: WaveShaper = ...

The AudioRange input [-1, 1]

name: string = "AudioToGain"
output: WaveShaper = ...

The GainRange output [0, 1]

version: string = version

The version number (semver)

Accessors

  • get blockTime(): number
  • The duration in seconds of one processing block (128 samples)

    Returns number

    Example

    console.log(Tone.Destination.blockTime);
    
  • get channelCount(): number
  • channelCount is the number of channels used when up-mixing and down-mixing connections to any inputs to the node. The default value is 2 except for specific nodes where its value is specially determined.

    Returns number

  • set channelCount(channelCount): void
  • Parameters

    • channelCount: number

    Returns void

  • get channelCountMode(): ChannelCountMode
  • channelCountMode determines how channels will be counted when up-mixing and down-mixing connections to any inputs to the node. The default value is "max". This attribute has no effect for nodes with no inputs.

    • "max" - computedNumberOfChannels is the maximum of the number of channels of all connections to an input. In this mode channelCount is ignored.
    • "clamped-max" - computedNumberOfChannels is determined as for "max" and then clamped to a maximum value of the given channelCount.
    • "explicit" - computedNumberOfChannels is the exact value as specified by the channelCount.

    Returns ChannelCountMode

  • set channelCountMode(channelCountMode): void
  • Parameters

    • channelCountMode: ChannelCountMode

    Returns void
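
    Example

    A minimal sketch (the values shown are illustrative): count this node's channels explicitly rather than from its incoming connections.

    const audioToGain = new Tone.AudioToGain();
    audioToGain.channelCountMode = "explicit";
    audioToGain.channelCount = 1;
    console.log(audioToGain.channelCount, audioToGain.channelCountMode);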

  • get channelInterpretation(): ChannelInterpretation
  • channelInterpretation determines how individual channels will be treated when up-mixing and down-mixing connections to any inputs to the node. The default value is "speakers".

    Returns ChannelInterpretation

  • set channelInterpretation(channelInterpretation): void
  • Parameters

    • channelInterpretation: ChannelInterpretation

    Returns void

  • get disposed(): boolean
  • Indicates if the instance was disposed. 'Disposing' an instance means that all of the Web Audio nodes that were created for the instance are disconnected and freed for garbage collection.

    Returns boolean
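
    Example

    A minimal sketch: after dispose() the underlying Web Audio nodes are released and this flag reads true.

    const audioToGain = new Tone.AudioToGain();
    audioToGain.dispose();
    console.log(audioToGain.disposed); // true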

  • get numberOfInputs(): number
  • The number of inputs feeding into the AudioNode. For source nodes, this will be 0.

    Returns number

    Example

    const node = new Tone.Gain();
    console.log(node.numberOfInputs);
  • get numberOfOutputs(): number
  • The number of outputs of the AudioNode.

    Returns number

    Example

    const node = new Tone.Gain();
    console.log(node.numberOfOutputs);

Methods

  • chain(...nodes): this
  • Connect the output of this node to the rest of the nodes in series.

    Parameters

    • Rest ...nodes: InputNode[]

    Returns this

    Example

    const player = new Tone.Player("https://tonejs.github.io/audio/drum-samples/handdrum-loop.mp3");
    player.autostart = true;
    const filter = new Tone.AutoFilter(4).start();
    const distortion = new Tone.Distortion(0.5);
    // connect the player to the filter, distortion and then to the master output
    player.chain(filter, distortion, Tone.Destination);
  • connect(destination, outputNum?, inputNum?): this
  • Connect the output of this node to the destination.

    Parameters

    • destination: InputNode
    • outputNum: number = 0
    • inputNum: number = 0

    Returns this
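
    Example

    A minimal sketch (the oscillator source is illustrative): route this node's output to a specific destination.

    const osc = new Tone.Oscillator().start();
    const audioToGain = new Tone.AudioToGain();
    // send the oscillator through AudioToGain and on to the speakers
    osc.connect(audioToGain);
    audioToGain.connect(Tone.Destination);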

  • disconnect(destination?, outputNum?, inputNum?): this
  • Disconnect the output from the given destination, or from all connections if no destination is specified.

    Parameters

    • Optional destination: InputNode
    • outputNum: number = 0
    • inputNum: number = 0

    Returns this
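
    Example

    A minimal sketch: calling disconnect with no arguments detaches the output from everything it was connected to.

    const osc = new Tone.Oscillator().start().toDestination();
    // silence the oscillator by removing its connection to the destination
    osc.disconnect();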

  • fan(...nodes): this
  • Connect the output of this node to the rest of the nodes in parallel.

    Parameters

    • Rest ...nodes: InputNode[]

    Returns this

    Example

    const player = new Tone.Player("https://tonejs.github.io/audio/drum-samples/conga-rhythm.mp3");
    player.autostart = true;
    const pitchShift = new Tone.PitchShift(4).toDestination();
    const filter = new Tone.Filter("G5").toDestination();
    // connect a node to the pitch shift and filter in parallel
    player.fan(pitchShift, filter);
  • get(): ToneWithContextOptions
  • Get the object's attributes.

    Returns ToneWithContextOptions

    Example

    const osc = new Tone.Oscillator();
    console.log(osc.get());
  • immediate(): number
  • Return the current time of the Context clock without any lookAhead.

    Returns number

    Example

    setInterval(() => {
        console.log(Tone.immediate());
    }, 100);
  • now(): number
  • Return the current time of the Context clock plus the lookAhead.

    Returns number

    Example

    setInterval(() => {
        console.log(Tone.now());
    }, 100);
  • set(props): this
  • Set multiple properties at once with an object.

    Parameters

    • props: RecursivePartial<ToneWithContextOptions>

    Returns this

    Example

    const filter = new Tone.Filter().toDestination();
    // set values using an object
    filter.set({
        frequency: "C6",
        type: "highpass"
    });
    const player = new Tone.Player("https://tonejs.github.io/audio/berklee/Analogsynth_octaves_highmid.mp3").connect(filter);
    player.autostart = true;
  • toDestination(): this
  • Connect the output to the context's destination node.

    Returns this

    Example

    const osc = new Tone.Oscillator("C2").start();
    osc.toDestination();
  • toFrequency(freq): number
  • Convert the input to a frequency number.

    Parameters

    Returns number

    Example

    const gain = new Tone.Gain();
    console.log(gain.toFrequency("4n"));
  • toSeconds(time?): number
  • Convert the incoming time to seconds. This is calculated against the current TransportClass bpm.

    Parameters

    Returns number

    Example

    const gain = new Tone.Gain();
    setInterval(() => console.log(gain.toSeconds("4n")), 100);
    // ramp the tempo to 60 bpm over 30 seconds
    Tone.getTransport().bpm.rampTo(60, 30);
  • toString(): string
  • Convert the class to a string.

    Returns string

    Example

    const osc = new Tone.Oscillator();
    console.log(osc.toString());
  • toTicks(time?): number
  • Convert the input time into ticks.

    Parameters

    Returns number

    Example

    const gain = new Tone.Gain();
    console.log(gain.toTicks("4n"));