WaveShaper

Wraps the native Web Audio API WaveShaperNode.


const osc = new Tone.Oscillator().toDestination().start();
// multiply the output of the signal by 2 using the waveshaper's function
const timesTwo = new Tone.WaveShaper((val) => val * 2, 2048).connect(osc.frequency);
const signal = new Tone.Signal(440).connect(timesTwo);

Constructor

new WaveShaper (
mapping?: WaveShaperMapping,

The function used to define the values. The mapping function should take two arguments: the first is the value at the current position and the second is the array position. If the argument is an array, that array will be set as the wave shaping function. The input signal is an AudioRange [-1, 1] value and the output signal can take on any numerical value.

length?: undefined | number
) => WaveShaper

new WaveShaper (
options?: Partial<WaveShaperOptions>
) => WaveShaper
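
A minimal sketch of both constructor forms. The option keys in the second call are assumed to mirror the positional arguments of the first (check WaveShaperOptions for the exact fields):

// soft-clip the input with a tanh curve sampled at 1024 points
const softClip = new Tone.WaveShaper((val) => Math.tanh(3 * val), 1024);
// assumed equivalent form using an options object
const softClip2 = new Tone.WaveShaper({ mapping: (val) => Math.tanh(3 * val), length: 1024 });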

Properties

blockTime #

readonly Seconds

The number of seconds of one processing block (128 samples).


console.log(Tone.Destination.blockTime);

channelCount #

number

channelCount is the number of channels used when up-mixing and down-mixing connections to any inputs to the node. The default value is 2 except for specific nodes where its value is specially determined.

channelCountMode #

ChannelCountMode

channelCountMode determines how channels will be counted when up-mixing and down-mixing connections to any inputs to the node. The default value is "max". This attribute has no effect for nodes with no inputs.

  • "max" - computedNumberOfChannels is the maximum of the number of channels of all connections to an input. In this mode channelCount is ignored.
  • "clamped-max" - computedNumberOfChannels is determined as for "max" and then clamped to a maximum value of the given channelCount.
  • "explicit" - computedNumberOfChannels is the exact value as specified by the channelCount.

channelInterpretation #

ChannelInterpretation

channelInterpretation determines how individual channels will be treated when up-mixing and down-mixing connections to any inputs to the node. The default value is "speakers".
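
A minimal sketch of reading and adjusting these mixing properties (channelCount, channelCountMode and channelInterpretation) on a WaveShaper instance; the values set here are only illustrative:

const shaper = new Tone.WaveShaper((val) => val * 2);
console.log(shaper.channelCount, shaper.channelCountMode, shaper.channelInterpretation);
// force the node to always process exactly two channels
shaper.channelCount = 2;
shaper.channelCountMode = "explicit";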

context #

BaseContext

The context belonging to the node.

curve #

Float32Array | null

The array to set as the waveshaper curve. For linear curves array length does not make much difference, but for complex curves longer arrays will provide smoother interpolation.
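
A minimal sketch of assigning a hand-built curve directly instead of using a mapping function:

const shaper = new Tone.WaveShaper();
const curve = new Float32Array(1024);
for (let i = 0; i < curve.length; i++) {
	// map the i-th array position onto [-1, 1], then shape it with a gentle sine curve
	const x = (i / (curve.length - 1)) * 2 - 1;
	curve[i] = Math.sin((Math.PI / 2) * x);
}
shaper.curve = curve;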

debug #

boolean

Set this debug flag to log all events that happen in this class.

disposed #

readonly boolean

Indicates if the instance was disposed. 'Disposing' an instance means that all of the Web Audio nodes that were created for the instance are disconnected and freed for garbage collection.

input #

WaveShaperNode

The input to the waveshaper node.

numberOfInputs #

readonly number

The number of inputs feeding into the AudioNode. For source nodes, this will be 0.


const node = new Tone.Gain();
console.log(node.numberOfInputs);

numberOfOutputs #

readonly number

The number of outputs of the AudioNode.


const node = new Tone.Gain();
console.log(node.numberOfOutputs);

output #

WaveShaperNode

The output from the waveshaper node.

oversample #

OverSampleType

Specifies what type of oversampling (if any) should be used when applying the shaping curve. Can either be "none", "2x" or "4x".
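
A minimal sketch of enabling oversampling to reduce aliasing introduced by a nonlinear curve:

const dist = new Tone.WaveShaper((val) => Math.tanh(4 * val));
dist.oversample = "4x";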

sampleTime #

readonly Seconds

The duration in seconds of one sample.


console.log(Tone.Transport.sampleTime);

static version #

string

The version number as a semver string.

Methods

chain #

Connect the output of this node to the rest of the nodes in series.


const player = new Tone.Player("https://tonejs.github.io/audio/drum-samples/handdrum-loop.mp3");
player.autostart = true;
const filter = new Tone.AutoFilter(4).start();
const distortion = new Tone.Distortion(0.5);
// connect the player to the filter, distortion and then to the master output
player.chain(filter, distortion, Tone.Destination);
chain (
...nodes: InputNode[]
) => this

connect #

Connect the output of a ToneAudioNode to an AudioParam, AudioNode, or ToneAudioNode.

connect (
destination: InputNode,
outputNum = 0: number,
inputNum = 0: number
) => this
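
A minimal sketch of the optional indices, which select an output/input on multi-channel nodes (Tone.Split is used here purely for illustration):

const split = new Tone.Split();
const shaper = new Tone.WaveShaper((val) => val * 0.5).toDestination();
// connect the right channel (output 1) of the splitter to the shaper
split.connect(shaper, 1);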

disconnect #

Disconnect the output.

disconnect (
destination?: InputNode,
outputNum = 0: number,
inputNum = 0: number
) => this
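
A minimal sketch of targeted versus full disconnection:

const shaper = new Tone.WaveShaper((val) => val * 2).toDestination();
// remove only the connection to the destination
shaper.disconnect(Tone.Destination);
// with no arguments, remove all remaining connections
shaper.disconnect();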

dispose #

Clean up.

dispose ( ) => this
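
A minimal sketch; after disposal the disposed flag reads true:

const shaper = new Tone.WaveShaper((val) => val * 2).toDestination();
shaper.dispose();
console.log(shaper.disposed);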

fan #

Connect the output of this node to the rest of the nodes in parallel.


const player = new Tone.Player("https://tonejs.github.io/audio/drum-samples/conga-rhythm.mp3");
player.autostart = true;
const pitchShift = new Tone.PitchShift(4).toDestination();
const filter = new Tone.Filter("G5").toDestination();
// connect a node to the pitch shift and filter in parallel
player.fan(pitchShift, filter);
fan (
...nodes: InputNode[]
) => this

get #

Get the object's attributes.


const osc = new Tone.Oscillator();
console.log(osc.get());
get ( ) => WaveShaperOptions

static getDefaults #

Returns all of the default options belonging to the class.

getDefaults ( ) => WaveShaperOptions
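
A minimal sketch of inspecting the class defaults:

console.log(Tone.WaveShaper.getDefaults());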

immediate #

Return the current time of the Context clock without any lookAhead.


setInterval(() => {
	console.log(Tone.immediate());
}, 100);
immediate ( ) => Seconds

now #

Return the current time of the Context clock plus the lookAhead.


setInterval(() => {
	console.log(Tone.now());
}, 100);
now ( ) => Seconds

set #

Set multiple properties at once with an object.


const filter = new Tone.Filter().toDestination();
// set values using an object
filter.set({
	frequency: "C6",
	type: "highpass"
});
const player = new Tone.Player("https://tonejs.github.io/audio/berklee/Analogsynth_octaves_highmid.mp3").connect(filter);
player.autostart = true;
set ( ) => this

setMap #

Uses a mapping function to set the value of the curve.


const shaper = new Tone.WaveShaper();
// map the input signal from [-1, 1] to [0, 10]
shaper.setMap((val, index) => (val + 1) * 5);
setMap (
mapping: WaveShaperMappingFn,

The function used to define the values. The mapping function takes two arguments: the first is the value at the current position, which goes from -1 to 1 over the number of elements in the curve array. The second argument is the array position.

length = 1024: number
) => this

toDestination #

Connect the output to the context's destination node.


const osc = new Tone.Oscillator("C2").start();
osc.toDestination();
toDestination ( ) => this

toFrequency #

Convert the input to a frequency number.


const gain = new Tone.Gain();
console.log(gain.toFrequency("4n"));
toFrequency (
freq: Frequency
) => Hertz

toMaster # DEPRECATED

Connect the output to the context's destination node. See toDestination.

toMaster ( ) => this

toSeconds #

Convert the incoming time to seconds. This is calculated against the current Tone.Transport bpm.


const gain = new Tone.Gain();
setInterval(() => console.log(gain.toSeconds("4n")), 100);
// ramp the tempo to 60 bpm over 30 seconds
Tone.getTransport().bpm.rampTo(60, 30);
toSeconds (
time?: Time
) => Seconds

toString #

Convert the class to a string.


const osc = new Tone.Oscillator();
console.log(osc.toString());
toString ( ) => string

toTicks #

Convert the input time into ticks.


const gain = new Tone.Gain();
console.log(gain.toTicks("4n"));
toTicks (
time?: Time | TimeClass
) => Ticks