tone
- Version 15.0.4
- Published
- 5.4 MB
- 2 dependencies
- MIT license
Install
npm i tone
yarn add tone
pnpm add tone
Overview
A Web Audio framework for making interactive music in the browser.
Index
Variables
Functions
- connect()
- connectSeries()
- connectSignal()
- dbToGain()
- disconnect()
- fanIn()
- Frequency()
- ftom()
- gainToDb()
- getContext()
- getDestination()
- getDraw()
- getListener()
- getTransport()
- immediate()
- intervalToFrequencyRatio()
- isArray()
- isBoolean()
- isDefined()
- isFunction()
- isNote()
- isNumber()
- isObject()
- isString()
- isUndef()
- loaded()
- Midi()
- mtof()
- now()
- Offline()
- setContext()
- start()
- Ticks()
- Time()
- TransportTime()
Classes
BaseContext
- addAudioWorkletModule()
- clearInterval()
- clearTimeout()
- createAnalyser()
- createAudioWorkletNode()
- createBiquadFilter()
- createBuffer()
- createBufferSource()
- createChannelMerger()
- createChannelSplitter()
- createConstantSource()
- createConvolver()
- createDelay()
- createDynamicsCompressor()
- createGain()
- createIIRFilter()
- createMediaElementSource()
- createMediaStreamDestination()
- createMediaStreamSource()
- createOscillator()
- createPanner()
- createPeriodicWave()
- createStereoPanner()
- createWaveShaper()
- currentTime
- decodeAudioData()
- destination
- draw
- getConstant()
- immediate()
- isOffline
- latencyHint
- listener
- lookAhead
- now()
- rawContext
- resume()
- sampleRate
- setInterval()
- setTimeout()
- state
- toJSON()
- transport
Context
- addAudioWorkletModule()
- clearInterval()
- clearTimeout()
- clockSource
- close()
- createAnalyser()
- createAudioWorkletNode()
- createBiquadFilter()
- createBuffer()
- createBufferSource()
- createChannelMerger()
- createChannelSplitter()
- createConstantSource()
- createConvolver()
- createDelay()
- createDynamicsCompressor()
- createGain()
- createIIRFilter()
- createMediaElementSource()
- createMediaStreamDestination()
- createMediaStreamSource()
- createOscillator()
- createPanner()
- createPeriodicWave()
- createStereoPanner()
- createWaveShaper()
- currentTime
- decodeAudioData()
- destination
- dispose()
- draw
- getConstant()
- getDefaults()
- immediate()
- isOffline
- latencyHint
- listener
- lookAhead
- name
- now()
- rawContext
- resume()
- sampleRate
- setInterval()
- setTimeout()
- state
- transport
- updateInterval
- workletsAreReady()
Param
- apply()
- cancelAndHoldAtTime()
- cancelScheduledValues()
- convert
- defaultValue
- dispose()
- exponentialApproachValueAtTime()
- exponentialRampTo()
- exponentialRampToValueAtTime()
- getDefaults()
- getValueAtTime()
- input
- linearRampTo()
- linearRampToValueAtTime()
- maxValue
- minValue
- name
- overridden
- rampTo()
- setParam()
- setRampPoint()
- setTargetAtTime()
- setValueAtTime()
- setValueCurveAtTime()
- targetRampTo()
- units
- value
Signal
- apply()
- cancelAndHoldAtTime()
- cancelScheduledValues()
- connect()
- convert
- dispose()
- exponentialApproachValueAtTime()
- exponentialRampTo()
- exponentialRampToValueAtTime()
- getDefaults()
- getValueAtTime()
- input
- linearRampTo()
- linearRampToValueAtTime()
- maxValue
- minValue
- name
- output
- overridden
- override
- rampTo()
- setRampPoint()
- setTargetAtTime()
- setValueAtTime()
- setValueCurveAtTime()
- targetRampTo()
- units
- value
Interfaces
Type Aliases
- AMSynthOptions
- AnalyserType
- AutomationEvent
- BaseAudioContextSubset
- BasicPlaybackState
- ContextLatencyHint
- DCMeterOptions
- EnvelopeCurve
- ExcludedFromBaseAudioContext
- FilterOptions
- FilterRollOff
- FrequencyUnit
- GreaterThanOptions
- GreaterThanZeroOptions
- InputNode
- LFOOptions
- MidSideMergeOptions
- MidSideSplitOptions
- MonoOptions
- NoiseType
- OmniOscillatorOptions
- OmniOscSourceType
- OnePoleFilterType
- OutputNode
- PlaybackState
- ToneAudioNodeOptions
- ToneBufferSourceCurve
- ToneEventCallback
- ToneOscillatorType
- WaveShaperMappingFn
Namespaces
Variables
variable Buffer
const Buffer: typeof ToneAudioBuffer;
Deprecated
Use ToneAudioBuffer
variable Buffers
const Buffers: typeof ToneAudioBuffers;
Deprecated
Use ToneAudioBuffers
variable BufferSource
const BufferSource: typeof ToneBufferSource;
Deprecated
Use ToneBufferSource
variable context
const context: BaseContext;
variable Destination
const Destination: DestinationClass;
The Destination (output) belonging to the global Tone.js Context.
See Also
DestinationClass Core
Deprecated
Use getDestination instead
variable Draw
const Draw: DrawClass;
variable Listener
const Listener: ListenerClass;
The ListenerClass belonging to the global Tone.js Context. Core
Deprecated
Use getListener instead
variable Master
const Master: DestinationClass;
Deprecated
Use getDestination instead
variable Transport
const Transport: TransportClass;
The Transport object belonging to the global Tone.js Context.
See Also
TransportClass Core
Deprecated
Use getTransport instead
variable version
const version: string;
Functions
function connect
connect: ( srcNode: OutputNode, dstNode: InputNode, outputNumber?: number, inputNumber?: number) => void;
Connect two nodes together so that signal flows from the first node to the second. Optionally specify the input and output channels.
Parameter srcNode
The source node
Parameter dstNode
The destination node
Parameter outputNumber
The output channel of the srcNode
Parameter inputNumber
The input channel of the dstNode
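For example, the explicit routing that node `.connect()` methods perform can be written with the free function (browser sketch; the oscillator/gain chain is illustrative, using classes documented elsewhere on this page):

```javascript
import * as Tone from "tone";

// Route an oscillator through a gain stage to the speakers.
const osc = new Tone.Oscillator(440, "sine");
const gain = new Tone.Gain(0.5);

Tone.connect(osc, gain);
Tone.connect(gain, Tone.getDestination());
osc.start();
```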
function connectSeries
connectSeries: (...nodes: InputNode[]) => void;
Connect all of the arguments together in series.
Parameter nodes
function connectSignal
connectSignal: ( signal: OutputNode, destination: InputNode, outputNum?: number, inputNum?: number) => void;
When connecting from a signal, it's necessary to zero out the destination node if that node is also a signal. If the destination is not 0, then the values will be summed. This method ensures that the output of the destination signal will be the same as the source signal, making the destination signal a pass-through node.
Parameter signal
The output signal to connect from
Parameter destination
the destination to connect to
Parameter outputNum
the optional output number
Parameter inputNum
the input number
function dbToGain
dbToGain: (db: Decibels) => GainFactor;
Convert decibels into gain.
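The conversion is the standard decibel-to-amplitude mapping; a plain-JavaScript sketch of the formula (an assumption about the math, not copied from the Tone.js source):

```javascript
// Standard decibel-to-amplitude-gain conversion: gain = 10^(db / 20).
function dbToGain(db) {
    return Math.pow(10, db / 20);
}

console.log(dbToGain(0));   // 1
console.log(dbToGain(-20)); // 0.1
```

By this formula -6 dB comes out to roughly 0.5, which is why -6 dB is commonly described as half amplitude.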
function disconnect
disconnect: ( srcNode: OutputNode, dstNode?: InputNode, outputNumber?: number, inputNumber?: number) => void;
Disconnect a node from all nodes or optionally include a destination node and input/output channels.
Parameter srcNode
The source node
Parameter dstNode
The destination node
Parameter outputNumber
The output channel of the srcNode
Parameter inputNumber
The input channel of the dstNode
function fanIn
fanIn: (...nodes: OutputNode[]) => void;
Connect the output of one or more source nodes to a single destination node
Parameter nodes
One or more source nodes followed by one destination node
Example 1
const player = new Tone.Player("https://tonejs.github.io/audio/drum-samples/conga-rhythm.mp3");
const player1 = new Tone.Player("https://tonejs.github.io/audio/drum-samples/conga-rhythm.mp3");
const filter = new Tone.Filter("G5").toDestination();
// connect nodes to a common destination
Tone.fanIn(player, player1, filter);
function Frequency
Frequency: ( value?: TimeValue | Frequency, units?: FrequencyUnit) => FrequencyClass;
Convert a value into a FrequencyClass object. Unit
Example 1
const midi = Tone.Frequency("C3").toMidi();
console.log(midi);
Example 2
const hertz = Tone.Frequency(38, "midi").toFrequency();
console.log(hertz);
function ftom
ftom: (frequency: Hertz) => MidiNote;
Convert a frequency value to a MIDI note.
Parameter frequency
The frequency value to convert.
Example 1
Tone.ftom(440); // returns 69
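Under the hood this is the standard equal-temperament mapping; a sketch of the math assuming A4 = 440 Hz (the real implementation may differ in rounding details):

```javascript
// MIDI note from frequency: 69 + 12 * log2(f / 440), rounded to the
// nearest integer (equal temperament, A4 = 440 Hz assumed).
function ftom(frequency) {
    return Math.round(69 + 12 * Math.log2(frequency / 440));
}

console.log(ftom(440));    // 69 (A4)
console.log(ftom(261.63)); // 60 (middle C)
```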
function gainToDb
gainToDb: (gain: GainFactor) => Decibels;
Convert gain to decibels.
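This is the inverse of dbToGain; a plain-JavaScript sketch of the standard formula (an assumption about the math, not copied from the Tone.js source):

```javascript
// Amplitude gain to decibels: db = 20 * log10(gain).
function gainToDb(gain) {
    return 20 * Math.log10(gain);
}

console.log(gainToDb(1));   // 0
console.log(gainToDb(0.5)); // about -6.02
```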
function getContext
getContext: () => BaseContext;
Returns the default system-wide Context Core
function getDestination
getDestination: () => DestinationClass;
The Destination (output) belonging to the global Tone.js Context.
See Also
DestinationClass Core
function getDraw
getDraw: () => DrawClass;
Get the singleton attached to the global context. Draw is used to synchronize the draw frame with the Transport's callbacks.
See Also
DrawClass Core
function getListener
getListener: () => ListenerClass;
The ListenerClass belonging to the global Tone.js Context. Core
function getTransport
getTransport: () => TransportClass;
The Transport object belonging to the global Tone.js Context.
See Also
TransportClass Core
function immediate
immediate: () => Seconds;
The current audio context time of the global Context without the Context.lookAhead
See Also
Context.immediate Core
function intervalToFrequencyRatio
intervalToFrequencyRatio: (interval: Interval) => number;
Convert an interval (in semitones) to a frequency ratio.
Parameter interval
the number of semitones above the base note
Example 1
Tone.intervalToFrequencyRatio(0); // 1
Tone.intervalToFrequencyRatio(12); // 2
Tone.intervalToFrequencyRatio(-12); // 0.5
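The values above follow from the equal-temperament relation; a sketch of the underlying formula (an assumption about the math, not copied from the Tone.js source):

```javascript
// Semitones to frequency ratio: ratio = 2^(interval / 12)
// (equal temperament assumed).
function intervalToFrequencyRatio(interval) {
    return Math.pow(2, interval / 12);
}

console.log(intervalToFrequencyRatio(12)); // 2 (one octave up)
console.log(intervalToFrequencyRatio(7));  // about 1.498 (a perfect fifth)
```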
function isArray
isArray: (arg: any) => arg is any[];
Test if the argument is an Array
function isBoolean
isBoolean: (arg: any) => arg is boolean;
Test if the argument is a boolean.
function isDefined
isDefined: <T>(arg: T | undefined) => arg is T;
Test if the arg is not undefined
function isFunction
isFunction: (arg: any) => arg is (a: any) => any;
Test if the arg is a function
function isNote
isNote: (arg: any) => arg is Note;
The declared return type narrows arg to the literal union of every note name in scientific pitch notation: a letter A-G, an optional accidental (bb, b, # or x), and an octave from -4 through 11, e.g. 'C4' | 'F#-1' | 'Bb7'. The full union is elided here for brevity.
Test if the argument is in the form of a note in scientific pitch notation. e.g. "C4"
function isNumber
isNumber: (arg: any) => arg is number;
Test if the argument is a number.
function isObject
isObject: (arg: any) => arg is object;
Test if the given argument is an object literal (i.e. {}).
function isString
isString: (arg: any) => arg is string;
Test if the argument is a string.
function isUndef
isUndef: (arg: any) => arg is undefined;
Test if the arg is undefined
function loaded
loaded: () => Promise<void>;
Promise which resolves when all of the loading promises are resolved. Alias for static ToneAudioBuffer.loaded method. Core
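A typical pattern is to gate playback on this promise (browser sketch; the sample URL is reused from the fanIn example above for illustration):

```javascript
import * as Tone from "tone";

// Start playback only once every buffer has finished loading.
const player = new Tone.Player(
    "https://tonejs.github.io/audio/drum-samples/conga-rhythm.mp3"
).toDestination();

await Tone.loaded();
player.start();
```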
function Midi
Midi: (value?: TimeValue, units?: FrequencyUnit) => MidiClass;
Convert a value into a MidiClass object. Unit
function mtof
mtof: (midi: MidiNote) => Hertz;
Convert a MIDI note to frequency value.
Parameter midi
The MIDI number to convert. Returns the corresponding frequency value.
Example 1
Tone.mtof(69); // 440
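This is the inverse of ftom; a sketch of the equal-temperament formula assuming A4 = 440 Hz (an assumption about the math, not copied from the Tone.js source):

```javascript
// Frequency from MIDI note: 440 * 2^((midi - 69) / 12)
// (equal temperament, A4 = 440 Hz assumed).
function mtof(midi) {
    return 440 * Math.pow(2, (midi - 69) / 12);
}

console.log(mtof(69)); // 440
console.log(mtof(81)); // 880
```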
function now
now: () => Seconds;
The current audio context time of the global BaseContext.
See Also
Context.now Core
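A common scheduling pattern built on now(): capture one time reference so related events share the same "now" (browser sketch; Tone.Synth as used in examples elsewhere on this page):

```javascript
import * as Tone from "tone";

// Capture the current (look-ahead adjusted) context time once,
// then schedule both events relative to it.
const synth = new Tone.Synth().toDestination();
const t = Tone.now();
synth.triggerAttack("C4", t);
synth.triggerRelease(t + 0.5); // release half a second later
```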
function Offline
Offline: ( callback: (context: OfflineContext) => Promise<void> | void, duration: Seconds, channels?: number, sampleRate?: number) => Promise<ToneAudioBuffer>;
Generate a buffer by rendering all of the Tone.js code within the callback using the OfflineAudioContext. The OfflineAudioContext is capable of rendering much faster than real time in many cases. The callback function also passes in an offline instance of Context which can be used to schedule events along the Transport.
Parameter callback
All Tone.js nodes which are created and scheduled within this callback are recorded into the output Buffer.
Parameter duration
The amount of time to record for. Returns a promise which resolves with the ToneAudioBuffer of the recorded output.
Example 1
// render 2 seconds of the oscillator
Tone.Offline(() => {
    // only nodes created in this callback will be recorded
    const oscillator = new Tone.Oscillator().toDestination().start(0);
}, 2).then((buffer) => {
    // do something with the output buffer
    console.log(buffer);
});
Example 2
// can also schedule events along the Transport
// using the passed in Offline Transport
Tone.Offline(({ transport }) => {
    const osc = new Tone.Oscillator().toDestination();
    transport.schedule(time => {
        osc.start(time).stop(time + 0.1);
    }, 1);
    // make sure to start the transport
    transport.start(0.2);
}, 4).then((buffer) => {
    // do something with the output buffer
    console.log(buffer);
});
Core
function setContext
setContext: ( context: BaseContext | AnyAudioContext, disposeOld?: boolean) => void;
Set the default audio context
Parameter context
Parameter disposeOld
Pass true to dispose the old context when it is no longer needed. Core
function start
start: () => Promise<void>;
Most browsers will not play _any_ audio until a user clicks something (like a play button). Invoke this method on a click or keypress event handler to start the audio context. More about the Autoplay policy [here](https://developers.google.com/web/updates/2017/09/autoplay-policy-changes#webaudio)
Example 1
document.querySelector("button").addEventListener("click", async () => {
    await Tone.start();
    console.log("context started");
});
Core
function Ticks
Ticks: (value?: TimeValue, units?: TimeBaseUnit) => TicksClass;
Convert a time representation to ticks Unit
function Time
Time: (value?: TimeValue, units?: TimeBaseUnit) => TimeClass<Seconds>;
Create a TimeClass from a time string or number. The time is computed against the global Tone.Context. To use a specific context, use TimeClass
Parameter value
A value which represents time
Parameter units
The value's units if they can't be inferred by the value. Unit
Example 1
const time = Tone.Time("4n").toSeconds();
console.log(time);
Example 2
const note = Tone.Time(1).toNotation();
console.log(note);
Example 3
const freq = Tone.Time(0.5).toFrequency();
console.log(freq);
function TransportTime
TransportTime: (value?: TimeValue, units?: TimeBaseUnit) => TransportTimeClass;
TransportTime is a time along the Transport's timeline. It is similar to Tone.Time, but instead of evaluating against the AudioContext's clock, it is evaluated against the Transport's position. See [TransportTime wiki](https://github.com/Tonejs/Tone.js/wiki/TransportTime). Unit
Classes
class Abs
class Abs extends SignalOperator<ToneAudioNodeOptions> {}
Return the absolute value of an incoming signal.
Example 1
return Tone.Offline(() => {
    const abs = new Tone.Abs().toDestination();
    const signal = new Tone.Signal(1);
    signal.rampTo(-1, 0.5);
    signal.connect(abs);
}, 0.5, 1);
Signal
class Add
class Add extends Signal {}
Add a signal and a number or two signals. When no value is passed into the constructor, Tone.Add will sum the input and addend signals. If a value is passed into the constructor, it will be added to the input.
Example 1
return Tone.Offline(() => {
    const add = new Tone.Add(2).toDestination();
    add.addend.setValueAtTime(1, 0.2);
    const signal = new Tone.Signal(2);
    // add a signal and a scalar
    signal.connect(add);
    signal.setValueAtTime(1, 0.1);
}, 0.5, 1);
Signal
constructor
constructor(value?: number);
Parameter value
If no value is provided, will sum the input and addend.
constructor
constructor(options?: Partial<SignalOptions<'number'>>);
property addend
readonly addend: Param<'number'>;
The value which is added to the input signal
property input
readonly input: Gain<'gain'>;
property name
readonly name: string;
property output
readonly output: Gain<'gain'>;
property override
override: boolean;
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => SignalOptions<'number'>;
class AMOscillator
class AMOscillator extends Source<AMOscillatorOptions> implements ToneOscillatorInterface {}
An amplitude modulated oscillator node. It is implemented with two oscillators, one which modulates the other's amplitude through a gain node.
+-------------+       +----------+
| Carrier Osc +>------> GainNode |
+-------------+       |          +--->Output
                  +---> gain     |
+---------------+ |   +----------+
| Modulator Osc +>---+
+---------------+
Example 1
return Tone.Offline(() => {
    const amOsc = new Tone.AMOscillator(30, "sine", "square").toDestination().start();
}, 0.2, 1);
Source
constructor
constructor( frequency?: Frequency, type?: ToneOscillatorType, modulationType?: ToneOscillatorType);
Parameter frequency
The starting frequency of the oscillator.
Parameter type
The type of the carrier oscillator.
Parameter modulationType
The type of the modulator oscillator.
constructor
constructor(options?: Partial<AMConstructorOptions>);
property baseType
baseType: OscillatorType;
property detune
readonly detune: Signal<'cents'>;
property frequency
readonly frequency: Signal<'frequency'>;
property harmonicity
readonly harmonicity: Signal<'positive'>;
Harmonicity is the frequency ratio between the carrier and the modulator oscillators. A harmonicity of 1 gives both oscillators the same frequency. Harmonicity = 2 means a change of an octave.
Example 1
const amOsc = new Tone.AMOscillator("D2").toDestination().start();
Tone.Transport.scheduleRepeat(time => {
    amOsc.harmonicity.setValueAtTime(1, time);
    amOsc.harmonicity.setValueAtTime(0.5, time + 0.5);
    amOsc.harmonicity.setValueAtTime(1.5, time + 1);
    amOsc.harmonicity.setValueAtTime(1, time + 2);
    amOsc.harmonicity.linearRampToValueAtTime(2, time + 4);
}, 4);
Tone.Transport.start();
property modulationType
modulationType: ToneOscillatorType;
The type of the modulator oscillator
property name
readonly name: string;
property partialCount
partialCount: number;
property partials
partials: number[];
property phase
phase: number;
property type
type: ToneOscillatorType;
The type of the carrier oscillator
method asArray
asArray: (length?: number) => Promise<Float32Array>;
method dispose
dispose: () => this;
Clean up.
method getDefaults
static getDefaults: () => AMOscillatorOptions;
class AmplitudeEnvelope
class AmplitudeEnvelope extends Envelope {}
AmplitudeEnvelope is a Tone.Envelope connected to a gain node. Unlike Tone.Envelope, which outputs the envelope's value, AmplitudeEnvelope accepts an audio signal as the input and will apply the envelope to the amplitude of the signal. Read more about ADSR Envelopes on [Wikipedia](https://en.wikipedia.org/wiki/Synthesizer#ADSR_envelope).
Example 1
return Tone.Offline(() => {
    const ampEnv = new Tone.AmplitudeEnvelope({
        attack: 0.1,
        decay: 0.2,
        sustain: 1.0,
        release: 0.8
    }).toDestination();
    // create an oscillator and connect it
    const osc = new Tone.Oscillator().connect(ampEnv).start();
    // trigger the envelope's attack and release "8t" apart
    ampEnv.triggerAttackRelease("8t");
}, 1.5, 1);
Component
constructor
constructor(attack?: Time, decay?: Time, sustain?: number, release?: Time);
Parameter attack
The amount of time it takes for the envelope to go from 0 to its maximum value.
Parameter decay
The period of time after the attack that it takes for the envelope to fall to the sustain value. Value must be greater than 0.
Parameter sustain
The percent of the maximum value that the envelope rests at until the release is triggered.
Parameter release
The amount of time after the release is triggered it takes to reach 0. Value must be greater than 0.
constructor
constructor(options?: Partial<EnvelopeOptions>);
property input
input: Gain<'gain'>;
property name
readonly name: string;
property output
output: Gain<'gain'>;
method dispose
dispose: () => this;
Clean up
class AMSynth
class AMSynth extends ModulationSynth<AMSynthOptions> {}
AMSynth uses the output of one Tone.Synth to modulate the amplitude of another Tone.Synth. The harmonicity (the ratio between the two signals) affects the timbre of the output signal greatly. Read more about Amplitude Modulation Synthesis on [SoundOnSound](https://web.archive.org/web/20160404103653/http://www.soundonsound.com:80/sos/mar00/articles/synthsecrets.htm).
Example 1
const synth = new Tone.AMSynth().toDestination();
synth.triggerAttackRelease("C4", "4n");
Instrument
constructor
constructor(options?: RecursivePartial<ModulationSynthOptions>);
property name
readonly name: string;
method dispose
dispose: () => this;
class Analyser
class Analyser extends ToneAudioNode<AnalyserOptions> {}
Wrapper around the native Web Audio's [AnalyserNode](http://webaudio.github.io/web-audio-api/#idl-def-AnalyserNode). Extracts FFT or Waveform data from the incoming signal. Component
constructor
constructor(type?: AnalyserType, size?: number);
Parameter type
The return type of the analysis, either "fft", or "waveform".
Parameter size
The size of the FFT. This must be a power of two in the range 16 to 16384.
constructor
constructor(options?: Partial<AnalyserOptions>);
property channels
readonly channels: number;
The number of channels the analyser does the analysis on. Channel separation is done using Split
property input
readonly input: InputNode;
property name
readonly name: string;
property output
readonly output: OutputNode;
property size
size: number;
The size of analysis. This must be a power of two in the range 16 to 16384.
property smoothing
smoothing: number;
The amount of smoothing applied between analysis frames; 0 represents no time averaging with the last analysis frame.
property type
type: AnalyserType;
The analysis function returned by analyser.getValue(), either "fft" or "waveform".
method dispose
dispose: () => this;
Clean up.
method getDefaults
static getDefaults: () => AnalyserOptions;
method getValue
getValue: () => Float32Array | Float32Array[];
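A sketch of polling the analyser from a render loop (browser only; the oscillator source and animation-frame loop are illustrative):

```javascript
import * as Tone from "tone";

// 256-bin waveform analyser fed by an oscillator.
const analyser = new Tone.Analyser("waveform", 256);
const osc = new Tone.Oscillator().connect(analyser).start();

function draw() {
    const values = analyser.getValue(); // Float32Array of 256 samples
    // ...render the values to a canvas, meter, etc...
    requestAnimationFrame(draw);
}
draw();
```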
class AudioToGain
class AudioToGain extends SignalOperator<ToneAudioNodeOptions> {}
AudioToGain converts an input in AudioRange [-1,1] to NormalRange [0,1].
See Also
GainToAudio. Signal
class AutoFilter
class AutoFilter extends LFOEffect<AutoFilterOptions> {}
AutoFilter is a Tone.Filter with a Tone.LFO connected to the filter cutoff frequency. Setting the LFO rate and depth allows for control over the filter modulation rate and depth.
Example 1
// create an autofilter and start its LFO
const autoFilter = new Tone.AutoFilter("4n").toDestination().start();
// route an oscillator through the filter and start it
const oscillator = new Tone.Oscillator().connect(autoFilter).start();
Effect
constructor
constructor(frequency?: Frequency, baseFrequency?: Frequency, octaves?: number);
Parameter frequency
The rate of the LFO.
Parameter baseFrequency
The lower value of the LFOs oscillation
Parameter octaves
The number of octaves above the baseFrequency
constructor
constructor(options?: Partial<AutoFilterOptions>);
property baseFrequency
baseFrequency: Frequency;
The minimum value of the filter's cutoff frequency.
property filter
readonly filter: Filter;
The filter node
property name
readonly name: string;
property octaves
octaves: number;
The number of octaves above the baseFrequency that the filter's cutoff frequency will sweep.
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => AutoFilterOptions;
class AutoPanner
class AutoPanner extends LFOEffect<AutoPannerOptions> {}
AutoPanner is a Panner with an LFO connected to the pan amount. [Related Reading](https://www.ableton.com/en/blog/autopan-chopper-effect-and-more-liveschool/).
Example 1
// create an autopanner and start it
const autoPanner = new Tone.AutoPanner("4n").toDestination().start();
// route an oscillator through the panner and start it
const oscillator = new Tone.Oscillator().connect(autoPanner).start();
Effect
constructor
constructor(frequency?: Frequency);
Parameter frequency
Rate of left-right oscillation.
constructor
constructor(options?: Partial<AutoPannerOptions>);
property name
readonly name: string;
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => AutoPannerOptions;
class AutoWah
class AutoWah extends Effect<AutoWahOptions> {}
AutoWah connects a Follower to a Filter. The frequency of the filter follows the input amplitude curve. Inspiration from [Tuna.js](https://github.com/Dinahmoe/tuna).
Example 1
const autoWah = new Tone.AutoWah(50, 6, -30).toDestination();
// initialize the synth and connect to autowah
const synth = new Tone.Synth().connect(autoWah);
// Q value influences the effect of the wah - default is 2
autoWah.Q.value = 6;
// more audible on higher notes
synth.triggerAttackRelease("C4", "8n");
Effect
constructor
constructor(baseFrequency?: Frequency, octaves?: number, sensitivity?: number);
Parameter baseFrequency
The frequency the filter is set to at the low point of the wah
Parameter octaves
The number of octaves above the baseFrequency the filter will sweep to when fully open.
Parameter sensitivity
The decibel threshold sensitivity for the incoming signal. Normal range of -40 to 0.
constructor
constructor(options?: Partial<AutoWahOptions>);
property baseFrequency
baseFrequency: Frequency;
The base frequency from which the sweep starts.
property follower
follower: Time;
The follower's smoothing time
property gain
readonly gain: Signal<'decibels'>;
The gain of the filter.
property name
readonly name: string;
property octaves
octaves: number;
The number of octaves that the filter will sweep above the baseFrequency.
property Q
readonly Q: Signal<'positive'>;
The quality of the filter.
property sensitivity
sensitivity: number;
The sensitivity controlling how responsive the filter is to the input signal.
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => AutoWahOptions;
class BaseContext
abstract class BaseContext extends Emitter<'statechange' | 'tick'> implements BaseAudioContextSubset {}
property currentTime
readonly currentTime: number;
property destination
readonly destination: Destination;
property draw
readonly draw: Draw;
property isOffline
readonly isOffline: boolean;
property latencyHint
abstract latencyHint: number | AudioContextLatencyCategory;
property listener
readonly listener: Listener;
property lookAhead
abstract lookAhead: number;
property rawContext
readonly rawContext: AnyAudioContext;
property sampleRate
readonly sampleRate: number;
property state
readonly state: AudioContextState;
property transport
readonly transport: Transport;
method addAudioWorkletModule
abstract addAudioWorkletModule: (_url: string) => Promise<void>;
method clearInterval
abstract clearInterval: (_id: number) => this;
method clearTimeout
abstract clearTimeout: (_id: number) => this;
method createAnalyser
abstract createAnalyser: () => AnalyserNode;
method createAudioWorkletNode
abstract createAudioWorkletNode: ( _name: string, _options?: Partial<AudioWorkletNodeOptions>) => AudioWorkletNode;
method createBiquadFilter
abstract createBiquadFilter: () => BiquadFilterNode;
method createBuffer
abstract createBuffer: ( _numberOfChannels: number, _length: number, _sampleRate: number) => AudioBuffer;
method createBufferSource
abstract createBufferSource: () => AudioBufferSourceNode;
method createChannelMerger
abstract createChannelMerger: ( _numberOfInputs?: number | undefined) => ChannelMergerNode;
method createChannelSplitter
abstract createChannelSplitter: ( _numberOfOutputs?: number | undefined) => ChannelSplitterNode;
method createConstantSource
abstract createConstantSource: () => ConstantSourceNode;
method createConvolver
abstract createConvolver: () => ConvolverNode;
method createDelay
abstract createDelay: (_maxDelayTime?: number | undefined) => DelayNode;
method createDynamicsCompressor
abstract createDynamicsCompressor: () => DynamicsCompressorNode;
method createGain
abstract createGain: () => GainNode;
method createIIRFilter
abstract createIIRFilter: ( _feedForward: number[] | Float32Array, _feedback: number[] | Float32Array) => IIRFilterNode;
method createMediaElementSource
abstract createMediaElementSource: ( _element: HTMLMediaElement) => MediaElementAudioSourceNode;
method createMediaStreamDestination
abstract createMediaStreamDestination: () => MediaStreamAudioDestinationNode;
method createMediaStreamSource
abstract createMediaStreamSource: ( _stream: MediaStream) => MediaStreamAudioSourceNode;
method createOscillator
abstract createOscillator: () => OscillatorNode;
method createPanner
abstract createPanner: () => PannerNode;
method createPeriodicWave
abstract createPeriodicWave: ( _real: number[] | Float32Array, _imag: number[] | Float32Array, _constraints?: PeriodicWaveConstraints | undefined) => PeriodicWave;
method createStereoPanner
abstract createStereoPanner: () => StereoPannerNode;
method createWaveShaper
abstract createWaveShaper: () => WaveShaperNode;
method decodeAudioData
abstract decodeAudioData: (_audioData: ArrayBuffer) => Promise<AudioBuffer>;
method getConstant
abstract getConstant: (_val: number) => AudioBufferSourceNode;
method immediate
abstract immediate: () => Seconds;
method now
abstract now: () => Seconds;
method resume
abstract resume: () => Promise<void>;
method setInterval
abstract setInterval: ( _fn: (...args: any[]) => void, _interval: Seconds) => number;
method setTimeout
abstract setTimeout: ( _fn: (...args: any[]) => void, _timeout: Seconds) => number;
method toJSON
toJSON: () => Record<string, any>;
class BiquadFilter
class BiquadFilter extends ToneAudioNode<BiquadFilterOptions> {}
Thin wrapper around the native Web Audio [BiquadFilterNode](https://webaudio.github.io/web-audio-api/#biquadfilternode). BiquadFilter is similar to Filter but doesn't have the option to set the "rolloff" value. Component
constructor
constructor(frequency?: Frequency, type?: BiquadFilterType);
Parameter frequency
The cutoff frequency of the filter.
Parameter type
The type of filter.
constructor
constructor(options?: Partial<BiquadFilterOptions>);
property detune
readonly detune: Param<'cents'>;
A detune value, in cents, for the frequency.
property frequency
readonly frequency: Param<'frequency'>;
The frequency of the filter
property gain
readonly gain: Param<'decibels'>;
The gain of the filter. Its value is in dB units. The gain is only used for lowshelf, highshelf, and peaking filters.
property input
readonly input: BiquadFilterNode;
property name
readonly name: string;
property output
readonly output: BiquadFilterNode;
property Q
readonly Q: Param<'number'>;
The Q factor of the filter. For lowpass and highpass filters the Q value is interpreted as a dB value. For these filters the nominal range is [-Qlim, Qlim], where Qlim is the largest value for which 10^(Q/20) does not overflow; this is approximately 770.63678. For the bandpass, notch, allpass, and peaking filters, this value is a linear value. The value is related to the bandwidth of the filter and hence should be a positive value. The nominal range is [0, 3.4028235e38], the upper limit being the most positive single-precision float. This is not used for the lowshelf and highshelf filters.
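That dB limit can be recomputed from the most positive single-precision float with a line of arithmetic (a standalone sketch, independent of Tone.js):

```javascript
// Largest dB value Q for which the linear gain 10^(Q/20) still fits
// in a single-precision float.
const MAX_FLOAT32 = 3.4028235e38;
const qLim = 20 * Math.log10(MAX_FLOAT32);
// qLim is approximately 770.63678
```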
property type
type: BiquadFilterType;
The type of this BiquadFilterNode. For a complete list of types and their attributes, see the [Web Audio API](https://webaudio.github.io/web-audio-api/#dom-biquadfiltertype-lowpass)
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => BiquadFilterOptions;
method getFrequencyResponse
getFrequencyResponse: (len?: number) => Float32Array;
Get the frequency response curve. This curve represents how the filter responds to frequencies between 20 Hz and 20 kHz.
Parameter len
The number of values to return.
Returns
The frequency response curve between 20 Hz and 20 kHz.
class BitCrusher
class BitCrusher extends Effect<BitCrusherOptions> {}
BitCrusher down-samples the incoming signal to a different bit depth. Lowering the bit depth of the signal creates distortion. Read more about BitCrushing on [Wikipedia](https://en.wikipedia.org/wiki/Bitcrusher).
Example 1
// initialize crusher and route a synth through it
const crusher = new Tone.BitCrusher(4).toDestination();
const synth = new Tone.Synth().connect(crusher);
synth.triggerAttackRelease("C2", 2);
Effect
constructor
constructor(bits?: number);
constructor
constructor(options?: Partial<BitCrusherWorkletOptions>);
property bits
readonly bits: Param<'positive'>;
The bit depth of the effect. Range: 1-16.
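Conceptually, bit-depth reduction quantizes each sample to a limited number of levels; lower bit depths mean coarser steps and more distortion. This is an illustrative sketch of the idea, not Tone's actual worklet implementation:

```javascript
// Quantize a sample in [-1, 1] to a grid of 2^(bits-1) steps per side (sketch).
function crushSample(sample, bits) {
  const steps = Math.pow(2, bits - 1);
  return Math.round(sample * steps) / steps;
}
```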
property name
readonly name: string;
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => BitCrusherOptions;
class Channel
class Channel extends ToneAudioNode<ChannelOptions> {}
constructor
constructor(volume?: number, pan?: number);
Parameter volume
The output volume.
Parameter pan
the initial pan
constructor
constructor(options?: Partial<ChannelOptions>);
property input
readonly input: InputNode;
property mute
mute: boolean;
Mute/unmute the volume
property muted
readonly muted: boolean;
If the current instance is muted, i.e. another instance is soloed, or the channel is muted
property name
readonly name: string;
property output
readonly output: OutputNode;
property pan
readonly pan: Param<'audioRange'>;
The L/R panning control. -1 = hard left, 1 = hard right. Range: -1 to 1.
property solo
solo: boolean;
property volume
readonly volume: Param<'decibels'>;
The volume control in decibels.
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => ChannelOptions;
method receive
receive: (name: string) => this;
Receive audio from a channel which was connected with send.
Parameter name
The channel name to receive audio from.
method send
send: (name: string, volume?: Decibels) => Gain<'decibels'>;
Send audio to another channel using a string.
send is a lot like connect, except it uses a string instead of an object. This can be useful in large applications to decouple sections, since send and receive can be invoked separately.
Parameter name
The channel name to send the audio to.
Parameter volume
The amount of the signal to send. Defaults to 0db, i.e. send the entire signal
Returns
Returns the gain node of this connection.
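The decoupling that send/receive enables can be pictured with a minimal, hypothetical bus registry; the names and shapes below are illustrative, not Tone's internals:

```javascript
// A named-bus registry (sketch): senders and receivers only ever share
// a string name, never a direct reference to each other.
const buses = new Map();

function send(name, node) {
  if (!buses.has(name)) buses.set(name, new Set());
  buses.get(name).add(node);
}

function receive(name) {
  return [...(buses.get(name) ?? [])];
}
```

One module can call `send("reverb", synth)` while another, with no reference to the first, later calls `receive("reverb")`.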
class Chebyshev
class Chebyshev extends Effect<ChebyshevOptions> {}
Chebyshev is a waveshaper which is good for making different types of distortion sounds. Note that odd orders sound very different from even ones, and order = 1 is no change. Read more at [music.columbia.edu](http://music.columbia.edu/cmc/musicandcomputers/chapter4/04_06.php).
Example 1
// create a new cheby
const cheby = new Tone.Chebyshev(50).toDestination();
// create a monosynth connected to our cheby
const synth = new Tone.MonoSynth().connect(cheby);
synth.triggerAttackRelease("C2", 0.4);
Effect
constructor
constructor(order?: number);
Parameter order
The order of the Chebyshev polynomial. Normal range is between 1 and 100.
constructor
constructor(options?: Partial<ChebyshevOptions>);
property name
readonly name: string;
property order
order: number;
The order of the Chebyshev polynomial which creates the equation which is applied to the incoming signal through a Tone.WaveShaper. Must be an integer. The equations are in the form:
order 2: 2x^2 - 1
order 3: 4x^3 - 3x
Range: 1-100.
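Polynomials of any order follow the standard Chebyshev recurrence T(n+1) = 2x·T(n) − T(n−1); a sketch of evaluating one (Tone builds the actual transfer curve internally through its WaveShaper):

```javascript
// Evaluate the Chebyshev polynomial of the given integer order at x (sketch).
function chebyshev(order, x) {
  if (order === 0) return 1;
  let tPrev = 1; // T0
  let t = x; // T1
  for (let n = 1; n < order; n++) {
    [tPrev, t] = [t, 2 * x * t - tPrev];
  }
  return t;
}
```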
property oversample
oversample: OverSampleType;
The oversampling of the effect. Can either be "none", "2x" or "4x".
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => ChebyshevOptions;
class Chorus
class Chorus extends StereoFeedbackEffect<ChorusOptions> {}
Chorus is a stereo chorus effect composed of a left and right delay with an LFO applied to the delayTime of each channel. When feedback is set to a value larger than 0, you also get Flanger-type effects. Inspiration from [Tuna.js](https://github.com/Dinahmoe/tuna/blob/master/tuna.js). Read more on the chorus effect on [Sound On Sound](http://www.soundonsound.com/sos/jun04/articles/synthsecrets.htm).
Example 1
const chorus = new Tone.Chorus(4, 2.5, 0.5).toDestination().start();
const synth = new Tone.PolySynth().connect(chorus);
synth.triggerAttackRelease(["C3", "E3", "G3"], "8n");
Effect
constructor
constructor(frequency?: Frequency, delayTime?: number, depth?: number);
Parameter frequency
The frequency of the LFO.
Parameter delayTime
The delay of the chorus effect in ms.
Parameter depth
The depth of the chorus.
constructor
constructor(options?: Partial<ChorusOptions>);
property delayTime
delayTime: number;
The delayTime in milliseconds of the chorus. A larger delayTime will give a more pronounced effect. The nominal range of delayTime is between 2 and 20 ms.
property depth
depth: number;
The depth of the effect. A depth of 1 makes the delayTime modulate between 0 and 2*delayTime (centered around the delayTime).
property frequency
readonly frequency: Signal<'frequency'>;
The frequency of the LFO which modulates the delayTime.
property name
readonly name: string;
property spread
spread: number;
The amount of stereo spread. When set to 0, both LFOs will be panned centrally. When set to 180, the LFOs will be panned hard left and hard right respectively.
property type
type: ToneOscillatorType;
The oscillator type of the LFO.
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => ChorusOptions;
method start
start: (time?: Time) => this;
Start the effect.
method stop
stop: (time?: Time) => this;
Stop the LFO.
method sync
sync: () => this;
Sync the filter to the transport.
method unsync
unsync: () => this;
Unsync the filter from the transport.
class Clock
class Clock<TypeName extends 'bpm' | 'hertz' = 'hertz'> extends ToneWithContext<ClockOptions> implements Emitter<ClockEvent> {}
A sample accurate clock which provides a callback at the given rate. While the callback is not sample-accurate (it is still susceptible to loose JS timing), the time passed in as the argument to the callback is precise. For most applications, it is better to use Tone.Transport instead of the Clock by itself since you can synchronize multiple callbacks.
Example 1
// the callback will be invoked approximately once a second
// and will print the time exactly once a second apart.
const clock = new Tone.Clock(time => {
	console.log(time);
}, 1);
clock.start();
Core
constructor
constructor(callback?: ClockCallback, frequency?: Frequency);
Parameter callback
The callback to be invoked with the time of the audio event
Parameter frequency
The rate of the callback
constructor
constructor(options: Partial<ClockOptions>);
property callback
callback: ClockCallback;
The callback function to invoke at the scheduled tick.
property emit
emit: (event: any, ...args: any[]) => this;
property frequency
frequency: TickSignal<TypeName>;
The rate the callback function should be invoked.
property name
readonly name: string;
property off
off: (event: ClockEvent, callback?: (...args: any[]) => void) => this;
property on
on: (event: ClockEvent, callback: (...args: any[]) => void) => this;
property once
once: (event: ClockEvent, callback: (...args: any[]) => void) => this;
property seconds
seconds: number;
The time since ticks=0 that the Clock has been running. Accounts for tempo curves
property state
readonly state: PlaybackState;
Returns the playback state of the source, either "started", "stopped" or "paused".
property ticks
ticks: number;
The number of times the callback was invoked. Starts counting at 0 and increments after the callback was invoked.
method dispose
dispose: () => this;
Clean up
method getDefaults
static getDefaults: () => ClockOptions;
method getSecondsAtTime
getSecondsAtTime: (time: Time) => Seconds;
Return the elapsed seconds at the given time.
Parameter time
When to get the elapsed seconds.
Returns
The number of elapsed seconds.
method getStateAtTime
getStateAtTime: (time: Time) => PlaybackState;
Returns the scheduled state at the given time.
Parameter time
The time to query.
Returns
The name of the state input in setStateAtTime.
Example 1
const clock = new Tone.Clock();
clock.start("+0.1");
clock.getStateAtTime("+0.1"); // returns "started"
method getTicksAtTime
getTicksAtTime: (time?: Time) => Ticks;
Get the clock's ticks at the given time.
Parameter time
When to get the tick value.
Returns
The tick value at the given time.
method getTimeOfTick
getTimeOfTick: (tick: Ticks, before?: number) => Seconds;
Get the time of the given tick. The second argument is when to test before. Since ticks can be set (with setTicksAtTime) there may be multiple times for a given tick value.
Parameter tick
The tick number.
Parameter before
When to measure the tick value from.
Returns
The time of the tick.
method nextTickTime
nextTickTime: (offset: Ticks, when: Time) => Seconds;
Get the time of the next tick
Parameter offset
The tick number.
method pause
pause: (time?: Time) => this;
Pause the clock. Pausing does not reset the tick counter.
Parameter time
The time when the clock should stop.
method setTicksAtTime
setTicksAtTime: (ticks: Ticks, time: Time) => this;
Set the clock's ticks at the given time.
Parameter ticks
The tick value to set
Parameter time
When to set the tick value
method start
start: (time?: Time, offset?: Ticks) => this;
Start the clock at the given time. Optionally pass in an offset of where to start the tick counter from.
Parameter time
The time the clock should start
Parameter offset
Where the tick counter starts counting from.
method stop
stop: (time?: Time) => this;
Stop the clock. Stopping the clock resets the tick counter to 0.
Parameter time
The time when the clock should stop.
Example 1
const clock = new Tone.Clock(time => {
	console.log(time);
}, 1);
clock.start();
// stop the clock after 10 seconds
clock.stop("+10");
class Compressor
class Compressor extends ToneAudioNode<CompressorOptions> {}
Compressor is a thin wrapper around the Web Audio [DynamicsCompressorNode](http://webaudio.github.io/web-audio-api/#the-dynamicscompressornode-interface). Compression reduces the volume of loud sounds or amplifies quiet sounds by narrowing or "compressing" an audio signal's dynamic range. Read more on [Wikipedia](https://en.wikipedia.org/wiki/Dynamic_range_compression).
Example 1
const comp = new Tone.Compressor(-30, 3);
Component
constructor
constructor(threshold?: number, ratio?: number);
Parameter threshold
The value above which the compression starts to be applied.
Parameter ratio
The gain reduction ratio.
constructor
constructor(options?: Partial<CompressorOptions>);
property attack
readonly attack: Param<'time'>;
The amount of time (in seconds) to reduce the gain by 10 dB. Range: 0-1.
property input
readonly input: DynamicsCompressorNode;
property knee
readonly knee: Param<'decibels'>;
A decibel value representing the range above the threshold where the curve smoothly transitions to the "ratio" portion. Range: 0-40.
property name
readonly name: string;
property output
readonly output: DynamicsCompressorNode;
property ratio
readonly ratio: Param<'positive'>;
The amount of dB change in input for a 1 dB change in output. Range: 1-20.
property reduction
readonly reduction: number;
A read-only decibel value for metering purposes, representing the current amount of gain reduction that the compressor is applying to the signal. If fed no signal the value will be 0 (no gain reduction).
property release
readonly release: Param<'time'>;
The amount of time (in seconds) to increase the gain by 10 dB. Range: 0-1.
property threshold
readonly threshold: Param<'decibels'>;
The decibel value above which the compression will start taking effect. Range: -100 to 0.
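Ignoring the knee, the static curve that threshold and ratio describe can be sketched as an idealized hard-knee model (not the node's exact internal curve):

```javascript
// Idealized hard-knee compression (sketch): above the threshold, every
// `ratio` dB of input change produces only 1 dB of output change.
function compressDb(inputDb, thresholdDb, ratio) {
  if (inputDb <= thresholdDb) return inputDb;
  return thresholdDb + (inputDb - thresholdDb) / ratio;
}
```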
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => CompressorOptions;
class Context
class Context extends BaseContext {}
Wrapper around the native AudioContext. Core
constructor
constructor(context?: AnyAudioContext);
constructor
constructor(options?: Partial<ContextOptions>);
property clockSource
clockSource: TickerClockSource;
What the source of the clock is, either "worker" (default), "timeout", or "offline" (none).
property currentTime
readonly currentTime: number;
The current time in seconds of the AudioContext.
property destination
destination: Destination;
A reference to the Context's destination node.
property draw
draw: Draw;
This is the Draw object for the context which is useful for synchronizing the draw frame with the Tone.js clock.
property isOffline
readonly isOffline: boolean;
Indicates if the context is an OfflineAudioContext or an AudioContext
property latencyHint
readonly latencyHint: number | AudioContextLatencyCategory;
The type of playback, which affects tradeoffs between audio output latency and responsiveness. In addition to setting the value in seconds, the latencyHint also accepts the strings "interactive" (prioritizes low latency), "playback" (prioritizes sustained playback), and "balanced" (balances latency and performance).
Example 1
// prioritize sustained playback
const context = new Tone.Context({ latencyHint: "playback" });
// set this context as the global Context
Tone.setContext(context);
// the global context is gettable with Tone.getContext()
console.log(Tone.getContext().latencyHint);
property listener
listener: Listener;
The listener
property lookAhead
lookAhead: number;
The amount of time into the future events are scheduled. Giving Web Audio a short amount of time into the future to schedule events can reduce clicks and improve performance. This value can be set to 0 to get the lowest latency. Adjusting this value also affects the updateInterval.
property name
readonly name: string;
property rawContext
readonly rawContext: AnyAudioContext;
The unwrapped AudioContext or OfflineAudioContext
property sampleRate
readonly sampleRate: number;
The number of samples per second of the AudioContext.
property state
readonly state: AudioContextState;
The current state of the AudioContext.
property transport
transport: Transport;
There is only one Transport per Context. It is created on initialization.
property updateInterval
updateInterval: number;
How often the interval callback is invoked. This number corresponds to how responsive the scheduling can be. Setting to 0 will result in the lowest practical interval based on context properties. context.updateInterval + context.lookAhead gives you the total latency between scheduling an event and hearing it.
method addAudioWorkletModule
addAudioWorkletModule: (url: string) => Promise<void>;
Add an AudioWorkletProcessor module
Parameter url
The url of the module
method clearInterval
clearInterval: (id: number) => this;
Clear the function scheduled by setInterval
method clearTimeout
clearTimeout: (id: number) => this;
Clears a previously scheduled timeout with Tone.context.setTimeout
Parameter id
The ID returned from setTimeout
method close
close: () => Promise<void>;
Close the context. Once closed, the context can no longer be used and any AudioNodes created from the context will be silent.
method createAnalyser
createAnalyser: () => AnalyserNode;
method createAudioWorkletNode
createAudioWorkletNode: ( name: string, options?: Partial<AudioWorkletNodeOptions>) => AudioWorkletNode;
Create an audio worklet node from a name and options. The module must first be loaded using addAudioWorkletModule.
method createBiquadFilter
createBiquadFilter: () => BiquadFilterNode;
method createBuffer
createBuffer: ( numberOfChannels: number, length: number, sampleRate: number) => AudioBuffer;
method createBufferSource
createBufferSource: () => AudioBufferSourceNode;
method createChannelMerger
createChannelMerger: (numberOfInputs?: number | undefined) => ChannelMergerNode;
method createChannelSplitter
createChannelSplitter: ( numberOfOutputs?: number | undefined) => ChannelSplitterNode;
method createConstantSource
createConstantSource: () => ConstantSourceNode;
method createConvolver
createConvolver: () => ConvolverNode;
method createDelay
createDelay: (maxDelayTime?: number | undefined) => DelayNode;
method createDynamicsCompressor
createDynamicsCompressor: () => DynamicsCompressorNode;
method createGain
createGain: () => GainNode;
method createIIRFilter
createIIRFilter: ( feedForward: number[] | Float32Array, feedback: number[] | Float32Array) => IIRFilterNode;
method createMediaElementSource
createMediaElementSource: ( element: HTMLMediaElement) => MediaElementAudioSourceNode;
method createMediaStreamDestination
createMediaStreamDestination: () => MediaStreamAudioDestinationNode;
method createMediaStreamSource
createMediaStreamSource: (stream: MediaStream) => MediaStreamAudioSourceNode;
method createOscillator
createOscillator: () => OscillatorNode;
method createPanner
createPanner: () => PannerNode;
method createPeriodicWave
createPeriodicWave: ( real: number[] | Float32Array, imag: number[] | Float32Array, constraints?: PeriodicWaveConstraints | undefined) => PeriodicWave;
method createStereoPanner
createStereoPanner: () => StereoPannerNode;
method createWaveShaper
createWaveShaper: () => WaveShaperNode;
method decodeAudioData
decodeAudioData: (audioData: ArrayBuffer) => Promise<AudioBuffer>;
method dispose
dispose: () => this;
Clean up. Also closes the audio context.
method getConstant
getConstant: (val: number) => AudioBufferSourceNode;
**Internal** Generate a looped buffer at some constant value.
method getDefaults
static getDefaults: () => ContextOptions;
method immediate
immediate: () => Seconds;
The current audio context time without the lookAhead. In most cases it is better to use now instead of immediate, since with now the lookAhead is applied equally to _all_ components, including internal ones, making sure that everything is scheduled in sync. Mixing now and immediate can cause timing issues. If no lookAhead is desired, you can set the lookAhead to 0.
method now
now: () => Seconds;
The current audio context time plus a short lookAhead.
Example 1
setInterval(() => {
	console.log("now", Tone.now());
}, 100);
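The relationship between now and immediate can be modeled in a few lines: both consult the same raw context time, and now adds the lookAhead (a conceptual sketch, not Tone's source):

```javascript
// Model of the two clock reads (sketch); `rawTime` stands in for
// rawContext.currentTime, which in a real context advances on its own.
function makeTimekeeper(lookAhead) {
  let rawTime = 0;
  return {
    advance: (dt) => (rawTime += dt),
    immediate: () => rawTime,
    now: () => rawTime + lookAhead,
  };
}
```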
method resume
resume: () => Promise<void>;
Starts the audio context from a suspended state. This is required to initially start the AudioContext.
method setInterval
setInterval: (fn: (...args: any[]) => void, interval: Seconds) => number;
Adds a repeating event to the context's callback clock
method setTimeout
setTimeout: (fn: (...args: any[]) => void, timeout: Seconds) => number;
A setTimeout which is guaranteed by the clock source. Also runs in the offline context.
Parameter fn
The callback to invoke
Parameter timeout
The timeout in seconds
Returns
ID to use when invoking Context.clearTimeout
method workletsAreReady
protected workletsAreReady: () => Promise<void>;
Returns a promise which resolves when all of the worklets have been loaded on this context
class Convolver
class Convolver extends ToneAudioNode<ConvolverOptions> {}
Convolver is a wrapper around the Native Web Audio [ConvolverNode](http://webaudio.github.io/web-audio-api/#the-convolvernode-interface). Convolution is useful for reverb and filter emulation. Read more about convolution reverb on [Wikipedia](https://en.wikipedia.org/wiki/Convolution_reverb).
Example 1
// initializing the convolver with an impulse response const convolver = new Tone.Convolver("./path/to/ir.wav").toDestination(); Component
constructor
constructor(url?: string | ToneAudioBuffer | AudioBuffer, onload?: () => void);
Parameter url
The URL of the impulse response or the ToneAudioBuffer containing the impulse response.
Parameter onload
The callback to invoke when the url is loaded.
constructor
constructor(options?: Partial<ConvolverOptions>);
property buffer
buffer: ToneAudioBuffer;
The convolver's buffer
property input
readonly input: Gain<'gain'>;
property name
readonly name: string;
property normalize
normalize: boolean;
The normalize property of the ConvolverNode interface is a boolean that controls whether the impulse response from the buffer will be scaled by an equal-power normalization when the buffer attribute is set, or not.
property output
readonly output: Gain<'gain'>;
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => ConvolverOptions;
method load
load: (url: string) => Promise<void>;
Load an impulse response url as an audio buffer. Decodes the audio asynchronously and invokes the callback once the audio buffer loads.
Parameter url
The url of the buffer to load. filetype support depends on the browser.
class CrossFade
class CrossFade extends ToneAudioNode<CrossFadeOptions> {}
Tone.CrossFade provides equal-power fading between two inputs. More on the crossfading technique [here](https://en.wikipedia.org/wiki/Fade_(audio_engineering)#Crossfading).
input a --> gain a --+
                     +--> output
input b --> gain b --+
(the fade signal sets both gains along an equal-power curve)
Example 1
const crossFade = new Tone.CrossFade().toDestination();
// connect two inputs to a/b
const inputA = new Tone.Oscillator(440, "square").connect(crossFade.a).start();
const inputB = new Tone.Oscillator(440, "sine").connect(crossFade.b).start();
// use the fade to control the mix between the two
crossFade.fade.value = 0.5;
Component
constructor
constructor(fade?: number);
Parameter fade
The initial fade value [0, 1].
constructor
constructor(options?: Partial<CrossFadeOptions>);
property a
readonly a: Gain<'gain'>;
The input which is at full level when fade = 0
property b
readonly b: Gain<'gain'>;
The input which is at full level when fade = 1
property fade
readonly fade: Signal<'normalRange'>;
The mix between the two inputs. A fade value of 0 will output 100% crossFade.a and a value of 1 will output 100% crossFade.b.
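Equal-power fading is typically implemented with quarter-cycle cosine/sine gain curves, so the summed power a² + b² stays constant across the whole fade (a sketch of the curve shapes, not Tone's exact node graph):

```javascript
// Equal-power gain pair for a fade value in [0, 1] (sketch):
// fade = 0 passes only input a, fade = 1 passes only input b.
function crossfadeGains(fade) {
  const angle = fade * 0.5 * Math.PI;
  return { a: Math.cos(angle), b: Math.sin(angle) };
}
```

At fade = 0.5 both gains are about 0.707, so the midpoint doesn't dip in loudness the way a linear crossfade does.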
property input
readonly input: undefined;
CrossFade has no input; you must choose either a or b.
property name
readonly name: string;
property output
readonly output: Gain<'gain'>;
The output is a mix between a and b at the ratio of fade.
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => CrossFadeOptions;
class DCMeter
class DCMeter extends MeterBase<DCMeterOptions> {}
DCMeter gets the raw value of the input signal at the current time.
Example 1
const meter = new Tone.DCMeter();
const mic = new Tone.UserMedia();
mic.open();
// connect mic to the meter
mic.connect(meter);
// the current level of the mic
const level = meter.getValue();
Component
constructor
constructor(options?: Partial<ToneWithContextOptions>);
property name
readonly name: string;
method getValue
getValue: () => number;
Get the signal value of the incoming signal
class Delay
class Delay extends ToneAudioNode<DelayOptions> {}
Wrapper around Web Audio's native [DelayNode](http://webaudio.github.io/web-audio-api/#the-delaynode-interface). Core
Example 1
return Tone.Offline(() => {
	const delay = new Tone.Delay(0.1).toDestination();
	// connect the signal to both the delay and the destination
	const pulse = new Tone.PulseOscillator().connect(delay).toDestination();
	// start and stop the pulse
	pulse.start(0).stop(0.01);
}, 0.5, 1);
constructor
constructor(delayTime?: Time, maxDelay?: Time);
Parameter delayTime
The delay applied to the incoming signal.
Parameter maxDelay
The maximum delay time.
constructor
constructor(options?: Partial<DelayOptions>);
property delayTime
readonly delayTime: Param<'time'>;
The amount of time the incoming signal is delayed.
Example 1
const delay = new Tone.Delay().toDestination();
// modulate the delayTime between 0.1 and 1 seconds
const delayLFO = new Tone.LFO(0.5, 0.1, 1).start().connect(delay.delayTime);
const pulse = new Tone.PulseOscillator().connect(delay).start();
// the change in delayTime causes the pitch to go up and down
property input
readonly input: DelayNode;
property maxDelay
readonly maxDelay: number;
The maximum delay time. This cannot be changed after the value is passed into the constructor.
property name
readonly name: string;
property output
readonly output: DelayNode;
method dispose
dispose: () => this;
Clean up.
method getDefaults
static getDefaults: () => DelayOptions;
class Distortion
class Distortion extends Effect<DistortionOptions> {}
A simple distortion effect using Tone.WaveShaper. Algorithm from [this stackoverflow answer](http://stackoverflow.com/a/22313408). Read more about distortion on [Wikipedia](https://en.wikipedia.org/wiki/Distortion_(music)).
Example 1
const dist = new Tone.Distortion(0.8).toDestination();
const fm = new Tone.FMSynth().connect(dist);
fm.triggerAttackRelease("A1", "8n");
Effect
constructor
constructor(distortion?: number);
Parameter distortion
The amount of distortion (nominal range of 0-1)
constructor
constructor(options?: Partial<DistortionOptions>);
property distortion
distortion: number;
The amount of distortion. Nominal range is between 0 and 1.
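The linked Stack Overflow answer builds the waveshaping curve roughly as below; how Tone maps the 0-1 distortion value onto k is an assumption here, not confirmed by the source:

```javascript
// Distortion transfer curve from the referenced Stack Overflow answer
// (sketch): larger k bends the curve harder toward a square shape.
function distortionCurve(amount, samples = 1024) {
  const k = amount * 100; // assumed scaling of the 0-1 amount
  const deg = Math.PI / 180;
  const curve = new Float32Array(samples);
  for (let i = 0; i < samples; i++) {
    const x = (i * 2) / samples - 1; // map index to [-1, 1)
    curve[i] = ((3 + k) * x * 20 * deg) / (Math.PI + k * Math.abs(x));
  }
  return curve;
}
```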
property name
readonly name: string;
property oversample
oversample: OverSampleType;
The oversampling of the effect. Can either be "none", "2x" or "4x".
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => DistortionOptions;
class DuoSynth
class DuoSynth extends Monophonic<DuoSynthOptions> {}
DuoSynth is a monophonic synth composed of two MonoSynths run in parallel with control over the frequency ratio between the two voices and vibrato effect.
Example 1
const duoSynth = new Tone.DuoSynth().toDestination();
duoSynth.triggerAttackRelease("C4", "2n");
Instrument
constructor
constructor(options?: RecursivePartial<DuoSynthOptions>);
property detune
readonly detune: Signal<'cents'>;
property frequency
readonly frequency: Signal<'frequency'>;
property harmonicity
harmonicity: Signal<'positive'>;
Harmonicity is the ratio between the two voices. A harmonicity of 1 is no change. Harmonicity = 2 means a change of an octave.
Example 1
const duoSynth = new Tone.DuoSynth().toDestination();
duoSynth.triggerAttackRelease("C4", "2n");
// pitch voice1 an octave below voice0
duoSynth.harmonicity.value = 0.5;
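The ratio-to-interval relationship is plain math: an interval in semitones is 12·log2(ratio), so a harmonicity of 2 is +12 semitones (an octave up) and 0.5 is -12 (an octave down):

```javascript
// Convert a frequency ratio (harmonicity) to an interval in semitones.
const ratioToSemitones = (ratio) => 12 * Math.log2(ratio);
```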
property name
readonly name: string;
property vibratoAmount
vibratoAmount: Param<'normalRange'>;
The amount of vibrato
property vibratoRate
vibratoRate: Signal<'frequency'>;
The vibrato frequency.
property voice0
readonly voice0: MonoSynth;
The first voice.
property voice1
readonly voice1: MonoSynth;
The second voice.
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => DuoSynthOptions;
method getLevelAtTime
getLevelAtTime: (time: Time) => NormalRange;
class Emitter
class Emitter<EventType extends string = string> extends Tone {}
Emitter gives classes which extend it the ability to listen for and emit events. Inspiration and reference from Jerome Etienne's [MicroEvent](https://github.com/jeromeetienne/microevent.js). MIT (c) 2011 Jerome Etienne. Core
property name
readonly name: string;
method dispose
dispose: () => this;
Clean up
method emit
emit: (event: EventType, ...args: any[]) => this;
Invoke all of the callbacks bound to the event with any arguments passed in.
Parameter event
The name of the event.
Parameter args
The arguments to pass to the functions listening.
method mixin
static mixin: (constr: any) => void;
Add Emitter functions (on/off/emit) to the object
method off
off: (event: EventType, callback?: (...args: any[]) => void) => this;
Remove the event listener.
Parameter event
The event to stop listening to.
Parameter callback
The callback which was bound to the event with Emitter.on. If no callback is given, all callbacks for that event are removed.
method on
on: (event: EventType, callback: (...args: any[]) => void) => this;
Bind a callback to a specific event.
Parameter event
The name of the event to listen for.
Parameter callback
The callback to invoke when the event is emitted
method once
once: (event: EventType, callback: (...args: any[]) => void) => this;
Bind a callback which is only invoked once
Parameter event
The name of the event to listen for.
Parameter callback
The callback to invoke when the event is emitted
class Envelope
class Envelope extends ToneAudioNode<EnvelopeOptions> {}
Envelope is an [ADSR](https://en.wikipedia.org/wiki/Synthesizer#ADSR_envelope) envelope generator. Envelope outputs a signal which can be connected to an AudioParam or Tone.Signal.
Example 1
return Tone.Offline(() => {
	const env = new Tone.Envelope({
		attack: 0.1,
		decay: 0.2,
		sustain: 0.5,
		release: 0.8,
	}).toDestination();
	env.triggerAttackRelease(0.5);
}, 1.5, 1);
Component
constructor
constructor(attack?: Time, decay?: Time, sustain?: number, release?: Time);
Parameter attack
The amount of time it takes for the envelope to go from 0 to its maximum value.
Parameter decay
The period of time after the attack that it takes for the envelope to fall to the sustain value. Value must be greater than 0.
Parameter sustain
The percent of the maximum value that the envelope rests at until the release is triggered.
Parameter release
The amount of time after the release is triggered it takes to reach 0. Value must be greater than 0.
constructor
constructor(options?: Partial<EnvelopeOptions>);
property attack
attack: Time;
When triggerAttack is called, the attack time is the amount of time it takes for the envelope to reach its maximum value.
[ASCII diagram highlighting the attack segment of the envelope]
property attackCurve
attackCurve: EnvelopeCurve;
The shape of the attack. Can be any of these strings: * "linear" * "exponential" * "sine" * "cosine" * "bounce" * "ripple" * "step"
Can also be an array which describes the curve. Values in the array are evenly subdivided and linearly interpolated over the duration of the attack.
Example 1
return Tone.Offline(() => { const env = new Tone.Envelope(0.4).toDestination(); env.attackCurve = "linear"; env.triggerAttack(); }, 1, 1);
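The docs above say custom curve arrays are "evenly subdivided and linearly interpolated" over the attack. A minimal sketch of how such a lookup might work (an illustration, not Tone's internal code):

```javascript
// Sample a custom curve array at a normalized position in [0, 1]:
// the array's points are spread evenly and linearly interpolated.
function sampleCurve(curve, position) {
  const scaled = position * (curve.length - 1);
  const i = Math.floor(scaled);
  if (i >= curve.length - 1) return curve[curve.length - 1];
  const frac = scaled - i;
  return curve[i] * (1 - frac) + curve[i + 1] * frac;
}

const curve = [0, 1, 0.5, 1];
console.log(sampleCurve(curve, 0));   // 0, the first point
console.log(sampleCurve(curve, 0.5)); // 0.75, halfway between 1 and 0.5
```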
property decay
decay: Time;
After the attack portion of the envelope, the value will fall over the duration of the decay time to its sustain value.
[ASCII diagram highlighting the decay segment of the envelope]
property decayCurve
decayCurve: EnvelopeCurve;
The shape of the decay either "linear" or "exponential"
Example 1
return Tone.Offline(() => { const env = new Tone.Envelope({ sustain: 0.1, decay: 0.5 }).toDestination(); env.decayCurve = "linear"; env.triggerAttack(); }, 1, 1);
property input
input: InputNode;
Envelope has no input
property name
readonly name: string;
property output
output: OutputNode;
The output signal of the envelope
property release
release: Time;
After triggerRelease is called, the envelope's value will fall to its minimum value over the duration of the release time.
[ASCII diagram highlighting the release segment of the envelope]
property releaseCurve
releaseCurve: EnvelopeCurve;
The shape of the release. See the attack curve types.
Example 1
return Tone.Offline(() => { const env = new Tone.Envelope({ release: 0.8 }).toDestination(); env.triggerAttack(); // release curve could also be defined by an array env.releaseCurve = [1, 0.3, 0.4, 0.2, 0.7, 0]; env.triggerRelease(0.2); }, 1, 1);
property sustain
sustain: number;
The sustain value is the value which the envelope rests at after triggerAttack is called, but before triggerRelease is invoked.
[ASCII diagram highlighting the sustain segment of the envelope]
property value
readonly value: number;
Read the current value of the envelope. Useful for synchronizing visual output to the envelope.
method asArray
asArray: (length?: number) => Promise<Float32Array>;
Render the envelope curve to an array of the given length. Good for visualizing the envelope curve. Rescales the duration of the envelope to fit the length.
method cancel
cancel: (after?: Time) => this;
Cancels all scheduled envelope changes after the given time.
method connect
connect: ( destination: InputNode, outputNumber?: number, inputNumber?: number) => this;
Connect the envelope to a destination node.
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => EnvelopeOptions;
method getValueAtTime
getValueAtTime: (time: Time) => NormalRange;
Get the scheduled value at the given time. This will return the unconverted (raw) value.
Example 1
const env = new Tone.Envelope(0.5, 1, 0.4, 2); env.triggerAttackRelease(2); setInterval(() => console.log(env.getValueAtTime(Tone.now())), 100);
method triggerAttack
triggerAttack: (time?: Time, velocity?: NormalRange) => this;
Trigger the attack/decay portion of the ADSR envelope.
Parameter time
When the attack should start.
Parameter velocity
The velocity of the envelope scales the values. A number between 0-1.
Example 1
const env = new Tone.AmplitudeEnvelope().toDestination(); const osc = new Tone.Oscillator().connect(env).start(); // trigger the attack 0.5 seconds from now with a velocity of 0.2 env.triggerAttack("+0.5", 0.2);
method triggerAttackRelease
triggerAttackRelease: ( duration: Time, time?: Time, velocity?: NormalRange) => this;
triggerAttackRelease is shorthand for triggerAttack, then waiting some duration, then triggerRelease.
Parameter duration
The duration of the sustain.
Parameter time
When the attack should be triggered.
Parameter velocity
The velocity of the envelope.
Example 1
const env = new Tone.AmplitudeEnvelope().toDestination(); const osc = new Tone.Oscillator().connect(env).start(); // trigger the release 0.5 seconds after the attack env.triggerAttackRelease(0.5);
method triggerRelease
triggerRelease: (time?: Time) => this;
Triggers the release of the envelope.
Parameter time
When the release portion of the envelope should start.
Example 1
const env = new Tone.AmplitudeEnvelope().toDestination(); const osc = new Tone.Oscillator({ type: "sawtooth" }).connect(env).start(); env.triggerAttack(); // trigger the release half a second after the attack env.triggerRelease("+0.5");
class EQ3
class EQ3 extends ToneAudioNode<EQ3Options> {}
EQ3 provides 3 equalizer bins: Low/Mid/High. Component
constructor
constructor(lowLevel?: number, midLevel?: number, highLevel?: number);
constructor
constructor(options: Partial<EQ3Options>);
property high
readonly high: Param<'decibels'>;
The gain in decibels of the high part
property highFrequency
readonly highFrequency: Signal<'frequency'>;
The mid/high crossover frequency.
property input
readonly input: MultibandSplit;
the input
property low
readonly low: Param<'decibels'>;
The gain in decibels of the low part
property lowFrequency
readonly lowFrequency: Signal<'frequency'>;
The low/mid crossover frequency.
property mid
readonly mid: Param<'decibels'>;
The gain in decibels of the mid part
property name
readonly name: string;
property output
readonly output: Gain<'gain'>;
the output
property Q
readonly Q: Signal<'positive'>;
The Q value for all of the filters.
method dispose
dispose: () => this;
Clean up.
method getDefaults
static getDefaults: () => EQ3Options;
class FatOscillator
class FatOscillator extends Source<FatOscillatorOptions> implements ToneOscillatorInterface {}
FatOscillator is an array of oscillators with detune spread between the oscillators
Example 1
const fatOsc = new Tone.FatOscillator("Ab3", "sawtooth", 40).toDestination().start(); Source
constructor
constructor(frequency?: Frequency, type?: ToneOscillatorType, spread?: number);
Parameter frequency
The oscillator's frequency.
Parameter type
The type of the oscillator.
Parameter spread
The detune spread between the oscillators.
constructor
constructor(options?: Partial<FatConstructorOptions>);
property baseType
baseType: OscillatorType;
property count
count: number;
The number of detuned oscillators. Must be an integer greater than 1.
Example 1
const fatOsc = new Tone.FatOscillator("C#3", "sawtooth").toDestination().start(); // use 4 sawtooth oscillators fatOsc.count = 4;
property detune
readonly detune: Signal<'cents'>;
property frequency
readonly frequency: Signal<'frequency'>;
property name
readonly name: string;
property partialCount
partialCount: number;
property partials
partials: number[];
property phase
phase: number;
property spread
spread: number;
The detune spread between the oscillators. If "count" is set to 3 oscillators and the "spread" is set to 40, the three oscillators would be detuned like this: [-20, 0, 20] for a total detune spread of 40 cents.
Example 1
const fatOsc = new Tone.FatOscillator().toDestination().start(); fatOsc.spread = 70;
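The detune layout described above can be sketched as a small helper (hypothetical, for illustration): the oscillators are spaced evenly and centered around 0 so the total spread matches the docs' [-20, 0, 20] example.

```javascript
// Detune values in cents for `count` oscillators spread over `spread` cents.
function detuneValues(count, spread) {
  if (count === 1) return [0]; // a single oscillator is not detuned
  const step = spread / (count - 1);
  return Array.from({ length: count }, (_, i) => -spread / 2 + step * i);
}

console.log(detuneValues(3, 40)); // [ -20, 0, 20 ]
console.log(detuneValues(4, 60)); // [ -30, -10, 10, 30 ]
```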
property type
type: ToneOscillatorType;
The type of the oscillator
method asArray
asArray: (length?: number) => Promise<Float32Array>;
method dispose
dispose: () => this;
Clean up.
method getDefaults
static getDefaults: () => FatOscillatorOptions;
class FeedbackCombFilter
class FeedbackCombFilter extends ToneAudioWorklet<FeedbackCombFilterOptions> {}
Comb filters are basic building blocks for physical modeling. Read more about comb filters on [CCRMA's website](https://ccrma.stanford.edu/~jos/pasp/Feedback_Comb_Filters.html).
This comb filter is implemented with the AudioWorkletNode which allows it to have feedback delays less than the Web Audio processing block of 128 samples. There is a polyfill for browsers that don't yet support the AudioWorkletNode, but it will add some latency and have slower performance than the AudioWorkletNode. Component
constructor
constructor(delayTime?: Time, resonance?: number);
Parameter delayTime
The delay time of the filter.
Parameter resonance
The amount of feedback the filter has.
constructor
constructor(options?: RecursivePartial<FeedbackCombFilterOptions>);
property delayTime
readonly delayTime: Param<'time'>;
The amount of delay of the comb filter.
property input
readonly input: Gain<'gain'>;
property name
readonly name: string;
property output
readonly output: Gain<'gain'>;
property resonance
readonly resonance: Param<'normalRange'>;
The amount of feedback of the delayed signal.
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => FeedbackCombFilterOptions;
The default parameters
method onReady
onReady: (node: AudioWorkletNode) => void;
class FeedbackDelay
class FeedbackDelay extends FeedbackEffect<FeedbackDelayOptions> {}
FeedbackDelay is a DelayNode in which part of output signal is fed back into the delay.
Parameter delayTime
The delay applied to the incoming signal.
Parameter feedback
The amount of the effected signal which is fed back through the delay.
Example 1
const feedbackDelay = new Tone.FeedbackDelay("8n", 0.5).toDestination(); const tom = new Tone.MembraneSynth({ octaves: 4, pitchDecay: 0.1 }).connect(feedbackDelay); tom.triggerAttackRelease("A2", "32n"); Effect
constructor
constructor(delayTime?: Time, feedback?: number);
constructor
constructor(options?: Partial<FeedbackDelayOptions>);
property delayTime
readonly delayTime: Param<'time'>;
The delayTime of the FeedbackDelay.
property name
readonly name: string;
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => FeedbackDelayOptions;
class FFT
class FFT extends MeterBase<FFTOptions> {}
Get the current frequency data of the connected audio source using a fast Fourier transform. Read more about FFT algorithms on [Wikipedia] (https://en.wikipedia.org/wiki/Fast_Fourier_transform). Component
constructor
constructor(size?: number);
Parameter size
The size of the FFT. Value must be a power of two in the range 16 to 16384.
constructor
constructor(options?: Partial<FFTOptions>);
property name
readonly name: string;
property normalRange
normalRange: boolean;
Whether the output should be in decibels or in the normal range between 0-1. If normalRange is false, the output is the measured decibel value; otherwise the decibel value is converted to the range of 0-1.
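The decibel-to-linear conversion behind this option is the standard amplitude formula, which is also what the library's dbToGain/gainToDb helpers compute:

```javascript
// Standard decibel <-> linear amplitude conversions.
const dbToGain = (db) => Math.pow(10, db / 20);
const gainToDb = (gain) => 20 * Math.log10(gain);

console.log(dbToGain(0));   // 1
console.log(dbToGain(-20)); // 0.1
console.log(gainToDb(0.5)); // about -6.02
```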
property size
size: number;
The size of analysis. This must be a power of two in the range 16 to 16384. Determines the size of the array returned by getValue (i.e. the number of frequency bins). Large FFT sizes may be costly to compute.
property smoothing
smoothing: number;
A value between 0 and 1, where 0 represents no time averaging with the last analysis frame.
method getDefaults
static getDefaults: () => FFTOptions;
method getFrequencyOfIndex
getFrequencyOfIndex: (index: number) => Hertz;
Returns the frequency value in hertz of each of the indices of the FFT's getValue response.
Example 1
const fft = new Tone.FFT(32); console.log([0, 1, 2, 3, 4].map(index => fft.getFrequencyOfIndex(index)));
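The index-to-frequency mapping can be sketched from FFT basics. This assumes the underlying AnalyserNode's fftSize is twice the requested size (so that getValue returns `size` frequency bins), which is how I'd expect the analyser to be configured:

```javascript
// Frequency in Hz of a given FFT bin, assuming fftSize = size * 2.
function frequencyOfIndex(index, size, sampleRate) {
  return (index * sampleRate) / (size * 2);
}

console.log(frequencyOfIndex(1, 32, 44100));  // one bin width: 689.0625 Hz
console.log(frequencyOfIndex(16, 32, 44100)); // 11025 Hz, a quarter of fs
```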
method getValue
getValue: () => Float32Array;
Gets the current frequency data from the connected audio source. Returns the frequency data of length size as a Float32Array of decibel values.
class Filter
class Filter extends ToneAudioNode<FilterOptions> {}
Tone.Filter is a filter which allows for all of the same native methods as the [BiquadFilterNode](http://webaudio.github.io/web-audio-api/#the-biquadfilternode-interface). Tone.Filter has the added ability to set the filter rolloff at -12 (default), -24 and -48.
Example 1
const filter = new Tone.Filter(1500, "highpass").toDestination(); filter.frequency.rampTo(20000, 10); const noise = new Tone.Noise().connect(filter).start(); Component
constructor
constructor( frequency?: Frequency, type?: BiquadFilterType, rolloff?: FilterRollOff);
Parameter frequency
The cutoff frequency of the filter.
Parameter type
The type of filter.
Parameter rolloff
The drop in decibels per octave after the cutoff frequency
constructor
constructor(options?: Partial<FilterOptions>);
property detune
readonly detune: Signal<'cents'>;
The detune parameter
property frequency
readonly frequency: Signal<'frequency'>;
The cutoff frequency of the filter.
property gain
readonly gain: Signal<'decibels'>;
The gain of the filter, only used in certain filter types
property input
readonly input: Gain<'gain'>;
property name
readonly name: string;
property output
readonly output: Gain<'gain'>;
property Q
readonly Q: Signal<'positive'>;
The Q or Quality of the filter
property rolloff
rolloff: FilterRollOff;
The rolloff of the filter which is the drop in db per octave. Implemented internally by cascading filters. Only accepts the values -12, -24, -48 and -96.
property type
type: BiquadFilterType;
The type of the filter. Types: "lowpass", "highpass", "bandpass", "lowshelf", "highshelf", "notch", "allpass", or "peaking".
method dispose
dispose: () => this;
Clean up.
method getDefaults
static getDefaults: () => FilterOptions;
method getFrequencyResponse
getFrequencyResponse: (len?: number) => Float32Array;
Get the frequency response curve. This curve represents how the filter responds to frequencies between 20hz-20khz.
Parameter len
The number of values to return. Returns the frequency response curve between 20hz-20khz.
class FMOscillator
class FMOscillator extends Source<FMOscillatorOptions> implements ToneOscillatorInterface {}
FMOscillator implements a frequency modulation synthesis
[ASCII signal-flow diagram: the Modulator Osc feeds a GainNode scaled by modulationIndex, which modulates the Carrier Osc's frequency; the carrier feeds the output]
Example 1
return Tone.Offline(() => { const fmOsc = new Tone.FMOscillator({ frequency: 200, type: "square", modulationType: "triangle", harmonicity: 0.2, modulationIndex: 3 }).toDestination().start(); }, 0.1, 1); Source
constructor
constructor( frequency?: Frequency, type?: ToneOscillatorType, modulationType?: ToneOscillatorType);
Parameter frequency
The starting frequency of the oscillator.
Parameter type
The type of the carrier oscillator.
Parameter modulationType
The type of the modulator oscillator.
constructor
constructor(options?: Partial<FMConstructorOptions>);
property baseType
baseType: OscillatorType;
property detune
readonly detune: Signal<'cents'>;
property frequency
readonly frequency: Signal<'frequency'>;
property harmonicity
readonly harmonicity: Signal<'positive'>;
Harmonicity is the frequency ratio between the carrier and the modulator oscillators. A harmonicity of 1 gives both oscillators the same frequency. Harmonicity = 2 means a change of an octave.
Example 1
const fmOsc = new Tone.FMOscillator("D2").toDestination().start(); // pitch the modulator an octave below carrier fmOsc.harmonicity.value = 0.5;
property modulationIndex
readonly modulationIndex: Signal<'positive'>;
The modulation index, which is in essence the depth or amount of the modulation. In other terms, it is the ratio of the amplitude of the modulating signal (ma) to its frequency (mf) -- as in ma/mf.
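Putting the two signals together, the relationships described for harmonicity and modulationIndex can be sketched as a hypothetical helper (not part of Tone's API): the modulator runs at carrier × harmonicity, and since the index is ma/mf, the modulation amplitude is index × modulator frequency.

```javascript
// Derived FM parameters from the documented ratios (illustrative only).
function fmParams(carrierFreq, harmonicity, modulationIndex) {
  const modulatorFreq = carrierFreq * harmonicity; // harmonicity = mf / fc
  const deviation = modulationIndex * modulatorFreq; // ma = index * mf
  return { modulatorFreq, deviation };
}

// Carrier at 200 Hz, modulator an octave below, index of 3:
console.log(fmParams(200, 0.5, 3)); // modulator 100 Hz, deviation 300 Hz
```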
property modulationType
modulationType: ToneOscillatorType;
The type of the modulator oscillator
property name
readonly name: string;
property partialCount
partialCount: number;
property partials
partials: number[];
property phase
phase: number;
property type
type: ToneOscillatorType;
method asArray
asArray: (length?: number) => Promise<Float32Array>;
method dispose
dispose: () => this;
Clean up.
method getDefaults
static getDefaults: () => FMOscillatorOptions;
class FMSynth
class FMSynth extends ModulationSynth<FMSynthOptions> {}
FMSynth is composed of two Tone.Synths where one Tone.Synth modulates the frequency of a second Tone.Synth. A lot of spectral content can be explored using the modulationIndex parameter. Read more about frequency modulation synthesis on Sound On Sound: [Part 1](https://web.archive.org/web/20160403123704/http://www.soundonsound.com/sos/apr00/articles/synthsecrets.htm), [Part 2](https://web.archive.org/web/20160403115835/http://www.soundonsound.com/sos/may00/articles/synth.htm).
Example 1
const fmSynth = new Tone.FMSynth().toDestination(); fmSynth.triggerAttackRelease("C5", "4n");
Instrument
constructor
constructor(options?: RecursivePartial<FMSynthOptions>);
property modulationIndex
readonly modulationIndex: Multiply<'number'>;
The modulation index, which is essentially the depth or amount of the modulation. It is the ratio of the amplitude of the modulating signal (ma) to its frequency (mf) -- as in ma/mf.
property name
readonly name: string;
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => FMSynthOptions;
class Follower
class Follower extends ToneAudioNode<FollowerOptions> {}
Follower is a simple envelope follower. It's implemented by applying a lowpass filter to the absolute value of the incoming signal.
Input --> Abs --> OnePoleFilter --> Output
Component
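The abs-then-lowpass structure can be sketched per sample. The coefficient derivation from the smoothing time is an assumption for illustration; Tone's actual one-pole filter may differ in detail:

```javascript
// Envelope follower sketch: rectify, then one-pole lowpass,
// y[n] = (1 - a) * |x[n]| + a * y[n - 1].
function follow(samples, smoothing, sampleRate) {
  const a = Math.exp(-1 / (smoothing * sampleRate)); // assumed mapping
  let y = 0;
  return samples.map((x) => (y = (1 - a) * Math.abs(x) + a * y));
}

// A constant -1 input converges toward +1, its absolute value.
const out = follow(new Array(2000).fill(-1), 0.005, 44100);
console.log(out[out.length - 1]); // close to 1
```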
constructor
constructor(smoothing?: Time);
Parameter smoothing
The rate of change of the follower.
constructor
constructor(options?: Partial<FollowerOptions>);
property input
readonly input: InputNode;
property name
readonly name: string;
property output
readonly output: OutputNode;
property smoothing
smoothing: Time;
The amount of time it takes a value change to arrive at the updated value.
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => FollowerOptions;
class Freeverb
class Freeverb extends StereoEffect<FreeverbOptions> {}
Freeverb is a reverb based on [Freeverb](https://ccrma.stanford.edu/~jos/pasp/Freeverb.html). Read more on reverb on [Sound On Sound](https://web.archive.org/web/20160404083902/http://www.soundonsound.com:80/sos/feb01/articles/synthsecrets.asp). Freeverb is now implemented with an AudioWorkletNode, which may result in performance degradation on some platforms. Consider using Reverb.
Example 1
const freeverb = new Tone.Freeverb().toDestination(); freeverb.dampening = 1000; // routing synth through the reverb const synth = new Tone.NoiseSynth().connect(freeverb); synth.triggerAttackRelease(0.05); Effect
constructor
constructor(roomSize?: number, dampening?: Frequency);
Parameter roomSize
Correlated to the decay time.
Parameter dampening
The cutoff frequency of a lowpass filter as part of the reverb.
constructor
constructor(options?: Partial<FreeverbOptions>);
property dampening
dampening: Frequency;
The amount of dampening of the reverberant signal.
property name
readonly name: string;
property roomSize
readonly roomSize: Signal<'normalRange'>;
The roomSize value between 0 and 1. A larger roomSize will result in a longer decay.
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => FreeverbOptions;
class FrequencyClass
class FrequencyClass<Type extends number = Hertz> extends TimeClass< Type, FrequencyUnit> {}
Frequency is a primitive type for encoding Frequency values. Eventually all time values are evaluated to hertz using the valueOf method.
Example 1
Tone.Frequency("C3"); // 261 Tone.Frequency(38, "midi"); Tone.Frequency("C3").transpose(4); Unit
property A4
static A4: number;
The [concert tuning pitch](https://en.wikipedia.org/wiki/Concert_pitch) which is used to generate all the other pitch values from notes. A4's value in Hertz.
property defaultUnits
readonly defaultUnits: FrequencyUnit;
property name
readonly name: string;
method ftom
static ftom: (frequency: Hertz) => MidiNote;
Convert a frequency value to a MIDI note.
Parameter frequency
The frequency value to convert.
method harmonize
harmonize: (intervals: Interval[]) => FrequencyClass[];
Takes an array of semitone intervals and returns an array of frequencies transposed by those intervals. Returns an array of Frequencies
Example 1
Tone.Frequency("A4").harmonize([0, 3, 7]); // ["A4", "C5", "E5"]
method mtof
static mtof: (midi: MidiNote) => Hertz;
Convert a MIDI note to frequency value.
Parameter midi
The midi number to convert. The corresponding frequency value
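Both conversions follow the standard equal-temperament formulas relative to the A4 tuning pitch (440 Hz by default, per the A4 property above):

```javascript
// MIDI note <-> frequency, with A4 (MIDI 69) at 440 Hz.
const mtof = (midi) => 440 * Math.pow(2, (midi - 69) / 12);
const ftom = (freq) => Math.round(12 * Math.log2(freq / 440) + 69);

console.log(mtof(69));     // 440 (A4)
console.log(mtof(60));     // about 261.63 (C4)
console.log(ftom(440));    // 69
console.log(ftom(261.63)); // 60
```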
method toMidi
toMidi: () => MidiNote;
Return the value of the frequency as a MIDI note
Example 1
Tone.Frequency("C4").toMidi(); // 60
method toNote
toNote: () => Note;
Return the value of the frequency in Scientific Pitch Notation
Example 1
Tone.Frequency(69, "midi").toNote(); // "A4"
method toSeconds
toSeconds: () => Seconds;
Return the duration of one cycle in seconds.
method toTicks
toTicks: () => Ticks;
Return the duration of one cycle in ticks
method transpose
transpose: (interval: Interval) => FrequencyClass;
Transposes the frequency by the given number of semitones. A new transposed frequency
Example 1
Tone.Frequency("A4").transpose(3); // "C5"
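Numerically, transposing by semitones is multiplication by a power of the twelfth root of two, which is why A4 transposed up 3 semitones lands on C5:

```javascript
// Transpose a frequency in Hz by a number of semitones.
const transpose = (freq, semitones) => freq * Math.pow(2, semitones / 12);

console.log(transpose(440, 3));  // about 523.25 Hz, i.e. C5
console.log(transpose(440, 12)); // 880 Hz, one octave up
```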
class FrequencyEnvelope
class FrequencyEnvelope extends Envelope {}
FrequencyEnvelope is an Envelope which ramps between baseFrequency and octaves. It can also have an optional exponent to adjust the curve which it ramps.
Example 1
const oscillator = new Tone.Oscillator().toDestination().start(); const freqEnv = new Tone.FrequencyEnvelope({ attack: 0.2, baseFrequency: "C2", octaves: 4 }); freqEnv.connect(oscillator.frequency); freqEnv.triggerAttack(); Component
constructor
constructor(attack?: Time, decay?: Time, sustain?: number, release?: Time);
Parameter attack
the attack time in seconds
Parameter decay
the decay time in seconds
Parameter sustain
a percentage (0-1) of the full amplitude
Parameter release
the release time in seconds
constructor
constructor(options?: Partial<FrequencyEnvelopeOptions>);
property baseFrequency
baseFrequency: Frequency;
The envelope's minimum output value. This is the value which it starts at.
property exponent
exponent: number;
The envelope's exponent value.
property name
readonly name: string;
property octaves
octaves: number;
The number of octaves above the baseFrequency that the envelope will scale to.
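A sketch of the mapping the properties above describe (assumed for illustration, not Tone's internal code): the normalized envelope value is shaped by the exponent, then scaled exponentially from baseFrequency across the given number of octaves.

```javascript
// Map a normalized envelope value v in [0, 1] to a frequency.
function envelopeToFrequency(v, baseFrequency, octaves, exponent = 1) {
  const shaped = Math.pow(v, exponent); // adjust the ramp curve
  return baseFrequency * Math.pow(2, octaves * shaped);
}

const c2 = 65.41; // C2 in Hz
console.log(envelopeToFrequency(0, c2, 4)); // 65.41, the baseFrequency
console.log(envelopeToFrequency(1, c2, 4)); // 1046.56, four octaves up
```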
method dispose
dispose: () => this;
Clean up
method getDefaults
static getDefaults: () => FrequencyEnvelopeOptions;
class FrequencyShifter
class FrequencyShifter extends Effect<FrequencyShifterOptions> {}
FrequencyShifter can be used to shift all frequencies of a signal by a fixed amount. The amount can be changed at audio rate and the effect is applied in real time. The frequency shifting is implemented with a technique called single-sideband modulation using a ring modulator. Note: contrary to pitch shifting, all frequencies are shifted by the same amount, destroying the harmonic relationship between them. This leads to the classic ring modulator timbre distortion. The algorithm produces some aliasing towards the high end, especially if your source material contains a lot of high frequencies. Unfortunately the Web Audio API does not support resampling buffers in real time, so it is not possible to fix it properly. Depending on the use case, it might be an option to low pass filter your input before frequency shifting it to get rid of the aliasing. You can find a very detailed description of the algorithm here: https://larzeitlin.github.io/RMFS/
Example 1
const input = new Tone.Oscillator(230, "sawtooth").start(); const shift = new Tone.FrequencyShifter(42).toDestination(); input.connect(shift); Effect
constructor
constructor(frequency?: Frequency);
Parameter frequency
The incoming signal is shifted by this frequency value.
constructor
constructor(options?: Partial<FrequencyShifterOptions>);
property frequency
readonly frequency: Signal<'frequency'>;
The ring modulator's carrier frequency. This frequency determines by how many Hertz the input signal will be shifted up or down. Default is 0.
property name
readonly name: string;
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => FrequencyShifterOptions;
class Gain
class Gain< TypeName extends 'gain' | 'decibels' | 'normalRange' = 'gain'> extends ToneAudioNode<GainOptions<TypeName>> {}
A thin wrapper around the Native Web Audio GainNode. The GainNode is a basic building block of the Web Audio API and is useful for routing audio and adjusting gains. Core
Example 1
return Tone.Offline(() => { const gainNode = new Tone.Gain(0).toDestination(); const osc = new Tone.Oscillator(30).connect(gainNode).start(); gainNode.gain.rampTo(1, 0.1); gainNode.gain.rampTo(0, 0.4, 0.2); }, 0.7, 1);
constructor
constructor(gain?: number, units?: 'gain' | 'decibels' | 'normalRange');
Parameter gain
The initial gain of the GainNode
Parameter units
The units of the gain parameter.
constructor
constructor(options?: Partial<GainOptions<TypeName>>);
property gain
readonly gain: Param<TypeName>;
The gain parameter of the gain node.
Example 1
const gainNode = new Tone.Gain(0).toDestination(); const osc = new Tone.Oscillator().connect(gainNode).start(); gainNode.gain.rampTo(1, 0.1); gainNode.gain.rampTo(0, 2, "+0.5");
property input
readonly input: GainNode;
property name
readonly name: string;
property output
readonly output: GainNode;
method dispose
dispose: () => this;
Clean up.
method getDefaults
static getDefaults: () => GainOptions<any>;
class GainToAudio
class GainToAudio extends SignalOperator<ToneAudioNodeOptions> {}
GainToAudio converts an input in NormalRange [0,1] to AudioRange [-1,1].
See Also
AudioToGain. Signal
class Gate
class Gate extends ToneAudioNode<GateOptions> {}
Gate only passes a signal through when the incoming signal exceeds a specified threshold. It uses Follower to follow the amplitude of the incoming signal and compares it to the threshold value using GreaterThan.
Example 1
const gate = new Tone.Gate(-30, 0.2).toDestination(); const mic = new Tone.UserMedia().connect(gate); // the gate will only pass through the incoming // signal when it's louder than -30db Component
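Since the threshold is given in decibels but the followed amplitude is linear, the comparison needs a dB-to-gain conversion. An illustrative per-sample sketch (not Tone's implementation, which does this with audio-rate nodes):

```javascript
// Pass the sample through only while the followed amplitude
// exceeds the decibel threshold.
function gateSample(sample, followedLevel, thresholdDb) {
  const thresholdGain = Math.pow(10, thresholdDb / 20); // -30 dB ~ 0.0316
  return followedLevel > thresholdGain ? sample : 0;
}

console.log(gateSample(0.5, 0.2, -30));  // 0.5, the gate is open
console.log(gateSample(0.5, 0.01, -30)); // 0, the gate is closed
```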
constructor
constructor(threshold?: number, smoothing?: Time);
Parameter threshold
The threshold above which the gate will open.
Parameter smoothing
The follower's smoothing time
constructor
constructor(options?: Partial<GateOptions>);
property input
readonly input: ToneAudioNode<ToneWithContextOptions>;
property name
readonly name: string;
property output
readonly output: ToneAudioNode<ToneWithContextOptions>;
property smoothing
smoothing: Time;
The attack/decay speed of the gate.
property threshold
threshold: number;
The threshold of the gate in decibels
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => GateOptions;
class GrainPlayer
class GrainPlayer extends Source<GrainPlayerOptions> {}
GrainPlayer implements [granular synthesis](https://en.wikipedia.org/wiki/Granular_synthesis). Granular Synthesis enables you to adjust pitch and playback rate independently. The grainSize is the amount of time each small chunk of audio is played for and the overlap is the amount of crossfading transition time between successive grains. Source
constructor
constructor(url?: string | ToneAudioBuffer | AudioBuffer, onload?: () => void);
Parameter url
Either the AudioBuffer or the url from which to load the AudioBuffer
Parameter onload
The function to invoke when the buffer is loaded.
constructor
constructor(options?: Partial<GrainPlayerOptions>);
property buffer
buffer: ToneAudioBuffer;
The audio buffer belonging to the player.
property detune
detune: number;
Adjust the pitch independently of the playbackRate.
property grainSize
grainSize: Time;
The size of each chunk of audio that the buffer is chopped into and played back at.
property loaded
readonly loaded: boolean;
If all the buffer is loaded
property loop
loop: boolean;
If the buffer should loop back to the loopStart when completed
property loopEnd
loopEnd: Time;
The loop end time.
property loopStart
loopStart: Time;
The loop start time.
property name
readonly name: string;
property overlap
overlap: Time;
The duration of the cross-fade between successive grains.
property playbackRate
playbackRate: number;
The playback rate of the sample
property reverse
reverse: boolean;
The direction the buffer should play in
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => GrainPlayerOptions;
method restart
restart: (time?: Seconds, offset?: Time, duration?: Time) => this;
Stop and then restart the player from the beginning (or offset)
Parameter time
When the player should start.
Parameter offset
The offset from the beginning of the sample to start at.
Parameter duration
How long the sample should play. If no duration is given, it will default to the full length of the sample (minus any offset)
class GreaterThan
class GreaterThan extends Signal<'number'> {}
Output 1 if the signal is greater than the value, otherwise outputs 0. Can compare two signals or a signal and a number.
Example 1
return Tone.Offline(() => { const gt = new Tone.GreaterThan(2).toDestination(); const sig = new Tone.Signal(4).connect(gt); }, 0.1, 1); Signal
constructor
constructor(value?: number);
Parameter value
The value to compare to
constructor
constructor(options?: Partial<GreaterThanOptions>);
property comparator
readonly comparator: Param<'number'>;
The signal to compare the incoming signal against.
Example 1
return Tone.Offline(() => { // change the comparison value const gt = new Tone.GreaterThan(1.5).toDestination(); const signal = new Tone.Signal(1).connect(gt); gt.comparator.setValueAtTime(0.5, 0.1); }, 0.5, 1);
property input
readonly input: ToneAudioNode<ToneWithContextOptions>;
property name
readonly name: string;
property output
readonly output: ToneAudioNode<ToneWithContextOptions>;
property override
readonly override: boolean;
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => GreaterThanOptions;
class GreaterThanZero
class GreaterThanZero extends SignalOperator<GreaterThanZeroOptions> {}
GreaterThanZero outputs 1 when the input is strictly greater than zero
Example 1
return Tone.Offline(() => { const gt0 = new Tone.GreaterThanZero().toDestination(); const sig = new Tone.Signal(0.5).connect(gt0); sig.setValueAtTime(-1, 0.05); }, 0.1, 1); Signal
constructor
constructor(options?: Partial<ToneWithContextOptions>);
property input
readonly input: ToneAudioNode<ToneWithContextOptions>;
property name
readonly name: string;
property output
readonly output: ToneAudioNode<ToneWithContextOptions>;
method dispose
dispose: () => this;
class JCReverb
class JCReverb extends StereoEffect<JCReverbOptions> {}
JCReverb is a simple [Schroeder Reverberator](https://ccrma.stanford.edu/~jos/pasp/Schroeder_Reverberators.html) tuned by John Chowning in 1970. It is made up of three allpass filters and four FeedbackCombFilters. JCReverb is now implemented with an AudioWorkletNode, which may result in performance degradation on some platforms. Consider using Reverb.
Example 1
const reverb = new Tone.JCReverb(0.4).toDestination(); const delay = new Tone.FeedbackDelay(0.5); // connecting the synth to reverb through delay const synth = new Tone.DuoSynth().chain(delay, reverb); synth.triggerAttackRelease("A4", "8n");
Effect
constructor
constructor(roomSize?: number);
Parameter roomSize
Correlated to the decay time.
constructor
constructor(options?: Partial<JCReverbOptions>);
property name
readonly name: string;
property roomSize
readonly roomSize: Signal<'normalRange'>;
Room size control values.
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => JCReverbOptions;
class LFO
class LFO extends ToneAudioNode<LFOOptions> {}
LFO stands for low frequency oscillator. LFO produces an output signal which can be attached to an AudioParam or Tone.Signal in order to modulate that parameter with an oscillator. The LFO can also be synced to the transport to start/stop and change when the tempo changes.
Example 1
return Tone.Offline(() => { const lfo = new Tone.LFO("4n", 400, 4000).start().toDestination(); }, 0.5, 1); Source
constructor
constructor(frequency?: Frequency, min?: number, max?: number);
Parameter frequency
The frequency of the oscillation. Typically, LFOs will be in the frequency range of 0.1 to 10 hertz.
Parameter min
The minimum output value of the LFO.
Parameter max
The maximum value of the LFO.
constructor
constructor(options?: Partial<LFOOptions>);
property amplitude
readonly amplitude: Param<'normalRange'>;
The amplitude of the LFO, which controls the output range between the min and max output. For example if the min is -10 and the max is 10, setting the amplitude to 0.5 would make the LFO modulate between -5 and 5.
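The min/max/amplitude scaling described above can be sketched as plain arithmetic (an illustration of the documented behavior, not Tone's internal signal graph; the function name is ours):

```typescript
// Map a raw oscillator value in [-1, 1] into the LFO's [min, max] range,
// scaled by amplitude around the range's midpoint.
// Mirrors the example above: min = -10, max = 10, amplitude = 0.5 → [-5, 5].
function lfoOutput(osc: number, min: number, max: number, amplitude: number): number {
  const midpoint = (min + max) / 2;
  const halfRange = (max - min) / 2;
  return midpoint + osc * amplitude * halfRange;
}

lfoOutput(1, -10, 10, 0.5);  // 5
lfoOutput(-1, -10, 10, 0.5); // -5
```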
property convert
convert: boolean;
Whether the input value is converted using the units.
property frequency
readonly frequency: Signal<'frequency'>;
The frequency value of the LFO
property input
readonly input: undefined;
There is no input node
property max
max: number;
The maximum output of the LFO.
property min
min: number;
The minimum output of the LFO.
property name
readonly name: string;
property output
readonly output: OutputNode;
The output of the LFO
property partials
partials: number[];
The oscillator's partials array.
See Also
property phase
phase: number;
The phase of the LFO.
property state
readonly state: BasicPlaybackState;
Returns the playback state of the source, either "started" or "stopped".
property type
type: ToneOscillatorType;
The type of the oscillator.
See Also
property units
units: keyof UnitMap;
The output units of the LFO.
method connect
connect: (node: InputNode, outputNum?: number, inputNum?: number) => this;
Parameter node
the destination to connect to
Parameter outputNum
the optional output number
Parameter inputNum
the input number
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => LFOOptions;
method start
start: (time?: Time) => this;
Start the LFO.
Parameter time
The time the LFO will start
method stop
stop: (time?: Time) => this;
Stop the LFO.
Parameter time
The time the LFO will stop
method sync
sync: () => this;
Sync the start/stop/pause to the transport and the frequency to the bpm of the transport
Example 1
const lfo = new Tone.LFO("8n"); lfo.sync().start(0); // the rate of the LFO will always be an eighth note, even as the tempo changes
method unsync
unsync: () => this;
Unsync the LFO from transport control.
class Limiter
class Limiter extends ToneAudioNode<LimiterOptions> {}
Limiter will limit the loudness of an incoming signal. Under the hood it's composed of a Compressor with a fast attack and release and max compression ratio.
Example 1
const limiter = new Tone.Limiter(-20).toDestination(); const oscillator = new Tone.Oscillator().connect(limiter); oscillator.start(); Component
constructor
constructor(threshold?: number);
Parameter threshold
The threshold above which the gain reduction is applied.
constructor
constructor(options?: Partial<LimiterOptions>);
property input
readonly input: InputNode;
property name
readonly name: string;
property output
readonly output: OutputNode;
property reduction
readonly reduction: number;
A read-only decibel value for metering purposes, representing the current amount of gain reduction that the compressor is applying to the signal.
property threshold
readonly threshold: Param<'decibels'>;
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => LimiterOptions;
class Loop
class Loop< Options extends LoopOptions = LoopOptions> extends ToneWithContext<Options> {}
Loop creates a looped callback at the specified interval. The callback can be started, stopped and scheduled along the Transport's timeline.
Example 1
const loop = new Tone.Loop((time) => { // triggered every eighth note. console.log(time); }, "8n").start(0); Tone.Transport.start(); Event
constructor
constructor(callback?: (time: Seconds) => void, interval?: Time);
Parameter callback
The callback to invoke at the time.
Parameter interval
The time between successive callback calls.
constructor
constructor(options?: Partial<LoopOptions>);
property callback
callback: (time: Seconds) => void;
The callback to invoke with the next event in the pattern
property humanize
humanize: boolean | Time;
Adds random variation of +/-0.01s to the scheduled time. Or set it to a time value to randomize by that amount.
property interval
interval: Time;
The time between successive callbacks.
Example 1
const loop = new Tone.Loop(); loop.interval = "8n"; // loop every 8n
property iterations
iterations: number;
The number of iterations of the loop. The default value is Infinity (loop forever).
property mute
mute: boolean;
Muting the Loop means that no callbacks are invoked.
property name
readonly name: string;
property playbackRate
playbackRate: number;
The playback rate of the loop. The normal playback rate is 1 (no change). A playbackRate of 2 would be twice as fast.
property probability
probability: number;
The probability of the callback being invoked.
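How such a probability gate might work can be sketched as follows (illustrative only; the function name and structure are assumptions, not Tone's internals):

```typescript
// Invoke the callback only with the given probability.
// The random source is injectable so the behavior can be tested deterministically.
function maybeInvoke(
  probability: number,
  callback: () => void,
  random: () => number = Math.random
): boolean {
  if (random() < probability) {
    callback();
    return true;
  }
  return false;
}

// probability = 1 always fires; probability = 0 never fires
maybeInvoke(1, () => console.log("tick"));
```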
property progress
readonly progress: number;
The progress of the loop as a value between 0-1. 0, when the loop is stopped or done iterating.
property state
readonly state: BasicPlaybackState;
The state of the Loop, either started or stopped.
method cancel
cancel: (time?: TransportTime) => this;
Cancel all scheduled events greater than or equal to the given time
Parameter time
The time after which events will be canceled.
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => LoopOptions;
method start
start: (time?: TransportTime) => this;
Start the loop at the specified time along the Transport's timeline.
Parameter time
When to start the Loop.
method stop
stop: (time?: TransportTime) => this;
Stop the loop at the given time.
Parameter time
When to stop the Loop.
class LowpassCombFilter
class LowpassCombFilter extends ToneAudioNode<LowpassCombFilterOptions> {}
A lowpass feedback comb filter. It is similar to FeedbackCombFilter, but includes a lowpass filter. Component
constructor
constructor(delayTime?: Time, resonance?: number, dampening?: Frequency);
Parameter delayTime
The delay time of the comb filter
Parameter resonance
The resonance (feedback) of the comb filter
Parameter dampening
The cutoff of the lowpass filter dampens the signal as it is fed back.
constructor
constructor(options?: RecursivePartial<LowpassCombFilterOptions>);
property dampening
dampening: Frequency;
The dampening control of the feedback
property delayTime
readonly delayTime: Param<'time'>;
The delayTime of the comb filter.
property input
readonly input: InputNode;
property name
readonly name: string;
property output
readonly output: OutputNode;
property resonance
readonly resonance: Param<'normalRange'>;
The amount of feedback of the delayed signal.
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => LowpassCombFilterOptions;
class MembraneSynth
class MembraneSynth extends Synth<MembraneSynthOptions> {}
MembraneSynth makes kick and tom sounds using a single oscillator with an amplitude envelope and frequency ramp. A Tone.OmniOscillator is routed through a Tone.AmplitudeEnvelope to the output. The drum quality of the sound comes from the frequency envelope applied during MembraneSynth.triggerAttack(note). The frequency envelope starts at note * .octaves and ramps to note over the duration of .pitchDecay.
Example 1
const synth = new Tone.MembraneSynth().toDestination(); synth.triggerAttackRelease("C2", "8n"); Instrument
constructor
constructor(options?: RecursivePartial<MembraneSynthOptions>);
Parameter options
the options available for the synth see defaults
property name
readonly name: string;
property octaves
octaves: number;
The number of octaves the pitch envelope ramps. Range: 0.5 to 8.
property pitchDecay
pitchDecay: Time;
The amount of time the frequency envelope takes. Range: 0 to 0.5.
property portamento
readonly portamento: number;
Portamento is ignored in this synth. Use pitchDecay instead.
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => MembraneSynthOptions;
method setNote
setNote: (note: Frequency | FrequencyClass, time?: Time) => this;
class Merge
class Merge extends ToneAudioNode<MergeOptions> {}
Merge brings multiple mono input channels into a single multichannel output channel.
Example 1
const merge = new Tone.Merge().toDestination(); // routing a sine tone in the left channel const osc = new Tone.Oscillator().connect(merge, 0, 0).start(); // and noise in the right channel const noise = new Tone.Noise().connect(merge, 0, 1).start(); Component
constructor
constructor(channels?: number);
Parameter channels
The number of channels to merge.
constructor
constructor(options?: Partial<MergeOptions>);
property input
readonly input: ChannelMergerNode;
Multiple input connections combine into a single output.
property name
readonly name: string;
property output
readonly output: ChannelMergerNode;
The output is the input channels combined into a single (multichannel) output
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => MergeOptions;
class MetalSynth
class MetalSynth extends Monophonic<MetalSynthOptions> {}
A highly inharmonic and spectrally complex source with a highpass filter and amplitude envelope which is good for making metallophone sounds. Based on CymbalSynth by [@polyrhythmatic](https://github.com/polyrhythmatic). Instrument
constructor
constructor(options?: RecursivePartial<MetalSynthOptions>);
property detune
readonly detune: Signal<'cents'>;
The detune applied to the oscillators
property envelope
readonly envelope: Envelope;
The envelope which is connected both to the amplitude and a highpass filter's cutoff frequency. The lower-limit of the filter is controlled by the resonance
property frequency
readonly frequency: Signal<'frequency'>;
The frequency of the cymbal
property harmonicity
harmonicity: number;
The harmonicity of the oscillators which make up the source. See Tone.FMOscillator.harmonicity. Range: 0.1 to 10.
property modulationIndex
modulationIndex: number;
The modulationIndex of the oscillators which make up the source. See FMOscillator.modulationIndex. Range: 1 to 100.
property name
readonly name: string;
property octaves
octaves: number;
The number of octaves above the "resonance" frequency that the filter ramps during the attack/decay envelope. Range: 0 to 8.
property resonance
resonance: Frequency;
The lower level of the highpass filter which is attached to the envelope. This value should be between [0, 7000].
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => MetalSynthOptions;
method getLevelAtTime
getLevelAtTime: (time: Time) => NormalRange;
class Meter
class Meter extends MeterBase<MeterOptions> {}
Meter gets the [RMS](https://en.wikipedia.org/wiki/Root_mean_square) of an input signal. It can also get the raw value of the input signal. Setting normalRange to true will convert the output to a range of 0-1. See an example using a graphical display [here](https://tonejs.github.io/examples/meter).
Example 1
const meter = new Tone.Meter(); const mic = new Tone.UserMedia(); mic.open(); // connect mic to the meter mic.connect(meter); // the current level of the mic setInterval(() => console.log(meter.getValue()), 100); Component
See Also
constructor
constructor(smoothing?: number);
Parameter smoothing
The amount of smoothing applied between frames.
constructor
constructor(options?: Partial<MeterOptions>);
property channels
readonly channels: number;
The number of channels of analysis.
property name
readonly name: string;
property normalRange
normalRange: boolean;
If the output should be in decibels or normal range between 0-1. If normalRange is false, the output range will be the measured decibel value; otherwise the decibel value will be converted to the range of 0-1.
property smoothing
smoothing: number;
A value between 0 and 1 where 0 represents no time averaging with the last analysis frame.
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => MeterOptions;
method getLevel
getLevel: () => number | number[];
Use getValue instead. For the previous getValue behavior, use DCMeter.
Deprecated
method getValue
getValue: () => number | number[];
Get the current value of the incoming signal. Output is in decibels when normalRange is false. If channels = 1, then the output is a single number representing the value of the input signal. When channels > 1, then each channel is returned as a value in a number array.
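The decibel-to-normal-range conversion mentioned above is the standard amplitude formula, which Tone also exposes as the dbToGain() function listed in the index. A minimal sketch:

```typescript
// Convert a decibel value to linear amplitude gain (0 dB → 1).
// Tone exposes the same conversion as Tone.dbToGain().
const dbToGain = (db: number): number => Math.pow(10, db / 20);

dbToGain(0);   // 1
dbToGain(-20); // 0.1
```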
class MidiClass
class MidiClass extends FrequencyClass<MidiNote> {}
Midi is a primitive type for encoding Time values. Midi can be constructed with or without the new keyword. Midi can be passed into the parameter of any method which takes time as an argument. Unit
property defaultUnits
readonly defaultUnits: string;
property name
readonly name: string;
method toFrequency
toFrequency: () => Hertz;
Return the value of the MIDI note as a frequency in Hertz.
Example 1
Tone.Midi(60).toFrequency(); // 261.6255653005986
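The conversion behind this example is the standard equal-tempered MIDI-to-frequency formula (Tone exposes it as the mtof() function listed in the index). A sketch:

```typescript
// Equal-tempered MIDI note number → frequency in Hz (A4 = note 69 = 440 Hz).
const mtof = (midi: number): number => 440 * Math.pow(2, (midi - 69) / 12);

mtof(60); // 261.6255653005986 (middle C)
mtof(69); // 440
```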
method toMidi
toMidi: () => MidiNote;
Return the value of the frequency as a MIDI note
Example 1
Tone.Midi(60).toMidi(); // 60
method transpose
transpose: (interval: Interval) => MidiClass;
Transposes the frequency by the given number of semitones. Returns a new transposed MidiClass.
Example 1
Tone.Midi("A4").transpose(3); // "C5"
class MidSideCompressor
class MidSideCompressor extends ToneAudioNode<MidSideCompressorOptions> {}
MidSideCompressor applies two different compressors to the mid and side signal components of the input.
See Also
MidSideSplit and MidSideMerge. Component
constructor
constructor(options?: RecursivePartial<MidSideCompressorOptions>);
property input
readonly input: InputNode;
property mid
readonly mid: Compressor;
The compression applied to the mid signal
property name
readonly name: string;
property output
readonly output: OutputNode;
property side
readonly side: Compressor;
The compression applied to the side signal
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => MidSideCompressorOptions;
class MidSideMerge
class MidSideMerge extends ToneAudioNode<MidSideMergeOptions> {}
MidSideMerge merges the mid and side signal after they've been separated by MidSideSplit
Mid = (Left + Right) / sqrt(2)   // obtain mid-signal from left and right
Side = (Left - Right) / sqrt(2)  // obtain side-signal from left and right
Component
constructor
constructor(options?: Partial<ToneWithContextOptions>);
property input
readonly input: undefined;
property mid
readonly mid: ToneAudioNode<ToneWithContextOptions>;
The "mid" input.
property name
readonly name: string;
property output
readonly output: Merge;
The merged signal
property side
readonly side: ToneAudioNode<ToneWithContextOptions>;
The "side" input.
method dispose
dispose: () => this;
class MidSideSplit
class MidSideSplit extends ToneAudioNode<MidSideSplitOptions> {}
Mid/Side processing separates the 'mid' signal (which comes out of both the left and the right channel) and the 'side' (which only comes out of the side channels).
Mid = (Left + Right) / sqrt(2)   // obtain mid-signal from left and right
Side = (Left - Right) / sqrt(2)  // obtain side-signal from left and right
Component
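The formulas above can be verified with plain arithmetic, independent of any audio nodes (a sketch of the math; the function names are illustrative):

```typescript
// Encode left/right into mid/side and decode back; the 1/sqrt(2) scaling
// makes the round trip energy-preserving.
function encode(left: number, right: number): { mid: number; side: number } {
  return {
    mid: (left + right) / Math.SQRT2,
    side: (left - right) / Math.SQRT2,
  };
}

function decode(mid: number, side: number): { left: number; right: number } {
  return {
    left: (mid + side) / Math.SQRT2,
    right: (mid - side) / Math.SQRT2,
  };
}

const { mid, side } = encode(0.8, 0.2);
const { left, right } = decode(mid, side); // left ≈ 0.8, right ≈ 0.2
```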
constructor
constructor(options?: Partial<ToneWithContextOptions>);
property input
readonly input: Split;
property mid
readonly mid: ToneAudioNode<ToneWithContextOptions>;
The "mid" output.
(Left+Right)/sqrt(2)
property name
readonly name: string;
property output
readonly output: undefined;
property side
readonly side: ToneAudioNode<ToneWithContextOptions>;
The "side" output.
(Left-Right)/sqrt(2)
method dispose
dispose: () => this;
class Mono
class Mono extends ToneAudioNode<MonoOptions> {}
Mono coerces the incoming mono or stereo signal into a mono signal where both left and right channels have the same value. This can be useful for [stereo imaging](https://en.wikipedia.org/wiki/Stereo_imaging). Component
constructor
constructor(options?: Partial<ToneWithContextOptions>);
property input
readonly input: Gain<'gain'>;
The stereo signal to sum to mono
property name
readonly name: string;
property output
readonly output: OutputNode;
The summed output of the multiple inputs
method dispose
dispose: () => this;
class MonoSynth
class MonoSynth extends Monophonic<MonoSynthOptions> {}
MonoSynth is composed of one oscillator, one filter, and two envelopes. The amplitude of the Oscillator and the cutoff frequency of the Filter are controlled by Envelopes.
Example 1
const synth = new Tone.MonoSynth({ oscillator: { type: "square" }, envelope: { attack: 0.1 } }).toDestination(); synth.triggerAttackRelease("C4", "8n"); Instrument
constructor
constructor(options?: RecursivePartial<MonoSynthOptions>);
property detune
readonly detune: Signal<'cents'>;
The detune control.
property envelope
readonly envelope: AmplitudeEnvelope;
The amplitude envelope.
property filter
readonly filter: Filter;
The filter.
property filterEnvelope
readonly filterEnvelope: FrequencyEnvelope;
The filter envelope.
property frequency
readonly frequency: Signal<'frequency'>;
The frequency control.
property name
readonly name: string;
property oscillator
readonly oscillator: OmniOscillator<any>;
The oscillator.
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => MonoSynthOptions;
method getLevelAtTime
getLevelAtTime: (time: Time) => NormalRange;
class MultibandCompressor
class MultibandCompressor extends ToneAudioNode<MultibandCompressorOptions> {}
A compressor with separate controls over low/mid/high dynamics.
Example 1
const multiband = new Tone.MultibandCompressor({ lowFrequency: 200, highFrequency: 1300, low: { threshold: -12 } }); Component
See Also
constructor
constructor(options?: RecursivePartial<MultibandCompressorOptions>);
property high
readonly high: Compressor;
The compressor applied to the high frequencies
property highFrequency
readonly highFrequency: Signal<'frequency'>;
mid/high crossover frequency.
property input
readonly input: InputNode;
property low
readonly low: Compressor;
The compressor applied to the low frequencies
property lowFrequency
readonly lowFrequency: Signal<'frequency'>;
low/mid crossover frequency.
property mid
readonly mid: Compressor;
The compressor applied to the mid frequencies
property name
readonly name: string;
property output
readonly output: ToneAudioNode<ToneWithContextOptions>;
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => MultibandCompressorOptions;
class MultibandSplit
class MultibandSplit extends ToneAudioNode<MultibandSplitOptions> {}
Split the incoming signal into three bands (low, mid, high) with two crossover frequency controls.
           +----------------------+
       +-> input < lowFrequency   +------------------> low
       |   +----------------------+
       |
       |   +--------------------------------------+
input ---+-> lowFrequency < input < highFrequency +--> mid
       |   +--------------------------------------+
       |
       |   +-----------------------+
       +-> highFrequency < input   +-----------------> high
           +-----------------------+
Component
constructor
constructor(lowFrequency?: Frequency, highFrequency?: Frequency);
Parameter lowFrequency
the low/mid crossover frequency
Parameter highFrequency
the mid/high crossover frequency
constructor
constructor(options?: Partial<MultibandSplitOptions>);
property high
readonly high: Filter;
The high band output.
property highFrequency
readonly highFrequency: Signal<'frequency'>;
The mid/high crossover frequency.
property input
readonly input: Gain<'gain'>;
the input
property low
readonly low: Filter;
The low band.
property lowFrequency
readonly lowFrequency: Signal<'frequency'>;
The low/mid crossover frequency.
property mid
readonly mid: Filter;
The mid band output.
property name
readonly name: string;
property output
readonly output: undefined;
no output node, use either low, mid or high outputs
property Q
readonly Q: Signal<'positive'>;
The Q or Quality of the filter
method dispose
dispose: () => this;
Clean up.
method getDefaults
static getDefaults: () => MultibandSplitOptions;
class Multiply
class Multiply< TypeName extends 'number' | 'positive' = 'number'> extends Signal<TypeName> {}
Multiply two incoming signals. Or, if a number is given in the constructor, multiplies the incoming signal by that value.
Example 1
// multiply two signals const mult = new Tone.Multiply(); const sigA = new Tone.Signal(3); const sigB = new Tone.Signal(4); sigA.connect(mult); sigB.connect(mult.factor); // output of mult is 12.
Example 2
// multiply a signal and a number const mult = new Tone.Multiply(10); const sig = new Tone.Signal(2).connect(mult); // the output of mult is 20. Signal
constructor
constructor(value?: number);
Parameter value
Constant value to multiply by.
constructor
constructor(options?: Partial<SignalOptions<TypeName>>);
property factor
factor: Param<TypeName>;
The multiplication factor. Can be set directly or a signal can be connected to it.
property input
input: InputNode;
The multiplicand input.
property name
readonly name: string;
property output
output: OutputNode;
The product of the input and factor
property override
readonly override: boolean;
Indicates if the value should be overridden on connection
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => SignalOptions<any>;
class Negate
class Negate extends SignalOperator<ToneAudioNodeOptions> {}
Negate the incoming signal. i.e. an input signal of 10 will output -10
Example 1
const neg = new Tone.Negate(); const sig = new Tone.Signal(-2).connect(neg); // output of neg is positive 2. Signal
class Noise
class Noise extends Source<NoiseOptions> {}
Noise is a noise generator. It uses looped noise buffers to save on performance. Noise supports the noise types: "pink", "white", and "brown". Read more about colors of noise on [Wikipedia](https://en.wikipedia.org/wiki/Colors_of_noise).
Example 1
// initialize the noise and start const noise = new Tone.Noise("pink").start(); // make an autofilter to shape the noise const autoFilter = new Tone.AutoFilter({ frequency: "8n", baseFrequency: 200, octaves: 8 }).toDestination().start(); // connect the noise noise.connect(autoFilter); // start the autofilter LFO autoFilter.start(); Source
constructor
constructor(type?: NoiseType);
Parameter type
the noise type (white|pink|brown)
constructor
constructor(options?: Partial<NoiseOptions>);
property fadeIn
fadeIn: Time;
The fadeIn time of the amplitude envelope.
property fadeOut
fadeOut: Time;
The fadeOut time of the amplitude envelope.
property name
readonly name: string;
property playbackRate
playbackRate: number;
The playback rate of the noise. Affects the "frequency" of the noise.
property type
type: NoiseType;
The type of the noise. Can be "white", "brown", or "pink".
Example 1
const noise = new Tone.Noise().toDestination().start(); noise.type = "brown";
method dispose
dispose: () => this;
Clean up.
method getDefaults
static getDefaults: () => NoiseOptions;
class NoiseSynth
class NoiseSynth extends Instrument<NoiseSynthOptions> {}
Tone.NoiseSynth is composed of Noise through an AmplitudeEnvelope.
+-------+    +-------------------+
| Noise +>--> AmplitudeEnvelope  +>--> Output
+-------+    +-------------------+
Example 1
const noiseSynth = new Tone.NoiseSynth().toDestination(); noiseSynth.triggerAttackRelease("8n", 0.05); Instrument
constructor
constructor(options?: RecursivePartial<NoiseSynthOptions>);
property envelope
readonly envelope: AmplitudeEnvelope;
The amplitude envelope.
property name
readonly name: string;
property noise
readonly noise: Noise;
The noise source.
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => NoiseSynthOptions;
method sync
sync: () => this;
method triggerAttack
triggerAttack: (time?: Time, velocity?: NormalRange) => this;
Start the attack portion of the envelopes. Unlike other instruments, Tone.NoiseSynth doesn't have a note.
Example 1
const noiseSynth = new Tone.NoiseSynth().toDestination(); noiseSynth.triggerAttack();
method triggerAttackRelease
triggerAttackRelease: ( duration: Time, time?: Time, velocity?: NormalRange) => this;
Trigger the attack and then the release after the duration.
Parameter duration
The amount of time to hold the note for
Parameter time
The time the note should start
Parameter velocity
The volume of the note (0-1)
Example 1
const noiseSynth = new Tone.NoiseSynth().toDestination(); // hold the note for 0.5 seconds noiseSynth.triggerAttackRelease(0.5);
method triggerRelease
triggerRelease: (time?: Time) => this;
Start the release portion of the envelopes.
class OfflineContext
class OfflineContext extends Context {}
Wrapper around the OfflineAudioContext Core
Example 1
// generate a single channel, 0.5 second buffer const context = new Tone.OfflineContext(1, 0.5, 44100); const osc = new Tone.Oscillator({ context }); context.render().then(buffer => { console.log(buffer.numberOfChannels, buffer.duration); });
constructor
constructor(channels: number, duration: number, sampleRate: number);
Parameter channels
The number of channels to render
Parameter duration
The duration to render in seconds
Parameter sampleRate
the sample rate to render at
constructor
constructor(context: OfflineAudioContext);
property currentTime
readonly currentTime: number;
Same as this.now()
property isOffline
readonly isOffline: boolean;
property name
readonly name: string;
method close
close: () => Promise<void>;
Close the context
method now
now: () => Seconds;
Override the now method to point to the internal clock time
method render
render: (asynchronous?: boolean) => Promise<ToneAudioBuffer>;
Render the output of the OfflineContext
Parameter asynchronous
If the clock should be rendered asynchronously, which will not block the main thread, but be slightly slower.
class OmniOscillator
class OmniOscillator<OscType extends AnyOscillator> extends Source<OmniOscillatorOptions> implements Omit<ToneOscillatorInterface, 'type'> {}
OmniOscillator aggregates all of the oscillator types into one.
Example 1
return Tone.Offline(() => { const omniOsc = new Tone.OmniOscillator("C#4", "pwm").toDestination().start(); }, 0.1, 1); Source
constructor
constructor(frequency?: Frequency, type?: OmniOscillatorType);
Parameter frequency
The initial frequency of the oscillator.
Parameter type
The type of the oscillator.
constructor
constructor(options?: Partial<OmniOscillatorOptions>);
property baseType
baseType: OscillatorType | 'pulse' | 'pwm';
The base type of the oscillator.
Example 1
const omniOsc = new Tone.OmniOscillator(440, "fmsquare4"); console.log(omniOsc.sourceType, omniOsc.baseType, omniOsc.partialCount);
See Also
property count
count: number;
The number of detuned oscillators when sourceType === "fat".
See Also
property detune
readonly detune: Signal<'cents'>;
property frequency
readonly frequency: Signal<'frequency'>;
property harmonicity
readonly harmonicity: Signal<'positive'>;
Harmonicity is the frequency ratio between the carrier and the modulator oscillators.
See Also
property modulationFrequency
readonly modulationFrequency: Signal<'frequency'>;
The modulationFrequency Signal of the oscillator when sourceType === "pwm". See PWMOscillator. Range: 0.1 to 5.
property modulationIndex
readonly modulationIndex: Signal<'positive'>;
The modulation index when the sourceType === "fm"
See Also
property modulationType
modulationType: ToneOscillatorType;
The type of the modulator oscillator. Only if the oscillator is set to "am" or "fm" types.
See Also
property name
readonly name: string;
property partialCount
partialCount: number;
property partials
partials: number[];
The value is an empty array when the type is not "custom". This is not available on "pwm" and "pulse" oscillator types.
See Also
property phase
phase: number;
property sourceType
sourceType: keyof OmniOscillatorSource;
The source type of the oscillator.
Example 1
const omniOsc = new Tone.OmniOscillator(440, "fmsquare"); console.log(omniOsc.sourceType); // 'fm'
property spread
spread: number;
The detune spread between the oscillators when sourceType === "fat".
See Also
property type
type: OmniOscillatorType;
The type of the oscillator. Can be any of the basic types: sine, square, triangle, sawtooth. Or prefix the basic types with "fm", "am", or "fat" to use the FMOscillator, AMOscillator or FatOscillator types. The oscillator could also be set to "pwm" or "pulse". All of the parameters of the oscillator's class are accessible when the oscillator is set to that type, but accessing them throws an error when it's not.
Example 1
const omniOsc = new Tone.OmniOscillator().toDestination().start(); omniOsc.type = "pwm"; // modulationFrequency is parameter which is available // only when the type is "pwm". omniOsc.modulationFrequency.value = 0.5;
property width
readonly width: Signal<'audioRange'>;
The width of the oscillator when sourceType === "pulse".
See Also
method asArray
asArray: (length?: number) => Promise<Float32Array>;
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => OmniOscillatorOptions;
method set
set: (props: Partial<OmniOscillatorOptions>) => this;
class OnePoleFilter
class OnePoleFilter extends ToneAudioNode<OnePoleFilterOptions> {}
A one pole filter with 6dB-per-octave rolloff. Either "highpass" or "lowpass". Note that changing the type or frequency may result in a discontinuity which can sound like a click or pop. References:
- http://www.earlevel.com/main/2012/12/15/a-one-pole-filter/
- http://www.dspguide.com/ch19/2.htm
- https://github.com/vitaliy-bobrov/js-rocks/blob/master/src/app/audio/effects/one-pole-filters.ts
Component
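The filter math from the first reference above can be sketched in a few lines (an illustration of the one-pole recurrence, not Tone's actual AudioWorklet implementation; the function name is ours):

```typescript
// One-pole lowpass, following the earlevel.com reference:
//   y[n] = a0 * x[n] + b1 * y[n-1],  b1 = e^(-2π·fc/fs),  a0 = 1 - b1
function onePoleLowpass(input: number[], cutoff: number, sampleRate: number): number[] {
  const b1 = Math.exp(-2 * Math.PI * (cutoff / sampleRate));
  const a0 = 1 - b1;
  const out: number[] = [];
  let prev = 0; // previous output sample y[n-1]
  for (const x of input) {
    prev = a0 * x + b1 * prev;
    out.push(prev);
  }
  return out;
}

// A DC (constant) input settles toward its value; high frequencies are attenuated.
onePoleLowpass([1, 1, 1, 1], 1000, 44100);
```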
constructor
constructor(frequency?: Frequency, type?: OnePoleFilterType);
Parameter frequency
The frequency
Parameter type
The filter type, either "lowpass" or "highpass"
constructor
constructor(options?: Partial<OnePoleFilterOptions>);
property frequency
frequency: Frequency;
The frequency value.
property input
readonly input: Gain<'gain'>;
property name
readonly name: string;
property output
readonly output: Gain<'gain'>;
property type
type: OnePoleFilterType;
The OnePole Filter type, either "highpass" or "lowpass"
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => OnePoleFilterOptions;
method getFrequencyResponse
getFrequencyResponse: (len?: number) => Float32Array;
Get the frequency response curve. This curve represents how the filter responds to frequencies between 20Hz and 20kHz.
Parameter len
The number of values to return. Returns the frequency response curve between 20Hz and 20kHz.
class Oscillator
class Oscillator extends Source<ToneOscillatorOptions> implements ToneOscillatorInterface {}
Oscillator supports a number of features including phase rotation, multiple oscillator types (see Oscillator.type), and Transport syncing (see Oscillator.syncFrequency).
Example 1
// make and start a 440hz sine tone const osc = new Tone.Oscillator(440, "sine").toDestination().start(); Source
constructor
constructor(frequency?: Frequency, type?: ToneOscillatorType);
Parameter frequency
Starting frequency
Parameter type
The oscillator type. Read more about type below.
constructor
constructor(options?: Partial<ToneOscillatorConstructorOptions>);
property baseType
baseType: OscillatorType;
property detune
detune: Signal<'cents'>;
The detune control signal.
property frequency
frequency: Signal<'frequency'>;
The frequency control.
property name
readonly name: string;
property partialCount
partialCount: number;
property partials
partials: number[];
property phase
phase: number;
property type
type: ToneOscillatorType;
method asArray
asArray: (length?: number) => Promise<Float32Array>;
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => ToneOscillatorOptions;
method getInitialValue
getInitialValue: () => AudioRange;
Returns the initial value of the oscillator when stopped. E.g. a "sine" oscillator with phase = 90 would return an initial value of -1.
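The phase-to-initial-value relationship can be sketched as plain math, consistent with the example in the description (the function name is illustrative; this is not Tone's source):

```typescript
// Initial (t = 0) value of a sine oscillator with a phase offset in degrees.
// Matches the documented example: phase = 90 → -1.
function sineInitialValue(phaseDegrees: number): number {
  return Math.sin((-phaseDegrees * Math.PI) / 180);
}

sineInitialValue(90); // -1
sineInitialValue(0);  // 0
```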
method syncFrequency
syncFrequency: () => this;
Sync the signal to the Transport's bpm. Any changes to the transport's bpm will also affect the oscillator's frequency.
Example 1
const osc = new Tone.Oscillator().toDestination().start();
osc.frequency.value = 440;
// the ratio between the bpm and the frequency will be maintained
osc.syncFrequency();
// double the tempo
Tone.Transport.bpm.value *= 2;
// the frequency of the oscillator is doubled to 880
method unsyncFrequency
unsyncFrequency: () => this;
Unsync the oscillator's frequency from the Transport.
class Panner
class Panner extends ToneAudioNode<TonePannerOptions> {}
Panner is an equal power Left/Right Panner. It is a wrapper around the StereoPannerNode.
Example 1
return Tone.Offline(() => {
    // move the input signal from right to left
    const panner = new Tone.Panner(1).toDestination();
    panner.pan.rampTo(-1, 0.5);
    const osc = new Tone.Oscillator(100).connect(panner).start();
}, 0.5, 2);
constructor
constructor(options?: Partial<TonePannerOptions>);
constructor
constructor(pan?: number);
Parameter pan
The initial panner value (Defaults to 0 = "center").
property input
readonly input: StereoPannerNode;
property name
readonly name: string;
property output
readonly output: StereoPannerNode;
property pan
readonly pan: Param<'audioRange'>;
The pan control. -1 = hard left, 1 = hard right.
Example 1
return Tone.Offline(() => {
    // pan hard right
    const panner = new Tone.Panner(1).toDestination();
    // pan hard left
    panner.pan.setValueAtTime(-1, 0.25);
    const osc = new Tone.Oscillator(50, "triangle").connect(panner).start();
}, 0.5, 2);
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => TonePannerOptions;
class Panner3D
class Panner3D extends ToneAudioNode<Panner3DOptions> {}
A spatialized panner node which supports equalpower or HRTF panning.
constructor
constructor(positionX: number, positionY: number, positionZ: number);
Parameter positionX
The initial x position.
Parameter positionY
The initial y position.
Parameter positionZ
The initial z position.
constructor
constructor(options?: Partial<Panner3DOptions>);
property coneInnerAngle
coneInnerAngle: number;
The angle, in degrees, inside of which there will be no volume reduction
property coneOuterAngle
coneOuterAngle: number;
The angle, in degrees, outside of which the volume will be reduced to a constant value of coneOuterGain
property coneOuterGain
coneOuterGain: number;
The gain outside of the coneOuterAngle
property distanceModel
distanceModel: DistanceModelType;
The distance model, either "linear", "inverse", or "exponential".
property input
readonly input: PannerNode;
property maxDistance
maxDistance: number;
The maximum distance between source and listener, after which the volume will not be reduced any further.
property name
readonly name: string;
property orientationX
readonly orientationX: Param<'number'>;
property orientationY
readonly orientationY: Param<'number'>;
property orientationZ
readonly orientationZ: Param<'number'>;
property output
readonly output: PannerNode;
property panningModel
panningModel: PanningModelType;
The panning model. Either "equalpower" or "HRTF".
property positionX
readonly positionX: Param<'number'>;
property positionY
readonly positionY: Param<'number'>;
property positionZ
readonly positionZ: Param<'number'>;
property refDistance
refDistance: number;
A reference distance for reducing volume as the source moves further from the listener.
property rolloffFactor
rolloffFactor: number;
Describes how quickly the volume is reduced as the source moves away from the listener.
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => Panner3DOptions;
method setOrientation
setOrientation: (x: number, y: number, z: number) => this;
Sets the orientation of the source in 3d space.
method setPosition
setPosition: (x: number, y: number, z: number) => this;
Sets the position of the source in 3d space.
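For instance, a source could be placed to the listener's right and oriented back toward the origin. A browser-only sketch; the coordinates here are arbitrary:

```typescript
const panner3d = new Tone.Panner3D({ panningModel: "HRTF" }).toDestination();
// place the source 3 units to the right and 2 units in front of the listener
panner3d.setPosition(3, 0, -2);
// point it back toward the origin
panner3d.setOrientation(-3, 0, 2);
const osc = new Tone.Oscillator(220).connect(panner3d).start();
```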
class PanVol
class PanVol extends ToneAudioNode<PanVolOptions> {}
PanVol is a Tone.Panner and Tone.Volume in one.
Example 1
// pan the incoming signal left and drop the volume
const panVol = new Tone.PanVol(-0.25, -12).toDestination();
const osc = new Tone.Oscillator().connect(panVol).start();
constructor
constructor(pan?: number, volume?: number);
Parameter pan
the initial pan
Parameter volume
The output volume.
constructor
constructor(options?: Partial<PanVolOptions>);
property input
readonly input: InputNode;
property mute
mute: boolean;
Mute/unmute the volume
property name
readonly name: string;
property output
readonly output: OutputNode;
property pan
readonly pan: Param<'audioRange'>;
The L/R panning control. -1 = hard left, 1 = hard right.
property volume
readonly volume: Param<'decibels'>;
The volume control in decibels.
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => PanVolOptions;
class Param
class Param<TypeName extends UnitName = 'number'> extends ToneWithContext<ParamOptions<TypeName>> implements AbstractParam<TypeName> {}
Param wraps the native Web Audio AudioParam to provide additional unit conversion functionality. It also serves as a base class for classes which have a single, automatable parameter.
constructor
constructor(param: AudioParam, units?: keyof UnitMap, convert?: boolean);
Parameter param
The AudioParam to wrap
Parameter units
The unit name
Parameter convert
Whether or not to convert the value to the target units
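When convert is true, values are translated from the named unit into the raw value the underlying AudioParam expects; for "decibels", for example, that is the standard amplitude conversion. A minimal sketch of the math (plain functions for illustration, not Tone's internals):

```typescript
// decibels -> linear gain, and back
const dbToGain = (db: number): number => Math.pow(10, db / 20);
const gainToDb = (gain: number): number => 20 * Math.log10(gain);

console.log(dbToGain(0));   // 1
console.log(dbToGain(-20)); // 0.1
```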
constructor
constructor(options: Partial<ParamOptions<TypeName>>);
property convert
convert: boolean;
property defaultValue
readonly defaultValue: string | number | TimeObject;
property input
readonly input: GainNode | AudioParam;
property maxValue
readonly maxValue: number;
property minValue
readonly minValue: number;
property name
readonly name: string;
property overridden
overridden: boolean;
property units
readonly units: keyof UnitMap;
property value
value: string | number | TimeObject;
method apply
apply: (param: Param | AudioParam) => this;
Apply all of the previously scheduled events to the passed in Param or AudioParam. The applied values will start at the context's current time and schedule all of the events which are scheduled on this Param onto the passed in param.
method cancelAndHoldAtTime
cancelAndHoldAtTime: (time: Time) => this;
method cancelScheduledValues
cancelScheduledValues: (time: Time) => this;
method dispose
dispose: () => this;
method exponentialApproachValueAtTime
exponentialApproachValueAtTime: ( value: UnitMap[TypeName], time: Time, rampTime: Time) => this;
method exponentialRampTo
exponentialRampTo: ( value: UnitMap[TypeName], rampTime: Time, startTime?: Time) => this;
method exponentialRampToValueAtTime
exponentialRampToValueAtTime: (value: UnitMap[TypeName], endTime: Time) => this;
method getDefaults
static getDefaults: () => ParamOptions<any>;
method getValueAtTime
getValueAtTime: (time: Time) => UnitMap[TypeName];
method linearRampTo
linearRampTo: ( value: UnitMap[TypeName], rampTime: Time, startTime?: Time) => this;
method linearRampToValueAtTime
linearRampToValueAtTime: (value: UnitMap[TypeName], endTime: Time) => this;
method rampTo
rampTo: (value: UnitMap[TypeName], rampTime?: Time, startTime?: Time) => this;
method setParam
setParam: (param: AudioParam) => this;
Replace the Param's internal AudioParam. Will apply scheduled curves onto the parameter and replace the connections.
method setRampPoint
setRampPoint: (time: Time) => this;
method setTargetAtTime
setTargetAtTime: ( value: UnitMap[TypeName], startTime: Time, timeConstant: Positive) => this;
method setValueAtTime
setValueAtTime: (value: UnitMap[TypeName], time: Time) => this;
method setValueCurveAtTime
setValueCurveAtTime: ( values: UnitMap[TypeName][], startTime: Time, duration: Time, scaling?: number) => this;
method targetRampTo
targetRampTo: ( value: UnitMap[TypeName], rampTime: Time, startTime?: Time) => this;
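The scheduling methods above can be chained to build an automation curve, for example on a Gain node's gain Param (a browser-only sketch):

```typescript
const gain = new Tone.Gain(0).toDestination();
const now = Tone.now();
// start silent, ramp up over 0.5s, hold for 0.5s, then fade out
gain.gain.setValueAtTime(0, now);
gain.gain.linearRampToValueAtTime(1, now + 0.5);
gain.gain.setValueAtTime(1, now + 1);
gain.gain.exponentialRampTo(0.001, 0.5, now + 1);
```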
class Part
class Part<ValueType = any> extends ToneEvent<ValueType> {}
Part is a collection of ToneEvents which can be started/stopped and looped as a single unit.
Example 1
const synth = new Tone.Synth().toDestination();
const part = new Tone.Part(((time, note) => {
    // the notes given as the second element in the array
    // will be passed in as the second argument
    synth.triggerAttackRelease(note, "8n", time);
}), [[0, "C2"], ["0:2", "C3"], ["0:3:2", "G2"]]).start(0);
Tone.Transport.start();
Example 2
const synth = new Tone.Synth().toDestination();
// use an array of objects as long as the object has a "time" attribute
const part = new Tone.Part(((time, value) => {
    // the value is an object which contains both the note and the velocity
    synth.triggerAttackRelease(value.note, "8n", time, value.velocity);
}), [
    { time: 0, note: "C3", velocity: 0.9 },
    { time: "0:2", note: "C4", velocity: 0.5 }
]).start(0);
Tone.Transport.start();
constructor
constructor( callback?: ToneEventCallback<CallbackType<ValueType>>, value?: ValueType[]);
Parameter callback
The callback to invoke on each event
Parameter value
the array of events
constructor
constructor(options?: Partial<PartOptions<ValueType>>);
property humanize
humanize: boolean | Time;
property length
readonly length: number;
The number of scheduled notes in the part.
property loop
loop: number | boolean;
If the part should loop or not between Part.loopStart and Part.loopEnd. If set to true, the part will loop indefinitely; if set to a number greater than 1, it will play that number of times; if set to false, 0, or 1, the part will only play once.
Example 1
const part = new Tone.Part();
// loop the part 8 times
part.loop = 8;
property loopEnd
loopEnd: Time;
The loopEnd point determines when it will loop if Part.loop is true.
property loopStart
loopStart: Time;
The loopStart point determines when it will loop if Part.loop is true.
property name
readonly name: string;
property playbackRate
playbackRate: number;
The playback rate of the part
property probability
probability: number;
property startOffset
startOffset: number;
method add
add: { (obj: { [key: string]: any; time: Time }): this; (time: Time, value?: any): this;};
Add an event to the part.
Parameter time
The time the note should start. If an object is passed in, it should have a 'time' attribute and the rest of the object will be used as the 'value'.
Parameter value
Any value to add to the timeline
Example 1
const part = new Tone.Part();
part.add("1m", "C#+11");
method at
at: (time: Time, value?: any) => ToneEvent | null;
Get/Set an Event's value at the given time. If a value is passed in and no event exists at the given time, one will be created with that value. If two events are at the same time, the first one will be returned.
Parameter time
The time of the event to get or set.
Parameter value
If a value is passed in, the value of the event at the given time will be set to it.
Example 1
const part = new Tone.Part();
part.at("1m"); // returns the part at the first measure
part.at("2m", "C2"); // set the value at "2m" to C2.
// if an event didn't exist at that time, it will be created.
method cancel
cancel: (after?: TransportTime | TransportTimeClass) => this;
Cancel scheduled state change events: i.e. "start" and "stop".
Parameter after
The time after which to cancel the scheduled events.
method clear
clear: () => this;
Remove all of the notes from the group.
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => PartOptions<any>;
method remove
remove: { (obj: { [key: string]: any; time: Time }): this; (time: Time, value?: any): this;};
Remove an event from the part. If the event at that time is a Part, it will remove the entire part.
Parameter time
The time of the event
Parameter value
Optionally select only a specific event value
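For example, removing one of several scheduled notes (a sketch with hypothetical note values):

```typescript
const part = new Tone.Part(undefined, [[0, "C2"], ["0:2", "C3"]]);
// remove only the event whose value is "C3" at beat 2
part.remove("0:2", "C3");
// remove whatever event is scheduled at time 0
part.remove(0);
```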
method start
start: (time?: TransportTime, offset?: Time) => this;
Start the part at the given time.
Parameter time
When to start the part.
Parameter offset
The offset from the start of the part to begin playing at.
method stop
stop: (time?: TransportTime) => this;
Stop the part at the given time.
Parameter time
When to stop the part.
class Pattern
class Pattern<ValueType> extends Loop<PatternOptions<ValueType>> {}
Pattern arpeggiates between the given notes in a number of patterns.
Example 1
const pattern = new Tone.Pattern((time, note) => {
    // the order of the notes passed in depends on the pattern
}, ["C2", "D4", "E5", "A6"], "upDown");
constructor
constructor( callback?: ToneEventCallback<ValueType>, values?: ValueType[], pattern?: PatternName);
Parameter callback
The callback to invoke with the event.
Parameter values
The values to arpeggiate over.
Parameter pattern
The name of the pattern
constructor
constructor(options?: Partial<PatternOptions<ValueType>>);
property callback
callback: (time: Seconds, value?: ValueType) => void;
The callback to be invoked at a regular interval
property index
readonly index: number;
The current index of the pattern.
property name
readonly name: string;
property pattern
pattern: PatternName;
The pattern type.
property value
readonly value: {};
The current value of the pattern.
property values
values: ValueType[];
The array of events.
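As a rough sketch of how a pattern such as "upDown" walks the values array (an illustration of the traversal order only, not Tone's implementation):

```typescript
// index order for an "upDown" traversal: up the array, then back down,
// without repeating the endpoints
function upDownIndices(length: number, steps: number): number[] {
    const cycle: number[] = [];
    for (let i = 0; i < length; i++) cycle.push(i);
    for (let i = length - 2; i > 0; i--) cycle.push(i);
    const out: number[] = [];
    for (let s = 0; s < steps; s++) out.push(cycle[s % cycle.length]);
    return out;
}

// for ["C2", "D4", "E5", "A6"] the indices run 0, 1, 2, 3, 2, 1, then repeat
console.log(upDownIndices(4, 8)); // [0, 1, 2, 3, 2, 1, 0, 1]
```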
method getDefaults
static getDefaults: () => PatternOptions<any>;
class Phaser
class Phaser extends StereoEffect<PhaserOptions> {}
Phaser is a phaser effect. Phasers work by changing the phase of different frequency components of an incoming signal. Read more on [Wikipedia](https://en.wikipedia.org/wiki/Phaser_(effect)). Inspiration for this phaser comes from [Tuna.js](https://github.com/Dinahmoe/tuna/).
Example 1
const phaser = new Tone.Phaser({
    frequency: 15,
    octaves: 5,
    baseFrequency: 1000
}).toDestination();
const synth = new Tone.FMSynth().connect(phaser);
synth.triggerAttackRelease("E3", "2n");
constructor
constructor(frequency?: Frequency, octaves?: number, baseFrequency?: Frequency);
Parameter frequency
The speed of the phasing.
Parameter octaves
The octaves of the effect.
Parameter baseFrequency
The base frequency of the filters.
constructor
constructor(options?: Partial<PhaserOptions>);
property baseFrequency
baseFrequency: Frequency;
The base frequency of the filters.
property frequency
readonly frequency: Signal<'frequency'>;
the frequency of the effect
property name
readonly name: string;
property octaves
octaves: number;
The number of octaves the phase goes above the baseFrequency
property Q
readonly Q: Signal<'positive'>;
The quality factor of the filters
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => PhaserOptions;
class PingPongDelay
class PingPongDelay extends StereoXFeedbackEffect<PingPongDelayOptions> {}
PingPongDelay is a feedback delay effect where the echo is heard first in one channel and next in the opposite channel. In a stereo system these are the right and left channels. In simpler terms, PingPongDelay is two Tone.FeedbackDelays with independent delay values; each delay is routed to one channel (left or right), and the channel triggered second always triggers at the same interval after the first.
Example 1
const pingPong = new Tone.PingPongDelay("4n", 0.2).toDestination();
const drum = new Tone.MembraneSynth().connect(pingPong);
drum.triggerAttackRelease("C4", "32n");
constructor
constructor(delayTime?: Time, feedback?: number);
Parameter delayTime
The delayTime between consecutive echoes.
Parameter feedback
The amount of the effected signal which is fed back through the delay.
constructor
constructor(options?: Partial<PingPongDelayOptions>);
property delayTime
readonly delayTime: Signal<'time'>;
the delay time signal
property name
readonly name: string;
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => PingPongDelayOptions;
class PitchShift
class PitchShift extends FeedbackEffect<PitchShiftOptions> {}
PitchShift does near-realtime pitch shifting on the incoming signal. The effect is achieved by speeding up or slowing down the delayTime of a DelayNode using a sawtooth wave. Algorithm found in [this pdf](http://dsp-book.narod.ru/soundproc.pdf). Additional reference by [Miller Puckette](http://msp.ucsd.edu/techniques/v0.11/book-html/node115.html).
constructor
constructor(pitch?: number);
Parameter pitch
The interval to transpose the incoming signal by.
constructor
constructor(options?: Partial<PitchShiftOptions>);
property delayTime
readonly delayTime: Param<'time'>;
The amount of delay on the input signal
property name
readonly name: string;
property pitch
pitch: number;
Repitch the incoming signal by some interval (measured in semi-tones).
Example 1
const pitchShift = new Tone.PitchShift().toDestination();
const osc = new Tone.Oscillator().connect(pitchShift).start();
pitchShift.pitch = -12; // down one octave
pitchShift.pitch = 7; // up a fifth
property windowSize
windowSize: number;
The window size corresponds roughly to the sample length in a looping sampler. Smaller values are desirable for a less noticeable delay time of the pitch shifted signal, but larger values will result in smoother pitch shifting for larger intervals. A nominal range of 0.03 to 0.1 is recommended.
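For example, the window can be widened when shifting by a large interval, trading a more audible delay for smoother shifting (a browser-only sketch using values within the recommended range):

```typescript
const pitchShift = new Tone.PitchShift().toDestination();
// a full octave down: favor smoothness over latency
pitchShift.pitch = -12;
pitchShift.windowSize = 0.1;
// for a small shift, a short window keeps the delay less noticeable:
// pitchShift.windowSize = 0.03;
```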
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => PitchShiftOptions;
class Player
class Player extends Source<PlayerOptions> {}
Player is an audio file player with start, loop, and stop functions.
Example 1
const player = new Tone.Player("https://tonejs.github.io/audio/berklee/gong_1.mp3").toDestination();
// play as soon as the buffer is loaded
player.autostart = true;
constructor
constructor(url?: string | ToneAudioBuffer | AudioBuffer, onload?: () => void);
Parameter url
Either the AudioBuffer or the url from which to load the AudioBuffer
Parameter onload
The function to invoke when the buffer is loaded.
constructor
constructor(options?: Partial<PlayerOptions>);
property autostart
autostart: boolean;
If the file should play as soon as the buffer is loaded.
property buffer
buffer: ToneAudioBuffer;
The audio buffer belonging to the player.
property fadeIn
fadeIn: Time;
The fadeIn time of the amplitude envelope.
property fadeOut
fadeOut: Time;
The fadeOut time of the amplitude envelope.
property loaded
readonly loaded: boolean;
If the buffer is loaded
property loop
loop: boolean;
If the buffer should loop once it's over.
Example 1
const player = new Tone.Player("https://tonejs.github.io/audio/drum-samples/breakbeat.mp3").toDestination();
player.loop = true;
player.autostart = true;
property loopEnd
loopEnd: Time;
If loop is true, the loop will end at this position.
property loopStart
loopStart: Time;
If loop is true, the loop will start at this position.
property name
readonly name: string;
property playbackRate
playbackRate: number;
Normal speed is 1. The pitch will change with the playback rate.
Example 1
const player = new Tone.Player("https://tonejs.github.io/audio/berklee/femalevoices_aa2_A5.mp3").toDestination();
// play at 1/4 speed
player.playbackRate = 0.25;
// play as soon as the buffer is loaded
player.autostart = true;
property reverse
reverse: boolean;
If the buffer should be reversed. Note that this sets the underlying ToneAudioBuffer.reverse, so if multiple players are pointing at the same ToneAudioBuffer, they will all be reversed.
Example 1
const player = new Tone.Player("https://tonejs.github.io/audio/berklee/chime_1.mp3").toDestination();
player.autostart = true;
player.reverse = true;
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => PlayerOptions;
method load
load: (url: string) => Promise<this>;
Load the audio file as an audio buffer. Decodes the audio asynchronously and invokes the callback once the audio buffer loads. Note: this does not need to be called if a url was passed in to the constructor. Only use this if you want to manually load a new url.
Parameter url
The url of the buffer to load. Filetype support depends on the browser.
method restart
restart: (time?: Seconds, offset?: Time, duration?: Time) => this;
Stop and then restart the player from the beginning (or offset)
Parameter time
When the player should start.
Parameter offset
The offset from the beginning of the sample to start at.
Parameter duration
How long the sample should play. If no duration is given, it will default to the full length of the sample (minus any offset)
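For instance, retriggering a loaded sample from half a second into the buffer (a browser-only sketch reusing a sample URL from the examples above):

```typescript
const player = new Tone.Player("https://tonejs.github.io/audio/berklee/gong_1.mp3", () => {
    player.start();
    // one second later, cut playback off and restart from 0.5s into the sample
    player.restart("+1", 0.5);
}).toDestination();
```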
method seek
seek: (offset: Time, when?: Time) => this;
Seek to a specific time in the player's buffer. If the source is no longer playing at that time, it will stop.
Parameter offset
The time to seek to.
Parameter when
The time for the seek event to occur.
Example 1
const player = new Tone.Player("https://tonejs.github.io/audio/berklee/gurgling_theremin_1.mp3", () => {
    player.start();
    // seek to the offset in 1 second from now
    player.seek(0.4, "+1");
}).toDestination();
method setLoopPoints
setLoopPoints: (loopStart: Time, loopEnd: Time) => this;
Set the loop start and end. Will only loop if loop is set to true.
Parameter loopStart
The loop start time
Parameter loopEnd
The loop end time
Example 1
const player = new Tone.Player("https://tonejs.github.io/audio/berklee/malevoices_aa2_F3.mp3").toDestination();
// loop between the given points
player.setLoopPoints(0.2, 0.3);
player.loop = true;
player.autostart = true;
method start
start: (time?: Time, offset?: Time, duration?: Time) => this;
Play the buffer at the given startTime. Optionally add an offset and/or duration which will play the buffer from a position within the buffer for the given duration.
Parameter time
When the player should start.
Parameter offset
The offset from the beginning of the sample to start at.
Parameter duration
How long the sample should play. If no duration is given, it will default to the full length of the sample (minus any offset)
class Players
class Players extends ToneAudioNode<PlayersOptions> {}
Players combines multiple Player objects.
constructor
constructor(urls?: ToneAudioBuffersUrlMap, onload?: () => void);
Parameter urls
An object mapping a name to a url.
Parameter onload
The function to invoke when all buffers are loaded.
constructor
constructor( urls?: ToneAudioBuffersUrlMap, options?: Partial<Omit<PlayersOptions, 'urls'>>);
Parameter urls
An object mapping a name to a url.
Parameter options
The remaining options associated with the players
constructor
constructor(options?: Partial<PlayersOptions>);
property fadeIn
fadeIn: Time;
The fadeIn time of the envelope applied to the source.
property fadeOut
fadeOut: Time;
The fadeOut time of each of the sources.
property input
readonly input: undefined;
Players has no input.
property loaded
readonly loaded: boolean;
If all the buffers are loaded or not
property mute
mute: boolean;
Mute the output.
property name
readonly name: string;
property output
readonly output: OutputNode;
The combined output of all of the players
property state
readonly state: BasicPlaybackState;
The state of the players object. Returns "started" if any of the players are playing.
property volume
readonly volume: Param<'decibels'>;
The volume of the output in decibels.
method add
add: ( name: string, url: string | ToneAudioBuffer | AudioBuffer, callback?: () => void) => this;
Add a player by name and url to the Players
Parameter name
A unique name to give the player
Parameter url
Either the url of the buffer, or a buffer which will be added with the given name.
Parameter callback
The callback to invoke when the url is loaded.
Example 1
const players = new Tone.Players();
players.add("gong", "https://tonejs.github.io/audio/berklee/gong_1.mp3", () => {
    console.log("gong loaded");
    players.player("gong").start();
});
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => PlayersOptions;
method has
has: (name: string) => boolean;
True if the buffers object has a buffer by that name.
Parameter name
The key or index of the buffer.
method player
player: (name: string) => Player;
Get a player by name.
Parameter name
The player's name, as defined in the constructor object or via the add method.
method stopAll
stopAll: (time?: Time) => this;
Stop all of the players at the given time
Parameter time
The time to stop all of the players.
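For example, cutting off all running players one second from now (a browser-only sketch; the sample names and URLs are hypothetical):

```typescript
// hypothetical sample urls
const players = new Tone.Players({
    kick: "https://example.com/kick.mp3",
    snare: "https://example.com/snare.mp3",
}, () => {
    players.player("kick").start();
    players.player("snare").start("+0.5");
    // cut both off one second from now
    players.stopAll("+1");
}).toDestination();
```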
class PluckSynth
class PluckSynth extends Instrument<PluckSynthOptions> {}
Karplus-Strong string synthesis.
Example 1
const plucky = new Tone.PluckSynth().toDestination();
plucky.triggerAttack("C4", "+0.5");
plucky.triggerAttack("C3", "+1");
plucky.triggerAttack("C2", "+1.5");
plucky.triggerAttack("C1", "+2");
constructor
constructor(options?: RecursivePartial<PluckSynthOptions>);
property attackNoise
attackNoise: number;
The amount of noise at the attack. Nominal range of [0.1, 20].
property dampening
dampening: Frequency;
The dampening control, i.e. the lowpass filter frequency of the comb filter. Nominal range of [0, 7000].
property name
readonly name: string;
property release
release: Time;
The release time which corresponds to a resonance ramp down to 0
property resonance
resonance: number;
The amount of resonance of the pluck. Also correlates to the sustain duration.
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => PluckSynthOptions;
method triggerAttack
triggerAttack: (note: Frequency, time?: Time) => this;
method triggerRelease
triggerRelease: (time?: Time) => this;
Ramp down the resonance to 0 over the duration of the release time.
class PolySynth
class PolySynth<Voice extends Monophonic<any> = Synth> extends Instrument< VoiceOptions<Voice>> {}
PolySynth handles voice creation and allocation for any instruments passed in as the second parameter. PolySynth is not a synthesizer by itself, it merely manages voices of one of the other types of synths, allowing any of the monophonic synthesizers to be polyphonic.
Example 1
const synth = new Tone.PolySynth().toDestination();
// set the attributes across all the voices using 'set'
synth.set({ detune: -1200 });
// play a chord
synth.triggerAttackRelease(["C4", "E4", "A4"], 1);
constructor
constructor( voice?: VoiceConstructor<Voice>, options?: RecursivePartial<OmitMonophonicOptions<VoiceOptions<Voice>>>);
Parameter voice
The constructor of the voices
Parameter options
The options object to set the synth voice
constructor
constructor(options?: Partial<PolySynthOptions<Voice>>);
property activeVoices
readonly activeVoices: number;
The number of active voices.
property maxPolyphony
maxPolyphony: number;
The polyphony limit.
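Raising the limit allows denser chords before new voices are dropped, at the cost of CPU (a browser-only sketch; the value chosen here is arbitrary):

```typescript
const poly = new Tone.PolySynth(Tone.Synth).toDestination();
// allow more simultaneous voices before new notes are dropped
poly.maxPolyphony = 64;
poly.triggerAttackRelease(["C3", "E3", "G3", "B3", "D4", "F#4"], "2n");
```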
property name
readonly name: string;
method dispose
dispose: () => this;
method get
get: () => VoiceOptions<Voice>;
method getDefaults
static getDefaults: () => PolySynthOptions<Synth>;
method releaseAll
releaseAll: (time?: Time) => this;
Trigger the release portion of all the currently active voices immediately. Useful for silencing the synth.
method set
set: (options: RecursivePartial<VoiceOptions<Voice>>) => this;
Set a member/attribute of the voices
Example 1
const poly = new Tone.PolySynth().toDestination();
// set all of the voices using an options object for the synth type
poly.set({
    envelope: {
        attack: 0.25
    }
});
poly.triggerAttackRelease("Bb3", 0.2);
method sync
sync: () => this;
method triggerAttack
triggerAttack: ( notes: Frequency | Frequency[], time?: Time, velocity?: NormalRange) => this;
Trigger the attack portion of the note
Parameter notes
The notes to play. Accepts a single Frequency or an array of frequencies.
Parameter time
The start time of the note.
Parameter velocity
The velocity of the note.
Example 1
const synth = new Tone.PolySynth(Tone.FMSynth).toDestination();
// trigger a chord immediately with a velocity of 0.2
synth.triggerAttack(["Ab3", "C4", "F5"], Tone.now(), 0.2);
method triggerAttackRelease
triggerAttackRelease: ( notes: Frequency | Frequency[], duration: Time | Time[], time?: Time, velocity?: NormalRange) => this;
Trigger the attack and release after the specified duration
Parameter notes
The notes to play. Accepts a single Frequency or an array of frequencies.
Parameter duration
the duration of the note
Parameter time
if no time is given, defaults to now
Parameter velocity
the velocity of the attack (0-1)
Example 1
const poly = new Tone.PolySynth(Tone.AMSynth).toDestination();
// can pass in an array of durations as well
poly.triggerAttackRelease(["Eb3", "G4", "Bb4", "D5"], [4, 3, 2, 1]);
method triggerRelease
triggerRelease: (notes: Frequency | Frequency[], time?: Time) => this;
Trigger the release of the note. Unlike monophonic instruments, a note (or array of notes) needs to be passed in as the first argument.
Parameter notes
The notes to play. Accepts a single Frequency or an array of frequencies.
Parameter time
When the release will be triggered.
Example 1
const poly = new Tone.PolySynth(Tone.AMSynth).toDestination();
poly.triggerAttack(["Ab3", "C4", "F5"]);
// trigger the release of the given notes.
poly.triggerRelease(["Ab3", "C4"], "+1");
poly.triggerRelease("F5", "+3");
class Pow
class Pow extends SignalOperator<PowOptions> {}
Pow applies an exponent to the incoming signal. The incoming signal must be AudioRange [-1, 1]
Example 1
const pow = new Tone.Pow(2);
const sig = new Tone.Signal(0.5).connect(pow);
// output of pow is 0.25
constructor
constructor(value?: number);
Parameter value
Constant exponent value to use
constructor
constructor(options?: Partial<PowOptions>);
property input
input: WaveShaper;
property name
readonly name: string;
property output
output: WaveShaper;
property value
value: number;
The value of the exponent.
method dispose
dispose: () => this;
Clean up.
method getDefaults
static getDefaults: () => PowOptions;
class PulseOscillator
class PulseOscillator extends Source<PulseOscillatorOptions> implements ToneOscillatorInterface {}
PulseOscillator is an oscillator with control over pulse width, also known as the duty cycle. At 50% duty cycle (width = 0) the wave is a square wave. [Read more](https://wigglewave.wordpress.com/2014/08/16/pulse-waveforms-and-harmonics/).
[The original docs illustrate the waveform at widths of -0.75, -0.5, -0.25, 0.0, 0.25, 0.5, and 0.75: the width controls the duty cycle of the pulse, with width = 0 producing a square wave.]
Example 1
return Tone.Offline(() => {
    const pulse = new Tone.PulseOscillator(50, 0.4).toDestination().start();
}, 0.1, 1);
constructor
constructor(frequency?: Frequency, width?: number);
Parameter frequency
The frequency of the oscillator
Parameter width
The width of the pulse
constructor
constructor(options?: Partial<PulseOscillatorOptions>);
property baseType
readonly baseType: string;
The baseType of the oscillator. Always returns "pulse".
property detune
readonly detune: Signal<'cents'>;
The detune in cents.
property frequency
readonly frequency: Signal<'frequency'>;
The frequency control.
property name
readonly name: string;
property partialCount
readonly partialCount: number;
No partials for this waveform type.
property partials
readonly partials: number[];
The partials of the waveform. Cannot set partials for this waveform type
property phase
phase: number;
The phase of the oscillator in degrees.
property type
readonly type: string;
The type of the oscillator. Always returns "pulse".
property width
readonly width: Signal<'audioRange'>;
The width of the pulse.
Example 1
return Tone.Offline(() => {
    const pulse = new Tone.PulseOscillator(20, 0.8).toDestination().start();
}, 0.1, 1);
method asArray
asArray: (length?: number) => Promise<Float32Array>;
method dispose
dispose: () => this;
Clean up method.
method getDefaults
static getDefaults: () => PulseOscillatorOptions;
class PWMOscillator
class PWMOscillator extends Source<PWMOscillatorOptions> implements ToneOscillatorInterface {}
PWMOscillator modulates the width of a Tone.PulseOscillator at the modulationFrequency. This has the effect of continuously changing the timbre of the oscillator by altering the harmonics generated.
Example 1
return Tone.Offline(() => {
    const pwm = new Tone.PWMOscillator(60, 0.3).toDestination().start();
}, 0.1, 1);
constructor
constructor(frequency?: Frequency, modulationFrequency?: Frequency);
Parameter frequency
The starting frequency of the oscillator.
Parameter modulationFrequency
The modulation frequency of the width of the pulse.
constructor
constructor(options?: Partial<PWMOscillatorOptions>);
property baseType
readonly baseType: string;
The baseType of the oscillator. Always returns "pwm".
property detune
readonly detune: Signal<'cents'>;
The detune of the oscillator.
property frequency
readonly frequency: Signal<'frequency'>;
The frequency control.
property modulationFrequency
readonly modulationFrequency: Signal<'frequency'>;
The width modulation rate of the oscillator.
Example 1
return Tone.Offline(() => { const osc = new Tone.PWMOscillator(20, 2).toDestination().start(); }, 0.1, 1);
property name
readonly name: string;
property partialCount
readonly partialCount: number;
No partials for this waveform type.
property partials
readonly partials: number[];
The partials of the waveform. Cannot set partials for this waveform type
property phase
phase: number;
The phase of the oscillator in degrees.
property sourceType
readonly sourceType: string;
property type
readonly type: string;
The type of the oscillator. Always returns "pwm".
method asArray
asArray: (length?: number) => Promise<Float32Array>;
method dispose
dispose: () => this;
Clean up.
method getDefaults
static getDefaults: () => PWMOscillatorOptions;
class Recorder
class Recorder extends ToneAudioNode<RecorderOptions> {}
A wrapper around the MediaRecorder API. Unlike the rest of Tone.js, this module does not offer any sample-accurate scheduling, because that is not a feature of the MediaRecorder API. It is only natively supported in Chrome and Firefox. For a cross-browser shim, install [audio-recorder-polyfill](https://www.npmjs.com/package/audio-recorder-polyfill).
Example 1
const recorder = new Tone.Recorder();
const synth = new Tone.Synth().connect(recorder);
// start recording
recorder.start();
// generate a few notes
synth.triggerAttackRelease("C3", 0.5);
synth.triggerAttackRelease("C4", 0.5, "+1");
synth.triggerAttackRelease("C5", 0.5, "+2");
// wait for the notes to end and stop the recording
setTimeout(async () => {
    // the recorded audio is returned as a blob
    const recording = await recorder.stop();
    // download the recording by creating an anchor element and blob url
    const url = URL.createObjectURL(recording);
    const anchor = document.createElement("a");
    anchor.download = "recording.webm";
    anchor.href = url;
    anchor.click();
}, 4000);
Component
constructor
constructor(options?: Partial<RecorderOptions>);
property input
readonly input: Gain<'gain'>;
property mimeType
readonly mimeType: string;
The mime type is the format that the audio is encoded in. For Chrome that is typically webm encoded as "vorbis".
property name
readonly name: string;
property output
readonly output: undefined;
property state
readonly state: PlaybackState;
Get the playback state of the Recorder, either "started", "stopped" or "paused"
property supported
static readonly supported: boolean;
Test if your platform supports the Media Recorder API. If it's not available, try installing this [polyfill](https://www.npmjs.com/package/audio-recorder-polyfill).
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => RecorderOptions;
method pause
pause: () => this;
Pause the recorder
method start
start: () => Promise<void>;
Start the Recorder. Returns a promise which resolves when the recorder has started.
method stop
stop: () => Promise<Blob>;
Stop the recorder. Returns a promise with the content recorded up to this point, encoded as mimeType.
class Reverb
class Reverb extends Effect<ReverbOptions> {}
Simple convolution created with decaying noise. Generates an Impulse Response Buffer with Tone.Offline then feeds the IR into ConvolverNode. The impulse response generation is async, so you have to wait until ready resolves before it will make a sound.
Inspiration from [ReverbGen](https://github.com/adelespinasse/reverbGen). Copyright (c) 2014 Alan deLespinasse Apache 2.0 License.
Effect
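The decaying-noise impulse response described above can be sketched in plain code. This is a minimal illustration of the ReverbGen-style approach (white noise shaped by an exponential decay envelope); the function name and the envelope constant are assumptions, not Tone.js internals.

```javascript
// Build an impulse response of decaying white noise. The constant -6.9
// in the exponent brings the envelope down to roughly -60 dB (e^-6.9)
// by the end of the decay time.
function generateImpulseResponse(decaySeconds, sampleRate) {
  const length = Math.round(decaySeconds * sampleRate);
  const ir = new Float32Array(length);
  for (let i = 0; i < length; i++) {
    const envelope = Math.exp(-6.9 * i / length);
    ir[i] = (Math.random() * 2 - 1) * envelope;
  }
  return ir;
}
```

Convolving the input with such a buffer (which is what the ConvolverNode does) produces the reverb tail.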
constructor
constructor(decay?: number);
Parameter decay
The amount of time it will reverberate for.
constructor
constructor(options?: Partial<ReverbOptions>);
property decay
decay: Time;
The duration of the reverb.
property name
readonly name: string;
property preDelay
preDelay: Time;
The amount of time before the reverb is fully ramped in.
property ready
ready: Promise<void>;
method dispose
dispose: () => this;
method generate
generate: () => Promise<this>;
Generate the Impulse Response. Returns a promise which resolves to this object once the IR has been generated.
method getDefaults
static getDefaults: () => ReverbOptions;
class Sampler
class Sampler extends Instrument<SamplerOptions> {}
Pass in an object which maps the note's pitch or midi value to the url, then you can trigger the attack and release of that note like other instruments. By automatically repitching the samples, it is possible to play pitches which were not explicitly included, which can save loading time.
For sample or buffer playback where repitching is not necessary, use Player.
Example 1
const sampler = new Tone.Sampler({
    urls: {
        A1: "A1.mp3",
        A2: "A2.mp3",
    },
    baseUrl: "https://tonejs.github.io/audio/casio/",
    onload: () => {
        sampler.triggerAttackRelease(["C1", "E1", "G1", "B1"], 0.5);
    }
}).toDestination();
Instrument
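The repitching described above boils down to simple arithmetic: pick the recorded sample closest to the requested note and adjust its playback rate by 2^(semitones/12). The sketch below works on MIDI note numbers and is illustrative only, not Tone.js internals.

```javascript
// Choose the nearest recorded sample and compute the playback rate
// needed to shift it to the requested pitch.
function repitch(targetMidi, availableMidi) {
  const nearest = availableMidi.reduce((best, m) =>
    Math.abs(m - targetMidi) < Math.abs(best - targetMidi) ? m : best
  );
  const semitones = targetMidi - nearest;
  return { sample: nearest, playbackRate: Math.pow(2, semitones / 12) };
}
```

For example, playing C4 (60) with only A2 (45) and A4 (69) loaded would repitch the closer sample rather than require a dedicated C4 file.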
constructor
constructor(samples?: SamplesMap, onload?: () => void, baseUrl?: string);
Parameter samples
An object of samples mapping either Midi Note Numbers or Scientific Pitch Notation to the url of that sample.
Parameter onload
The callback to invoke when all of the samples are loaded.
Parameter baseUrl
The root URL of all of the samples, which is prepended to all the URLs.
constructor
constructor( samples?: SamplesMap, options?: Partial<Omit<SamplerOptions, 'urls'>>);
Parameter samples
An object of samples mapping either Midi Note Numbers or Scientific Pitch Notation to the url of that sample.
Parameter options
The remaining options associated with the sampler
constructor
constructor(options?: Partial<SamplerOptions>);
property attack
attack: Time;
The envelope applied to the beginning of the sample.
property curve
curve: OneShotSourceCurve;
The shape of the attack/release curve. Either "linear" or "exponential"
property loaded
readonly loaded: boolean;
If the buffers are loaded or not
property name
readonly name: string;
property release
release: Time;
The envelope applied to the end of the sample.
method add
add: ( note: Note | MidiNote, url: string | ToneAudioBuffer | AudioBuffer, callback?: () => void) => this;
Add a note to the sampler.
Parameter note
The buffer's pitch.
Parameter url
Either the url of the buffer, or a buffer which will be added with the given name.
Parameter callback
The callback to invoke when the url is loaded.
method dispose
dispose: () => this;
Clean up
method getDefaults
static getDefaults: () => SamplerOptions;
method releaseAll
releaseAll: (time?: Time) => this;
Release all currently active notes.
Parameter time
When to release the notes.
method sync
sync: () => this;
method triggerAttack
triggerAttack: ( notes: Frequency | Frequency[], time?: Time, velocity?: NormalRange) => this;
Parameter notes
The note to play, or an array of notes.
Parameter time
When to play the note
Parameter velocity
The velocity to play the sample back.
method triggerAttackRelease
triggerAttackRelease: ( notes: Frequency[] | Frequency, duration: Time | Time[], time?: Time, velocity?: NormalRange) => this;
Invoke the attack phase, then after the duration, invoke the release.
Parameter notes
The note to play and release, or an array of notes.
Parameter duration
The time the note should be held
Parameter time
When to start the attack
Parameter velocity
The velocity of the attack
method triggerRelease
triggerRelease: (notes: Frequency | Frequency[], time?: Time) => this;
Parameter notes
The note to release, or an array of notes.
Parameter time
When to release the note.
class Scale
class Scale< Options extends ScaleOptions = ScaleOptions> extends SignalOperator<Options> {}
Performs a linear scaling on an input signal. Scales a NormalRange input to between outputMin and outputMax.
Example 1
const scale = new Tone.Scale(50, 100);
const signal = new Tone.Signal(0.5).connect(scale);
// the output of scale equals 75
Signal
constructor
constructor(min?: number, max?: number);
Parameter min
The output value when the input is 0.
Parameter max
The output value when the input is 1.
constructor
constructor(options?: Partial<ScaleOptions>);
property input
input: InputNode;
property max
max: number;
The maximum output value. This number is output when the input value is 1.
property min
min: number;
The minimum output value. This number is output when the input value is 0.
property name
readonly name: string;
property output
output: OutputNode;
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => ScaleOptions;
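The mapping Scale applies is a single line of arithmetic: a NormalRange input [0, 1] is mapped linearly onto [min, max].

```javascript
// The linear mapping behind Tone.Scale: output = input * (max - min) + min
function scale(input, min, max) {
  return input * (max - min) + min;
}
```

With `min = 50` and `max = 100`, an input of 0.5 gives 75, matching the example above.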
class ScaleExp
class ScaleExp extends Scale<ScaleExpOptions> {}
Performs an exponential scaling on an input signal. Scales a NormalRange value [0,1] exponentially to the output range of outputMin to outputMax.
Example 1
const scaleExp = new Tone.ScaleExp(0, 100, 2);
const signal = new Tone.Signal(0.5).connect(scaleExp);
Signal
constructor
constructor(min?: number, max?: number, exponent?: number);
Parameter min
The output value when the input is 0.
Parameter max
The output value when the input is 1.
Parameter exponent
The exponent which scales the incoming signal.
constructor
constructor(options?: Partial<ScaleExpOptions>);
property exponent
exponent: number;
property name
readonly name: string;
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => ScaleExpOptions;
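ScaleExp differs from Scale only in that the input is raised to the exponent before the linear mapping, pulling mid-range inputs toward min when the exponent is greater than 1:

```javascript
// Exponential scaling: the NormalRange input is raised to the exponent
// before being mapped linearly onto [min, max].
function scaleExp(input, min, max, exponent) {
  return Math.pow(input, exponent) * (max - min) + min;
}
```

So the ScaleExp(0, 100, 2) example above maps an input of 0.5 to 25 rather than 50.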
class Sequence
class Sequence<ValueType = any> extends ToneEvent<ValueType> {}
A sequence is an alternate notation of a part. Instead of passing in an array of [time, event] pairs, pass in an array of events which will be spaced at the given subdivision. Sub-arrays will subdivide that beat by the number of items in the array. Sequence notation inspiration from [Tidal Cycles](http://tidalcycles.org/)
Example 1
const synth = new Tone.Synth().toDestination();
const seq = new Tone.Sequence((time, note) => {
    synth.triggerAttackRelease(note, 0.1, time);
    // subdivisions are given as subarrays
}, ["C4", ["E4", "D4", "E4"], "G4", ["A4", "G4"]]).start(0);
Tone.Transport.start();
Event
constructor
constructor( callback?: ToneEventCallback<ValueType>, events?: SequenceEventDescription<ValueType>, subdivision?: Time);
Parameter callback
The callback to invoke with every note
Parameter events
The sequence of events
Parameter subdivision
The subdivision between which events are placed.
constructor
constructor(options?: Partial<SequenceOptions<ValueType>>);
property events
events: any[];
The sequence
property humanize
humanize: boolean | Time;
property length
readonly length: number;
The number of scheduled events
property loop
loop: number | boolean;
property loopEnd
loopEnd: number;
The index at which the sequence should end looping
property loopStart
loopStart: number;
The index at which the sequence should start looping
property name
readonly name: string;
property playbackRate
playbackRate: number;
property probability
probability: number;
property progress
readonly progress: number;
property startOffset
startOffset: number;
property subdivision
readonly subdivision: number;
The subdivision of the sequence. This can only be set in the constructor. The subdivision is the interval between successive steps.
method clear
clear: () => this;
Clear all of the events
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => SequenceOptions<any>;
method start
start: (time?: TransportTime, offset?: number) => this;
Start the part at the given time.
Parameter time
When to start the part.
Parameter offset
The offset index to start at
method stop
stop: (time?: TransportTime) => this;
Stop the part at the given time.
Parameter time
When to stop the part.
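The timing rule described above (events spaced at the subdivision, sub-arrays splitting their slot evenly) can be sketched as a small recursive function. It returns [time, event] pairs and is illustrative only, not how Tone.js schedules internally.

```javascript
// Compute the start time of each event in a Sequence-style array.
// A sub-array splits its parent slot by the number of items it holds.
function sequenceTimes(events, subdivisionSeconds, offset = 0) {
  const out = [];
  events.forEach((event, i) => {
    const slotStart = offset + i * subdivisionSeconds;
    if (Array.isArray(event)) {
      // recurse with the slot divided among the sub-array's items
      out.push(...sequenceTimes(event, subdivisionSeconds / event.length, slotStart));
    } else {
      out.push([slotStart, event]);
    }
  });
  return out;
}
```

For `["C4", ["E4", "G4"]]` with a 0.5 s subdivision, "C4" lands at 0, and the sub-array splits the second slot so "E4" lands at 0.5 and "G4" at 0.75.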
class Signal
class Signal<TypeName extends UnitName = 'number'> extends ToneAudioNode<SignalOptions<any>> implements AbstractParam<TypeName> {}
A signal is an audio-rate value. Tone.Signal is a core component of the library. Unlike a number, Signals can be scheduled with sample-level accuracy. Tone.Signal has all of the methods available to native Web Audio [AudioParam](http://webaudio.github.io/web-audio-api/#the-audioparam-interface) as well as additional conveniences. Read more about working with signals [here](https://github.com/Tonejs/Tone.js/wiki/Signals).
Example 1
const osc = new Tone.Oscillator().toDestination().start();
// a scheduleable signal which can be connected to control an AudioParam or another Signal
const signal = new Tone.Signal({
    value: "C4",
    units: "frequency"
}).connect(osc.frequency);
// the scheduled ramp controls the connected signal
signal.rampTo("C2", 4, "+0.5");
Signal
constructor
constructor(value?: string | number | TimeObject, units?: keyof UnitMap);
Parameter value
Initial value of the signal
Parameter units
The unit name, e.g. "frequency"
constructor
constructor(options?: Partial<SignalOptions<TypeName>>);
property convert
convert: boolean;
property input
readonly input: InputNode;
property maxValue
readonly maxValue: number;
property minValue
readonly minValue: number;
property name
readonly name: string;
property output
readonly output: OutputNode;
property overridden
overridden: boolean;
property override
readonly override: boolean;
Indicates if the value should be overridden on connection.
property units
readonly units: keyof UnitMap;
property value
value: string | number | TimeObject;
method apply
apply: (param: Param | AudioParam) => this;
See Also
method cancelAndHoldAtTime
cancelAndHoldAtTime: (time: Time) => this;
method cancelScheduledValues
cancelScheduledValues: (time: Time) => this;
method connect
connect: (destination: InputNode, outputNum?: number, inputNum?: number) => this;
method dispose
dispose: () => this;
method exponentialApproachValueAtTime
exponentialApproachValueAtTime: ( value: UnitMap[TypeName], time: Time, rampTime: Time) => this;
method exponentialRampTo
exponentialRampTo: ( value: UnitMap[TypeName], rampTime: Time, startTime?: Time) => this;
method exponentialRampToValueAtTime
exponentialRampToValueAtTime: (value: UnitMap[TypeName], time: Time) => this;
method getDefaults
static getDefaults: () => SignalOptions<any>;
method getValueAtTime
getValueAtTime: (time: Time) => UnitMap[TypeName];
method linearRampTo
linearRampTo: ( value: UnitMap[TypeName], rampTime: Time, startTime?: Time) => this;
method linearRampToValueAtTime
linearRampToValueAtTime: (value: UnitMap[TypeName], time: Time) => this;
method rampTo
rampTo: (value: UnitMap[TypeName], rampTime: Time, startTime?: Time) => this;
method setRampPoint
setRampPoint: (time: Time) => this;
method setTargetAtTime
setTargetAtTime: ( value: UnitMap[TypeName], startTime: Time, timeConstant: number) => this;
method setValueAtTime
setValueAtTime: (value: UnitMap[TypeName], time: Time) => this;
method setValueCurveAtTime
setValueCurveAtTime: ( values: UnitMap[TypeName][], startTime: Time, duration: Time, scaling?: number) => this;
method targetRampTo
targetRampTo: ( value: UnitMap[TypeName], rampTime: Time, startTime?: Time) => this;
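The curve behind setTargetAtTime (and targetRampTo) is the exponential approach defined by the Web Audio spec: the value decays toward the target with the given time constant. Written out as plain math:

```javascript
// v(t) = target + (v0 - target) * e^(-(t - t0) / timeConstant)
// After one time constant the value has covered ~63% of the distance
// to the target; it never quite reaches it exactly.
function targetAtTime(t, startTime, startValue, target, timeConstant) {
  if (t <= startTime) return startValue;
  return target + (startValue - target) * Math.exp(-(t - startTime) / timeConstant);
}
```

This is why targetRampTo is useful for smooth, click-free parameter changes: the slope is steep at first and flattens as the value nears the target.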
class Solo
class Solo extends ToneAudioNode<SoloOptions> {}
Solo lets you isolate a specific audio stream. When an instance is set to solo=true, it will mute all other instances of Solo.
Example 1
const soloA = new Tone.Solo().toDestination();
const oscA = new Tone.Oscillator("C4", "sawtooth").connect(soloA);
const soloB = new Tone.Solo().toDestination();
const oscB = new Tone.Oscillator("E4", "square").connect(soloB);
soloA.solo = true;
// no audio will pass through soloB
Component
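The mute rule can be sketched as shared state across instances: an instance is muted whenever some instance is soloed and it is not that one. This is a minimal illustration of the behavior, not Tone.js internals (Tone.Solo gates the audio through a Gain node).

```javascript
// All instances share one set of soloed members.
const soloed = new Set();

class SoloSketch {
  set solo(value) {
    value ? soloed.add(this) : soloed.delete(this);
  }
  get solo() {
    return soloed.has(this);
  }
  get muted() {
    // muted when something is soloed and it isn't this instance
    return soloed.size > 0 && !soloed.has(this);
  }
}
```

Setting `a.solo = true` makes every other instance report `muted === true`; clearing it unmutes them again.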
constructor
constructor(solo?: boolean);
Parameter solo
If the connection should be initially solo'ed.
constructor
constructor(options?: Partial<SoloOptions>);
property input
readonly input: Gain<'gain'>;
property muted
readonly muted: boolean;
If the current instance is muted, i.e. another instance is soloed
property name
readonly name: string;
property output
readonly output: Gain<'gain'>;
property solo
solo: boolean;
Isolates this instance and mutes all other instances of Solo. Only one instance can be soloed at a time. A soloed instance will report solo=false when another instance is soloed.
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => SoloOptions;
class Split
class Split extends ToneAudioNode<SplitOptions> {}
Split splits an incoming signal into the number of given channels.
Example 1
const split = new Tone.Split();
// stereoSignal.connect(split);
Component
constructor
constructor(channels?: number);
Parameter channels
The number of channels to split the incoming signal into.
constructor
constructor(options?: Partial<SplitOptions>);
property input
readonly input: ChannelSplitterNode;
property name
readonly name: string;
property output
readonly output: ChannelSplitterNode;
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => SplitOptions;
class StereoWidener
class StereoWidener extends MidSideEffect<StereoWidenerOptions> {}
Applies a width factor to the mid/side separation. 0 is all mid and 1 is all side. Algorithm found in [kvraudio forums](http://www.kvraudio.com/forum/viewtopic.php?t=212587).
Mid *= 2 * (1 - width)
Side *= 2 * width
Effect
constructor
constructor(width?: number);
Parameter width
The stereo width. A width of 0 is mono and 1 is stereo. 0.5 is no change.
constructor
constructor(options?: Partial<StereoWidenerOptions>);
property name
readonly name: string;
property width
readonly width: Signal<'normalRange'>;
The width control. 0 = 100% mid. 1 = 100% side. 0.5 = no change.
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => StereoWidenerOptions;
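The width math above, applied to a stereo pair: convert L/R to mid/side, scale mid by 2*(1 - width) and side by 2*width, then convert back. A small sketch (not Tone.js internals, which do this per-sample in the audio graph):

```javascript
// Apply the StereoWidener formula to one stereo sample pair.
function widen(left, right, width) {
  const mid = (left + right) / 2;
  const side = (left - right) / 2;
  const m = mid * 2 * (1 - width);
  const s = side * 2 * width;
  return [m + s, m - s]; // back to left/right
}
```

At width 0.5 the pair passes through unchanged; at 0 both channels collapse to the mid (mono) signal; at 1 only the side (difference) signal remains.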
class Subtract
class Subtract extends Signal {}
Subtracts the signal connected to the subtrahend from the signal connected to the input.
Example 1
// subtract a scalar from a signal
const sub = new Tone.Subtract(1);
const sig = new Tone.Signal(4).connect(sub);
// the output of sub is 3
Example 2
// subtract two signals
const sub = new Tone.Subtract();
const sigA = new Tone.Signal(10);
const sigB = new Tone.Signal(2.5);
sigA.connect(sub);
sigB.connect(sub.subtrahend);
// output of sub is 7.5
Signal
constructor
constructor(value?: number);
Parameter value
The value to subtract from the incoming signal. If the value is omitted, it will subtract the second signal from the first.
constructor
constructor(options?: Partial<SignalOptions<'number'>>);
property input
readonly input: Gain<'gain'>;
property name
readonly name: string;
property output
readonly output: Gain<'gain'>;
property override
override: boolean;
property subtrahend
subtrahend: Param<'number'>;
The value which is subtracted from the main signal
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => SignalOptions<'number'>;
class SyncedSignal
class SyncedSignal<TypeName extends UnitName = 'number'> extends Signal<TypeName> {}
Adds the ability to synchronize the signal to the TransportClass.
Signal
constructor
constructor(value?: string | number | TimeObject, units?: keyof UnitMap);
Parameter value
Initial value of the signal
Parameter units
The unit name, e.g. "frequency"
constructor
constructor(options?: Partial<SignalOptions<TypeName>>);
property name
readonly name: string;
property output
readonly output: OutputNode;
property override
readonly override: boolean;
Don't override when something is connected to the input
method cancelAndHoldAtTime
cancelAndHoldAtTime: (time: TransportTime) => this;
method cancelScheduledValues
cancelScheduledValues: (startTime: TransportTime) => this;
method dispose
dispose: () => this;
method exponentialRampTo
exponentialRampTo: ( value: UnitMap[TypeName], rampTime: Time, startTime?: TransportTime) => this;
method exponentialRampToValueAtTime
exponentialRampToValueAtTime: ( value: UnitMap[TypeName], time: TransportTime) => this;
method getValueAtTime
getValueAtTime: (time: TransportTime) => UnitMap[TypeName];
method linearRampTo
linearRampTo: ( value: UnitMap[TypeName], rampTime: Time, startTime?: TransportTime) => this;
method linearRampToValueAtTime
linearRampToValueAtTime: (value: UnitMap[TypeName], time: TransportTime) => this;
method setRampPoint
setRampPoint: (time: TransportTime) => this;
method setTargetAtTime
setTargetAtTime: ( value: any, startTime: TransportTime, timeConstant: number) => this;
method setValueAtTime
setValueAtTime: (value: UnitMap[TypeName], time: TransportTime) => this;
method setValueCurveAtTime
setValueCurveAtTime: ( values: UnitMap[TypeName][], startTime: TransportTime, duration: Time, scaling: NormalRange) => this;
method targetRampTo
targetRampTo: ( value: UnitMap[TypeName], rampTime: Time, startTime?: TransportTime) => this;
class Synth
class Synth< Options extends SynthOptions = SynthOptions> extends Monophonic<Options> {}
Synth is composed simply of a OmniOscillator routed through an AmplitudeEnvelope.
Signal flow: OmniOscillator -> AmplitudeEnvelope -> Output
Example 1
const synth = new Tone.Synth().toDestination();
synth.triggerAttackRelease("C4", "8n");
Instrument
constructor
constructor(options?: RecursivePartial<SynthOptions>);
Parameter options
the options available for the synth.
property detune
readonly detune: Signal<'cents'>;
The detune signal
property envelope
readonly envelope: AmplitudeEnvelope;
The envelope
property frequency
readonly frequency: Signal<'frequency'>;
The frequency signal
property name
readonly name: string;
property oscillator
readonly oscillator: OmniOscillator<any>;
The oscillator.
method dispose
dispose: () => this;
clean up
method getDefaults
static getDefaults: () => SynthOptions;
method getLevelAtTime
getLevelAtTime: (time: Time) => NormalRange;
class TicksClass
class TicksClass extends TransportTimeClass<Ticks> {}
Ticks is a primitive type for encoding Time values. Ticks can be constructed with or without the new keyword. Ticks can be passed into the parameter of any method which takes time as an argument.
Example 1
const t = Tone.Ticks("4n");
// a quarter note as ticks
Unit
property defaultUnits
readonly defaultUnits: TimeBaseUnit;
property name
readonly name: string;
method toSeconds
toSeconds: () => Seconds;
Return the time in seconds
method toTicks
toTicks: () => Ticks;
Return the time in ticks
class TimeClass
class TimeClass< Type extends Seconds | Ticks = Seconds, Unit extends string = TimeBaseUnit> extends TimeBaseClass<Type, Unit> {}
TimeClass is a primitive type for encoding and decoding Time values. TimeClass can be passed into the parameter of any method which takes time as an argument.
Parameter val
The time value.
Parameter units
The units of the value.
Example 1
const time = Tone.Time("4n");
// a quarter note
Unit
property name
readonly name: string;
method quantize
quantize: (subdiv: Time, percent?: number) => Type;
Quantize the time by the given subdivision. Optionally add a percentage which will move the time value towards the ideal quantized value by that percentage.
Parameter subdiv
The subdivision to quantize to
Parameter percent
Move the time value towards the quantized value by a percentage.
Example 1
Tone.Time(21).quantize(2); // returns 22
Tone.Time(0.6).quantize("4n", 0.5); // returns 0.55
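The arithmetic behind quantize: snap the time to the nearest multiple of the subdivision, and with a percent, move only that fraction of the way toward the snapped value. The sketch below takes the subdivision in seconds (a "4n" at 120 bpm is 0.5 s, which is how the second documented example yields 0.55).

```javascript
// Quantize a time to the nearest multiple of a subdivision, moving
// only `percent` of the way toward the ideal quantized value.
function quantize(time, subdivision, percent = 1) {
  const ideal = Math.round(time / subdivision) * subdivision;
  return time + (ideal - time) * percent;
}
```

With percent below 1 the result is "humanized": nudged toward the grid without landing exactly on it.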
method toBarsBeatsSixteenths
toBarsBeatsSixteenths: () => BarsBeatsSixteenths;
Return the time encoded as Bars:Beats:Sixteenths.
method toMidi
toMidi: () => MidiNote;
Return the value as a midi note.
method toNotation
toNotation: () => Subdivision;
Convert a Time to Notation. The notation value will be the closest representation between a 1m and a 128th note.
Example 1
// if the Transport is at 120bpm:
Tone.Time(2).toNotation(); // returns "1m"
method toSeconds
toSeconds: () => Seconds;
Return the time in seconds.
method toTicks
toTicks: () => Ticks;
Return the time in ticks.
class ToneAudioBuffer
class ToneAudioBuffer extends Tone {}
AudioBuffer loading and storage. ToneAudioBuffer is used internally by all classes that make requests for audio files such as Tone.Player, Tone.Sampler and Tone.Convolver.
Example 1
const buffer = new Tone.ToneAudioBuffer("https://tonejs.github.io/audio/casio/A1.mp3", () => {
    console.log("loaded");
});
Core
constructor
constructor( url?: string | ToneAudioBuffer | AudioBuffer, onload?: (buffer: ToneAudioBuffer) => void, onerror?: (error: Error) => void);
Parameter url
The url to load, or the audio buffer to set.
Parameter onload
A callback which is invoked after the buffer is loaded. It's recommended to use ToneAudioBuffer.on('load', callback) instead, since it will give you a callback when _all_ buffers are loaded.
Parameter onerror
The callback to invoke if there is an error
constructor
constructor(options?: Partial<ToneAudioBufferOptions>);
property baseUrl
static baseUrl: string;
A path which is prefixed before every url.
property downloads
static downloads: Promise<void>[];
All of the downloads
property duration
readonly duration: number;
The duration of the buffer in seconds.
property length
readonly length: number;
The length of the buffer in samples
property loaded
readonly loaded: boolean;
If the buffer is loaded or not
property name
readonly name: string;
property numberOfChannels
readonly numberOfChannels: number;
The number of discrete audio channels. Returns 0 if no buffer is loaded.
property onload
onload: (buffer: ToneAudioBuffer) => void;
Callback when the buffer is loaded.
property reverse
reverse: boolean;
Reverse the buffer.
property sampleRate
readonly sampleRate: number;
The sample rate of the AudioBuffer
method dispose
dispose: () => this;
clean up
method fromArray
static fromArray: (array: Float32Array | Float32Array[]) => ToneAudioBuffer;
Create a ToneAudioBuffer from an array. To create a multichannel AudioBuffer, pass in a multidimensional array.
Parameter array
The array to fill the audio buffer. Returns a ToneAudioBuffer created from the array.
method fromUrl
static fromUrl: (url: string) => Promise<ToneAudioBuffer>;
Creates a ToneAudioBuffer from a URL, returns a promise which resolves to a ToneAudioBuffer
Parameter url
The url to load. Returns a promise which resolves to a ToneAudioBuffer.
method get
get: () => AudioBuffer | undefined;
The audio buffer stored in the object.
method getChannelData
getChannelData: (channel: number) => Float32Array;
Returns the Float32Array representing the PCM audio data for the specific channel.
Parameter channel
The channel number to return. Returns the audio as a TypedArray.
method getDefaults
static getDefaults: () => ToneAudioBufferOptions;
method load
static load: (url: string) => Promise<AudioBuffer>;
Makes a fetch request for the selected url, then decodes the file as an audio buffer. Invokes the callback once the audio buffer loads.
Parameter url
The url of the buffer to load. filetype support depends on the browser.
Returns
A Promise which resolves with this ToneAudioBuffer
Loads a url using fetch and returns the AudioBuffer.
method loaded
static loaded: () => Promise<void>;
Returns a Promise which resolves when all of the buffers have loaded
method set
set: (buffer: AudioBuffer | ToneAudioBuffer) => this;
Pass in an AudioBuffer or ToneAudioBuffer to set the value of this buffer.
method slice
slice: (start: Seconds, end?: Seconds) => ToneAudioBuffer;
Cut a subsection of the array and return a buffer of the subsection. Does not modify the original buffer
Parameter start
The time to start the slice
Parameter end
The end time to slice. If none is given will default to the end of the buffer
method supportsType
static supportsType: (url: string) => boolean;
Checks a url's extension to see if the current browser can play that file type.
Parameter url
The url/extension to test. Returns whether the file extension can be played.
Example 1
Tone.ToneAudioBuffer.supportsType("wav"); // returns true
Tone.ToneAudioBuffer.supportsType("path/to/file.wav"); // returns true
method toArray
toArray: (channel?: number) => Float32Array | Float32Array[];
Get the buffer as an array. Single channel buffers will return a 1-dimensional Float32Array, and multichannel buffers will return multidimensional arrays.
Parameter channel
Optionally only copy a single channel from the array.
method toMono
toMono: (chanNum?: number) => this;
Sums multiple channels into a single channel.
Parameter chanNum
Optionally use only this single channel as the mono signal instead of summing.
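The two toMono behaviors can be sketched on raw channel data: with no channel argument the channels are summed sample-by-sample; with a channel number, that single channel becomes the mono signal. Illustrative only, not Tone.js internals.

```javascript
// Collapse an array of Float32Array channels to one channel.
function toMono(channels, chanNum) {
  if (chanNum !== undefined) {
    // use a single channel as the mono signal
    return Float32Array.from(channels[chanNum]);
  }
  const length = channels[0].length;
  const mono = new Float32Array(length);
  for (const channel of channels) {
    for (let i = 0; i < length; i++) mono[i] += channel[i];
  }
  return mono;
}
```

Note that straight summing can exceed the [-1, 1] range; dividing by the channel count instead would average and avoid clipping.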
class ToneAudioBuffers
class ToneAudioBuffers extends Tone {}
A Map-like data structure for holding multiple audio buffers.
Example 1
const pianoSamples = new Tone.ToneAudioBuffers({
    A1: "https://tonejs.github.io/audio/casio/A1.mp3",
    A2: "https://tonejs.github.io/audio/casio/A2.mp3",
}, () => {
    const player = new Tone.Player().toDestination();
    // play one of the samples when they all load
    player.buffer = pianoSamples.get("A2");
    player.start();
});
Example 2
// To pass in additional parameters in the second parameter
const buffers = new Tone.ToneAudioBuffers({
    urls: {
        A1: "A1.mp3",
        A2: "A2.mp3",
    },
    onload: () => console.log("loaded"),
    baseUrl: "https://tonejs.github.io/audio/casio/"
});
Core
constructor
constructor( urls?: ToneAudioBuffersUrlMap, onload?: () => void, baseUrl?: string);
Parameter urls
An object literal or array of urls to load.
Parameter onload
The callback to invoke when the buffers are loaded.
Parameter baseUrl
A prefix url to add before all the urls
constructor
constructor(options?: Partial<ToneAudioBuffersOptions>);
property baseUrl
baseUrl: string;
A path which is prefixed before every url.
property loaded
readonly loaded: boolean;
If the buffers are loaded or not
property name
readonly name: string;
method add
add: ( name: string | number, url: string | AudioBuffer | ToneAudioBuffer, callback?: () => void, onerror?: (e: Error) => void) => this;
Add a buffer by name and url to the Buffers
Parameter name
A unique name to give the buffer
Parameter url
Either the url of the buffer, or a buffer which will be added with the given name.
Parameter callback
The callback to invoke when the url is loaded.
Parameter onerror
Invoked if the buffer can't be loaded
method dispose
dispose: () => this;
method get
get: (name: string | number) => ToneAudioBuffer;
Get a buffer by name. If an array was loaded, then use the array index.
Parameter name
The key or index of the buffer.
method getDefaults
static getDefaults: () => ToneAudioBuffersOptions;
method has
has: (name: string | number) => boolean;
True if the buffers object has a buffer by that name.
Parameter name
The key or index of the buffer.
class ToneAudioNode
abstract class ToneAudioNode< Options extends ToneAudioNodeOptions = ToneAudioNodeOptions> extends ToneWithContext<Options> {}
ToneAudioNode is the base class for classes which process audio.
Core
property channelCount
channelCount: number;
channelCount is the number of channels used when up-mixing and down-mixing connections to any inputs to the node. The default value is 2 except for specific nodes where its value is specially determined.
property channelCountMode
channelCountMode: ChannelCountMode;
channelCountMode determines how channels will be counted when up-mixing and down-mixing connections to any inputs to the node. The default value is "max". This attribute has no effect for nodes with no inputs.
- "max" - computedNumberOfChannels is the maximum of the number of channels of all connections to an input. In this mode channelCount is ignored.
- "clamped-max" - computedNumberOfChannels is determined as for "max" and then clamped to a maximum value of the given channelCount.
- "explicit" - computedNumberOfChannels is the exact value as specified by the channelCount.
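The three modes reduce to a small decision function, sketched here from the Web Audio spec's rules (the function name is illustrative; browsers compute this internally):

```javascript
// Derive computedNumberOfChannels from the mode, the node's
// channelCount, and the channel counts of incoming connections.
function computedNumberOfChannels(mode, channelCount, connectionChannels) {
  const max = Math.max(...connectionChannels);
  switch (mode) {
    case "max":
      return max; // channelCount is ignored
    case "clamped-max":
      return Math.min(max, channelCount);
    case "explicit":
      return channelCount;
  }
}
```

For a node with channelCount 2 receiving a 4-channel connection, "max" yields 4 while "clamped-max" and "explicit" both yield 2.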
property channelInterpretation
channelInterpretation: ChannelInterpretation;
channelInterpretation determines how individual channels will be treated when up-mixing and down-mixing connections to any inputs to the node. The default value is "speakers".
property input
abstract input: InputNode;
The input node or nodes. If the object is a source, it does not have any input and this.input is undefined.
property name
abstract readonly name: string;
The name of the class
property numberOfInputs
readonly numberOfInputs: number;
The number of inputs feeding into the AudioNode. For source nodes, this will be 0.
Example 1
const node = new Tone.Gain(); console.log(node.numberOfInputs);
property numberOfOutputs
readonly numberOfOutputs: number;
The number of outputs of the AudioNode.
Example 1
const node = new Tone.Gain(); console.log(node.numberOfOutputs);
property output
abstract output: OutputNode;
The output node or nodes. If the object is a sink, it does not have any output and this.output is undefined.
method chain
chain: (...nodes: InputNode[]) => this;
Connect the output of this node to the rest of the nodes in series.
Example 1
const player = new Tone.Player("https://tonejs.github.io/audio/drum-samples/handdrum-loop.mp3"); player.autostart = true; const filter = new Tone.AutoFilter(4).start(); const distortion = new Tone.Distortion(0.5); // connect the player to the filter, distortion and then to the master output player.chain(filter, distortion, Tone.Destination);
method connect
connect: (destination: InputNode, outputNum?: number, inputNum?: number) => this;
connect the output of a ToneAudioNode to an AudioParam, AudioNode, or ToneAudioNode
Parameter destination
The output to connect to
Parameter outputNum
The output to connect from
Parameter inputNum
The input to connect to
method disconnect
disconnect: ( destination?: InputNode, outputNum?: number, inputNum?: number) => this;
disconnect the output
method dispose
dispose: () => this;
Dispose and disconnect
method fan
fan: (...nodes: InputNode[]) => this;
connect the output of this node to the rest of the nodes in parallel.
Example 1
const player = new Tone.Player("https://tonejs.github.io/audio/drum-samples/conga-rhythm.mp3"); player.autostart = true; const pitchShift = new Tone.PitchShift(4).toDestination(); const filter = new Tone.Filter("G5").toDestination(); // connect a node to the pitch shift and filter in parallel player.fan(pitchShift, filter);
method toDestination
toDestination: () => this;
Connect the output to the context's destination node.
Example 1
const osc = new Tone.Oscillator("C2").start(); osc.toDestination();
method toMaster
toMaster: () => this;
class ToneBufferSource
class ToneBufferSource extends OneShotSource<ToneBufferSourceOptions> {}
Wrapper around the native BufferSourceNode. Source
constructor
constructor(url?: string | ToneAudioBuffer | AudioBuffer, onload?: () => void);
Parameter url
The buffer to play or url to load
Parameter onload
The callback to invoke when the buffer is loaded.
constructor
constructor(options?: Partial<ToneBufferSourceOptions>);
property buffer
buffer: ToneAudioBuffer;
The audio buffer belonging to the player.
property curve
curve: OneShotSourceCurve;
The curve applied to the fades, either "linear" or "exponential"
property fadeIn
fadeIn: Time;
The fadeIn time of the amplitude envelope.
property fadeOut
fadeOut: Time;
The fadeOut time of the amplitude envelope.
property loop
loop: boolean;
If the buffer should loop once it's over.
property loopEnd
loopEnd: Time;
If loop is true, the loop will end at this position.
property loopStart
loopStart: Time;
If loop is true, the loop will start at this position.
property name
readonly name: string;
property playbackRate
readonly playbackRate: Param<'positive'>;
The playback rate of the buffer.
method dispose
dispose: () => this;
Clean up.
method getDefaults
static getDefaults: () => ToneBufferSourceOptions;
method start
start: (time?: Time, offset?: Time, duration?: Time, gain?: GainFactor) => this;
Start the buffer
Parameter time
When the player should start.
Parameter offset
The offset from the beginning of the sample to start at.
Parameter duration
How long the sample should play. If no duration is given, it will default to the full length of the sample (minus any offset)
Parameter gain
The gain to play the buffer back at.
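The documented default for duration can be sketched as follows (hypothetical helper, not the Tone.js implementation): with no explicit duration, playback runs for the buffer length minus the offset.

```javascript
// How long the buffer plays, per the documented defaults:
// an explicit duration wins; otherwise full length minus the offset.
function effectiveDuration(bufferDuration, offset = 0, duration) {
  return duration !== undefined ? duration : bufferDuration - offset;
}

effectiveDuration(4);        // 4 — whole buffer
effectiveDuration(4, 1);     // 3 — full length minus offset
effectiveDuration(4, 1, 2);  // 2 — explicit duration wins
```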
class ToneEvent
class ToneEvent<ValueType = any> extends ToneWithContext< ToneEventOptions<ValueType>> {}
ToneEvent abstracts away this.context.transport.schedule and provides a schedulable callback for single or repeatable events along the timeline.
Example 1
const synth = new Tone.PolySynth().toDestination(); const chordEvent = new Tone.ToneEvent(((time, chord) => { // the chord as well as the exact time of the event // are passed in as arguments to the callback function synth.triggerAttackRelease(chord, 0.5, time); }), ["D4", "E4", "F4"]); // start the chord at the beginning of the transport timeline chordEvent.start(); // loop it every measure for 8 measures chordEvent.loop = 8; chordEvent.loopEnd = "1m"; Event
constructor
constructor(callback?: ToneEventCallback<ValueType>, value?: {});
Parameter callback
The callback to invoke at the time.
Parameter value
The value or values which should be passed to the callback function on invocation.
constructor
constructor(options?: Partial<ToneEventOptions<ValueType>>);
property callback
callback: ToneEventCallback<ValueType>;
The callback to invoke.
property humanize
humanize: boolean | Time;
If set to true, will apply small random variation to the callback time. If the value is given as a time, it will randomize by that amount.
Example 1
const event = new Tone.ToneEvent(); event.humanize = true;
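The documented randomization can be sketched as (hypothetical helper; the default variation amount when humanize is true is an assumption, not taken from the Tone.js source):

```javascript
// Nudge a scheduled time by a random amount within ±amount seconds.
// humanize === true uses an assumed small default; a Time value sets the range.
function humanizeTime(time, humanize, defaultAmount = 0.02) {
  if (!humanize) return time;
  const amount = humanize === true ? defaultAmount : humanize;
  return time + (Math.random() * 2 - 1) * amount;
}

humanizeTime(1, false);  // 1 — unchanged
humanizeTime(1, 0.1);    // somewhere in [0.9, 1.1]
```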
property loop
loop: number | boolean;
Whether the event should loop between ToneEvent.loopStart and ToneEvent.loopEnd. If set to true, the event loops indefinitely; if set to a number greater than 1, it plays that many times; if set to false, 0 or 1, the event plays only once.
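These documented semantics can be summarized with a small helper (hypothetical, for illustration only):

```javascript
// Number of times the event plays for a given `loop` value:
// true → infinite, a number > 1 → that many plays, false/0/1 → one play.
function loopIterations(loop) {
  if (loop === true) return Infinity;
  if (typeof loop === "number" && loop > 1) return loop;
  return 1; // false, 0 or 1 all play once
}

loopIterations(true);  // Infinity
loopIterations(8);     // 8
loopIterations(false); // 1
loopIterations(0);     // 1
```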
property loopEnd
loopEnd: Time;
The loopEnd point is the time the event will loop if ToneEvent.loop is true.
property loopStart
loopStart: Time;
The time when the loop should start.
property mute
mute: boolean;
If mute is true, the callback won't be invoked.
property name
readonly name: string;
property playbackRate
playbackRate: number;
The playback rate of the event. Defaults to 1.
Example 1
const note = new Tone.ToneEvent(); note.loop = true; // repeat the note twice as fast note.playbackRate = 2;
property probability
probability: number;
The probability of the notes being triggered.
property progress
readonly progress: number;
The current progress of the loop interval. Returns 0 if the event is not started yet or it is not set to loop.
property startOffset
startOffset: number;
The offset from the scheduled start time.
property state
readonly state: BasicPlaybackState;
Returns the playback state of the note, either "started" or "stopped".
property value
value: {};
The value which is passed to the callback function.
method cancel
cancel: (time?: TransportTime | TransportTimeClass) => this;
Cancel all scheduled events greater than or equal to the given time
Parameter time
The time after which events will be canceled.
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => ToneEventOptions<any>;
method start
start: (time?: TransportTime | TransportTimeClass) => this;
Start the note at the given time.
Parameter time
When the event should start.
method stop
stop: (time?: TransportTime | TransportTimeClass) => this;
Stop the Event at the given time.
Parameter time
When the event should stop.
class ToneOscillatorNode
class ToneOscillatorNode extends OneShotSource<ToneOscillatorNodeOptions> {}
Wrapper around the native fire-and-forget OscillatorNode. Adds the ability to reschedule the stop method. ***Oscillator is better for most use-cases*** Source
constructor
constructor(frequency: Frequency, type: OscillatorType);
Parameter frequency
The frequency value
Parameter type
The basic oscillator type
constructor
constructor(options?: Partial<ToneOscillatorNodeOptions>);
property detune
readonly detune: Param<'cents'>;
The detune of the oscillator
property frequency
readonly frequency: Param<'frequency'>;
The frequency of the oscillator
property name
readonly name: string;
property type
type: OscillatorType;
The oscillator type. Either 'sine', 'sawtooth', 'square', or 'triangle'
method dispose
dispose: () => this;
Clean up.
method getDefaults
static getDefaults: () => ToneOscillatorNodeOptions;
method setPeriodicWave
setPeriodicWave: (periodicWave: PeriodicWave) => this;
Sets an arbitrary custom periodic waveform given a PeriodicWave.
Parameter periodicWave
PeriodicWave should be created with context.createPeriodicWave
method start
start: (time?: Time) => this;
Start the oscillator node at the given time
Parameter time
When to start the oscillator
class TransportTimeClass
class TransportTimeClass< Type extends Seconds | Ticks = Seconds> extends TimeClass<Type> {}
TransportTime is a time along the Transport's timeline. It is similar to Tone.Time, but instead of evaluating against the AudioContext's clock, it is evaluated against the Transport's position. See [TransportTime wiki](https://github.com/Tonejs/Tone.js/wiki/TransportTime). Unit
property name
readonly name: string;
class Tremolo
class Tremolo extends StereoEffect<TremoloOptions> {}
Tremolo modulates the amplitude of an incoming signal using an LFO. The effect is a stereo effect where the modulation phase is inverted in each channel.
Example 1
// create a tremolo and start its LFO const tremolo = new Tone.Tremolo(9, 0.75).toDestination().start(); // route an oscillator through the tremolo and start it const oscillator = new Tone.Oscillator().connect(tremolo).start();
Effect
constructor
constructor(frequency?: Frequency, depth?: number);
Parameter frequency
The rate of the effect.
Parameter depth
The depth of the effect.
constructor
constructor(options?: Partial<TremoloOptions>);
property depth
readonly depth: Signal<'normalRange'>;
The depth of the effect. A depth of 0, has no effect on the amplitude, and a depth of 1 makes the amplitude modulate fully between 0 and 1.
property frequency
readonly frequency: Signal<'frequency'>;
The frequency of the tremolo.
property name
readonly name: string;
property spread
spread: number;
Amount of stereo spread. When set to 0, both LFOs are panned centrally. When set to 180, the LFOs are panned hard left and right respectively.
property type
type: ToneOscillatorType;
The oscillator type.
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => TremoloOptions;
method start
start: (time?: Time) => this;
Start the tremolo.
method stop
stop: (time?: Time) => this;
Stop the tremolo.
method sync
sync: () => this;
Sync the effect to the transport.
method unsync
unsync: () => this;
Unsync the effect from the transport.
class UserMedia
class UserMedia extends ToneAudioNode<UserMediaOptions> {}
UserMedia uses MediaDevices.getUserMedia to open up an external microphone or audio input. Check [MediaDevices API Support](https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getUserMedia) to see which browsers are supported. Access to an external input is limited to secure (HTTPS) connections.
Example 1
const meter = new Tone.Meter(); const mic = new Tone.UserMedia().connect(meter); mic.open().then(() => { // promise resolves when input is available console.log("mic open"); // print the incoming mic levels in decibels setInterval(() => console.log(meter.getValue()), 100); }).catch(e => { // promise is rejected when the user doesn't have or allow mic access console.log("mic not open"); }); Source
constructor
constructor(volume?: number);
Parameter volume
The level of the input in decibels
constructor
constructor(options?: Partial<UserMediaOptions>);
property deviceId
readonly deviceId: string;
Returns an identifier for the represented device that is persisted across sessions. It is un-guessable by other applications and unique to the origin of the calling application. It is reset when the user clears cookies (for Private Browsing, a different identifier is used that is not persisted across sessions). Returns undefined when the device is not open.
property groupId
readonly groupId: string;
Returns a group identifier. Two devices have the same group identifier if they belong to the same physical device. Returns null when the device is not open.
property input
readonly input: undefined;
property label
readonly label: string;
Returns a label describing this device (for example "Built-in Microphone"). Returns undefined when the device is not open or label is not available because of permissions.
property mute
mute: boolean;
Mute the output.
Example 1
const mic = new Tone.UserMedia(); mic.open().then(() => { // promise resolves when input is available }); // mute the output mic.mute = true;
property name
readonly name: string;
property output
readonly output: OutputNode;
property state
readonly state: 'started' | 'stopped';
Returns the playback state of the source, "started" when the microphone is open and "stopped" when the mic is closed.
property supported
static readonly supported: boolean;
If getUserMedia is supported by the browser.
property volume
readonly volume: Param<'decibels'>;
The volume of the output in decibels.
method close
close: () => this;
Close the media stream
method dispose
dispose: () => this;
method enumerateDevices
static enumerateDevices: () => Promise<MediaDeviceInfo[]>;
Returns a promise which resolves with the list of available audio input devices.
Example 1
Tone.UserMedia.enumerateDevices().then((devices) => { // print the device labels console.log(devices.map(device => device.label)); });
method getDefaults
static getDefaults: () => UserMediaOptions;
method open
open: (labelOrId?: string | number) => Promise<this>;
Open the media stream. If a string is passed in, it is assumed to be the label or id of the stream, if a number is passed in, it is the input number of the stream.
Parameter labelOrId
The label or id of the audio input media device. With no argument, the default stream is opened. The promise is resolved when the stream is open.
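The documented lookup can be sketched over a list of devices shaped like the result of enumerateDevices (a hypothetical helper, not the Tone.js internals): a string matches a device's label or deviceId, a number picks by index.

```javascript
// Select a media device by label, deviceId, or index;
// with no argument, fall back to the first (default) device.
function pickDevice(devices, labelOrId) {
  if (labelOrId === undefined) return devices[0]; // default stream
  if (typeof labelOrId === "number") return devices[labelOrId];
  return devices.find(
    (d) => d.label === labelOrId || d.deviceId === labelOrId
  );
}

const devices = [
  { deviceId: "abc", label: "Built-in Microphone" },
  { deviceId: "def", label: "USB Audio" },
];
pickDevice(devices, 1).label;                        // "USB Audio"
pickDevice(devices, "Built-in Microphone").deviceId; // "abc"
```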
class Vibrato
class Vibrato extends Effect<VibratoOptions> {}
A Vibrato effect composed of a Tone.Delay and a Tone.LFO. The LFO modulates the delayTime of the delay, causing the pitch to rise and fall. Effect
constructor
constructor(frequency?: Frequency, depth?: number);
Parameter frequency
The frequency of the vibrato.
Parameter depth
The amount the pitch is modulated.
constructor
constructor(options?: Partial<VibratoOptions>);
property depth
readonly depth: Param<'normalRange'>;
The depth of the vibrato.
property frequency
readonly frequency: Signal<'frequency'>;
The frequency of the vibrato
property name
readonly name: string;
property type
type: ToneOscillatorType;
Type of oscillator attached to the Vibrato.
method dispose
dispose: () => this;
method getDefaults
static getDefaults: () => VibratoOptions;
class Volume
class Volume extends ToneAudioNode<VolumeOptions> {}
Volume is a simple volume node, useful for creating a volume fader.
Example 1
const vol = new Tone.Volume(-12).toDestination(); const osc = new Tone.Oscillator().connect(vol).start(); Component
constructor
constructor(volume?: number);
Parameter volume
the initial volume in decibels
constructor
constructor(options?: Partial<VolumeOptions>);
property input
input: Gain<'decibels'>;
Input and output are the same
property mute
mute: boolean;
Mute the output.
Example 1
const vol = new Tone.Volume(-12).toDestination(); const osc = new Tone.Oscillator().connect(vol).start(); // mute the output vol.mute = true;
property name
readonly name: string;
property output
output: Gain<'decibels'>;
the output node
property volume
volume: Param<'decibels'>;
The volume control in decibels.
Example 1
const vol = new Tone.Volume().toDestination(); const osc = new Tone.Oscillator().connect(vol).start(); vol.volume.value = -20;
method dispose
dispose: () => this;
clean up
method getDefaults
static getDefaults: () => VolumeOptions;
class Waveform
class Waveform extends MeterBase<WaveformOptions> {}
Get the current waveform data of the connected audio source. Component
constructor
constructor(size?: number);
Parameter size
The size of the Waveform. Value must be a power of two in the range 16 to 16384.
constructor
constructor(options?: Partial<WaveformOptions>);
property name
readonly name: string;
property size
size: number;
The size of analysis. This must be a power of two in the range 16 to 16384. Determines the size of the array returned by getValue.
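The size constraint can be checked with a small validator (hypothetical helper; Tone.js does not expose this function), using the classic bit trick for powers of two:

```javascript
// True if size is a power of two in the range 16 to 16384 (inclusive).
// n & (n - 1) === 0 holds exactly for positive powers of two.
function isValidAnalyserSize(size) {
  const powerOfTwo =
    Number.isInteger(size) && size > 0 && (size & (size - 1)) === 0;
  return powerOfTwo && size >= 16 && size <= 16384;
}

isValidAnalyserSize(1024); // true
isValidAnalyserSize(1000); // false — not a power of two
isValidAnalyserSize(8);    // false — below the minimum
```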
method getDefaults
static getDefaults: () => WaveformOptions;
method getValue
getValue: () => Float32Array;
Return the waveform for the current time as a Float32Array where each value in the array represents a sample in the waveform.
class WaveShaper
class WaveShaper extends SignalOperator<WaveShaperOptions> {}
Wraps the native Web Audio API [WaveShaperNode](http://webaudio.github.io/web-audio-api/#the-waveshapernode-interface).
Example 1
const osc = new Tone.Oscillator().toDestination().start(); // multiply the output of the signal by 2 using the waveshaper's function const timesTwo = new Tone.WaveShaper((val) => val * 2, 2048).connect(osc.frequency); const signal = new Tone.Signal(440).connect(timesTwo); Signal
constructor
constructor(mapping?: WaveShaperMapping, length?: number);
Parameter mapping
The function used to define the values. The mapping function should take two arguments: the first is the value at the current position and the second is the array position. If the argument is an array, that array will be set as the wave shaping function. The input signal is an AudioRange [-1, 1] value and the output signal can take on any numerical values.
Parameter length
The length of the WaveShaperNode buffer.
constructor
constructor(options?: Partial<WaveShaperOptions>);
property curve
curve: Float32Array;
The array to set as the waveshaper curve. For linear curves array length does not make much difference, but for complex curves longer arrays will provide smoother interpolation.
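How a curve array maps input to output can be sketched as follows (an assumption mirroring the documented setMap behavior, not code from the Tone.js source): index i corresponds to an input value running linearly from -1 to 1.

```javascript
// Build a waveshaper curve from a mapping function.
// The mapping receives the input value (-1 … 1) and the array index.
function buildCurve(mapping, length = 1024) {
  const curve = new Float32Array(length);
  for (let i = 0; i < length; i++) {
    const val = (i / (length - 1)) * 2 - 1; // -1 … 1
    curve[i] = mapping(val, i);
  }
  return curve;
}

// A hard-clip curve: identity, limited to ±0.5
const clip = buildCurve((val) => Math.max(-0.5, Math.min(0.5, val)), 5);
// Float32Array [-0.5, -0.5, 0, 0.5, 0.5]
```

Longer arrays trade memory for smoother interpolation between curve points, which matters for nonlinear shapes but not for linear ones.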
property input
input: WaveShaperNode;
The input to the waveshaper node.
property name
readonly name: string;
property output
output: WaveShaperNode;
The output from the waveshaper node
property oversample
oversample: OverSampleType;
Specifies what type of oversampling (if any) should be used when applying the shaping curve. Can either be "none", "2x" or "4x".
method dispose
dispose: () => this;
Clean up.
method getDefaults
static getDefaults: () => WaveShaperOptions;
method setMap
setMap: (mapping: WaveShaperMappingFn, length?: number) => this;
Uses a mapping function to set the value of the curve.
Parameter mapping
The function used to define the values. The mapping function takes two arguments: the first is the value at the current position, which goes from -1 to 1 over the number of elements in the curve array. The second argument is the array position.
Example 1
const shaper = new Tone.WaveShaper(); // map the input signal from [-1, 1] to [0, 10] shaper.setMap((val, index) => (val + 1) * 5);
class Zero
class Zero extends SignalOperator<ToneAudioNodeOptions> {}
Tone.Zero outputs 0s at audio-rate. The reason this has to be its own class is that many browsers optimize out a Tone.Signal with a value of 0 and will not process nodes further down the graph. Signal
constructor
constructor(options?: Partial<ToneWithContextOptions>);
property input
input: undefined;
no input node
property name
readonly name: string;
property output
output: Gain<'gain'>;
Only outputs 0
method dispose
dispose: () => this;
clean up
Interfaces
interface AMOscillatorOptions
interface AMOscillatorOptions extends ToneOscillatorOptions {}
property harmonicity
harmonicity: Positive;
property modulationType
modulationType: AllNonCustomOscillatorType;
interface AnalyserOptions
interface AnalyserOptions extends ToneAudioNodeOptions {}
interface AutoFilterOptions
interface AutoFilterOptions extends LFOEffectOptions {}
property baseFrequency
baseFrequency: Frequency;
property filter
filter: Omit< FilterOptions, keyof SourceOptions | 'frequency' | 'detune' | 'gain'>;
property octaves
octaves: Positive;
interface AutoPannerOptions
interface AutoPannerOptions extends LFOEffectOptions {}
property channelCount
channelCount: number;
interface AutoWahOptions
interface AutoWahOptions extends EffectOptions {}
property baseFrequency
baseFrequency: Frequency;
property follower
follower: Time;
property gain
gain: GainFactor;
property octaves
octaves: Positive;
property Q
Q: Positive;
property sensitivity
sensitivity: Decibels;
interface BiquadFilterOptions
interface BiquadFilterOptions extends ToneAudioNodeOptions {}
interface BitCrusherOptions
interface BitCrusherOptions extends EffectOptions {}
property bits
bits: Positive;
interface ChannelOptions
interface ChannelOptions extends ToneAudioNodeOptions {}
property channelCount
channelCount: number;
property mute
mute: boolean;
property pan
pan: AudioRange;
property solo
solo: boolean;
property volume
volume: Decibels;
interface ChebyshevOptions
interface ChebyshevOptions extends EffectOptions {}
property order
order: Positive;
property oversample
oversample: OverSampleType;
interface ChorusOptions
interface ChorusOptions extends StereoFeedbackEffectOptions {}
interface CompressorOptions
interface CompressorOptions extends ToneAudioNodeOptions {}
interface ContextOptions
interface ContextOptions {}
property clockSource
clockSource: TickerClockSource;
property context
context: AnyAudioContext;
property latencyHint
latencyHint: ContextLatencyHint;
property lookAhead
lookAhead: Seconds;
property updateInterval
updateInterval: Seconds;
interface ContextTimeoutEvent
interface ContextTimeoutEvent {}
interface ConvolverOptions
interface ConvolverOptions extends ToneAudioNodeOptions {}
interface DelayOptions
interface DelayOptions extends ToneAudioNodeOptions {}
interface DistortionOptions
interface DistortionOptions extends EffectOptions {}
property distortion
distortion: number;
property oversample
oversample: OverSampleType;
interface DuoSynthOptions
interface DuoSynthOptions extends MonophonicOptions {}
property harmonicity
harmonicity: Positive;
property vibratoAmount
vibratoAmount: Positive;
property vibratoRate
vibratoRate: Frequency;
property voice0
voice0: Omit<MonoSynthOptions, keyof MonophonicOptions>;
property voice1
voice1: Omit<MonoSynthOptions, keyof MonophonicOptions>;
interface EmitterEventObject
interface EmitterEventObject {}
index signature
[event: string]: Array<(...args: any[]) => void>;
interface EnvelopeOptions
interface EnvelopeOptions extends ToneAudioNodeOptions {}
property attack
attack: Time;
property attackCurve
attackCurve: EnvelopeCurve;
property decay
decay: Time;
property decayCurve
decayCurve: BasicEnvelopeCurve;
property release
release: Time;
property releaseCurve
releaseCurve: EnvelopeCurve;
property sustain
sustain: NormalRange;
interface FatOscillatorOptions
interface FatOscillatorOptions extends ToneOscillatorOptions {}
interface FeedbackCombFilterOptions
interface FeedbackCombFilterOptions extends ToneAudioNodeOptions {}
interface FFTOptions
interface FFTOptions extends MeterBaseOptions {}
property normalRange
normalRange: boolean;
property size
size: PowerOfTwo;
property smoothing
smoothing: NormalRange;
interface FMOscillatorOptions
interface FMOscillatorOptions extends ToneOscillatorOptions {}
property harmonicity
harmonicity: Positive;
property modulationIndex
modulationIndex: Positive;
property modulationType
modulationType: AllNonCustomOscillatorType;
interface FMSynthOptions
interface FMSynthOptions extends ModulationSynthOptions {}
property modulationIndex
modulationIndex: Positive;
interface FollowerOptions
interface FollowerOptions extends ToneAudioNodeOptions {}
property smoothing
smoothing: Time;
interface FreeverbOptions
interface FreeverbOptions extends StereoEffectOptions {}
interface FrequencyEnvelopeOptions
interface FrequencyEnvelopeOptions extends EnvelopeOptions {}
property baseFrequency
baseFrequency: Frequency;
property exponent
exponent: number;
property octaves
octaves: number;
interface GateOptions
interface GateOptions extends ToneAudioNodeOptions {}
interface IntervalTimelineEvent
interface IntervalTimelineEvent {}
An IntervalTimeline event must have a time and duration
property duration
duration: number;
property time
time: number;
index signature
[propName: string]: any;
interface JCReverbOptions
interface JCReverbOptions extends StereoEffectOptions {}
property roomSize
roomSize: NormalRange;
interface LimiterOptions
interface LimiterOptions extends ToneAudioNodeOptions {}
property threshold
threshold: Decibels;
interface LoopOptions
interface LoopOptions extends ToneWithContextOptions {}
property callback
callback: (time: Seconds) => void;
property humanize
humanize: boolean | Time;
property interval
interval: Time;
property iterations
iterations: number;
property mute
mute: boolean;
property playbackRate
playbackRate: Positive;
property probability
probability: NormalRange;
interface MembraneSynthOptions
interface MembraneSynthOptions extends SynthOptions {}
property octaves
octaves: Positive;
property pitchDecay
pitchDecay: Time;
interface MetalSynthOptions
interface MetalSynthOptions extends MonophonicOptions {}
property envelope
envelope: Omit<EnvelopeOptions, keyof ToneAudioNodeOptions>;
property harmonicity
harmonicity: Positive;
property modulationIndex
modulationIndex: Positive;
property octaves
octaves: number;
property resonance
resonance: Frequency;
interface MeterOptions
interface MeterOptions extends MeterBaseOptions {}
property channelCount
channelCount: number;
property normalRange
normalRange: boolean;
property smoothing
smoothing: NormalRange;
interface MidSideCompressorOptions
interface MidSideCompressorOptions extends ToneAudioNodeOptions {}
interface MonoSynthOptions
interface MonoSynthOptions extends MonophonicOptions {}
property envelope
envelope: Omit<EnvelopeOptions, keyof ToneAudioNodeOptions>;
property filter
filter: Omit<FilterOptions, keyof ToneAudioNodeOptions>;
property filterEnvelope
filterEnvelope: Omit<FrequencyEnvelopeOptions, keyof ToneAudioNodeOptions>;
property oscillator
oscillator: OmniOscillatorSynthOptions;
interface MultibandCompressorOptions
interface MultibandCompressorOptions extends ToneAudioNodeOptions {}
property high
high: Omit<CompressorOptions, keyof ToneAudioNodeOptions>;
property highFrequency
highFrequency: Frequency;
property low
low: Omit<CompressorOptions, keyof ToneAudioNodeOptions>;
property lowFrequency
lowFrequency: Frequency;
property mid
mid: Omit<CompressorOptions, keyof ToneAudioNodeOptions>;
interface NoiseOptions
interface NoiseOptions extends SourceOptions {}
property fadeIn
fadeIn: Time;
property fadeOut
fadeOut: Time;
property playbackRate
playbackRate: Positive;
property type
type: NoiseType;
interface NoiseSynthOptions
interface NoiseSynthOptions extends InstrumentOptions {}
interface OnePoleFilterOptions
interface OnePoleFilterOptions extends ToneAudioNodeOptions {}
interface Panner3DOptions
interface Panner3DOptions extends ToneAudioNodeOptions {}
property coneInnerAngle
coneInnerAngle: Degrees;
property coneOuterAngle
coneOuterAngle: Degrees;
property coneOuterGain
coneOuterGain: GainFactor;
property distanceModel
distanceModel: DistanceModelType;
property maxDistance
maxDistance: number;
property orientationX
orientationX: number;
property orientationY
orientationY: number;
property orientationZ
orientationZ: number;
property panningModel
panningModel: PanningModelType;
property positionX
positionX: number;
property positionY
positionY: number;
property positionZ
positionZ: number;
property refDistance
refDistance: number;
property rolloffFactor
rolloffFactor: number;
interface PanVolOptions
interface PanVolOptions extends ToneAudioNodeOptions {}
property channelCount
channelCount: number;
property mute
mute: boolean;
property pan
pan: AudioRange;
property volume
volume: Decibels;
interface ParamOptions
interface ParamOptions<TypeName extends UnitName> extends ToneWithContextOptions {}
interface PatternOptions
interface PatternOptions<ValueType> extends LoopOptions {}
interface PhaserOptions
interface PhaserOptions extends StereoEffectOptions {}
property baseFrequency
baseFrequency: Frequency;
property frequency
frequency: Frequency;
property octaves
octaves: Positive;
property Q
Q: Positive;
property stages
stages: Positive;
interface PingPongDelayOptions
interface PingPongDelayOptions extends StereoXFeedbackEffectOptions {}
interface PitchShiftOptions
interface PitchShiftOptions extends FeedbackEffectOptions {}
property delayTime
delayTime: Time;
property pitch
pitch: Interval;
property windowSize
windowSize: Seconds;
interface PlayerOptions
interface PlayerOptions extends SourceOptions {}
property autostart
autostart: boolean;
property fadeIn
fadeIn: Time;
property fadeOut
fadeOut: Time;
property loop
loop: boolean;
property loopEnd
loopEnd: Time;
property loopStart
loopStart: Time;
property onerror
onerror: (error: Error) => void;
property onload
onload: () => void;
property playbackRate
playbackRate: Positive;
property reverse
reverse: boolean;
property url
url?: ToneAudioBuffer | string | AudioBuffer;
interface PlayersOptions
interface PlayersOptions extends SourceOptions {}
interface PluckSynthOptions
interface PluckSynthOptions extends InstrumentOptions {}
property attackNoise
attackNoise: number;
property dampening
dampening: Frequency;
property release
release: Time;
property resonance
resonance: NormalRange;
interface PolySynthOptions
interface PolySynthOptions<Voice> extends InstrumentOptions {}
property maxPolyphony
maxPolyphony: number;
property options
options: PartialVoiceOptions<Voice>;
property voice
voice: VoiceConstructor<Voice>;
interface PowOptions
interface PowOptions extends ToneAudioNodeOptions {}
property value
value: number;
interface PulseOscillatorOptions
interface PulseOscillatorOptions extends BaseOscillatorOptions {}
Pulse Oscillator
interface PWMOscillatorOptions
interface PWMOscillatorOptions extends BaseOscillatorOptions {}
PWM Oscillator
property modulationFrequency
modulationFrequency: Frequency;
property type
type: 'pwm';
interface RecorderOptions
interface RecorderOptions extends ToneAudioNodeOptions {}
property mimeType
mimeType?: string;
interface SamplerOptions
interface SamplerOptions extends InstrumentOptions {}
interface ScaleExpOptions
interface ScaleExpOptions extends ScaleOptions {}
property exponent
exponent: Positive;
interface ScaleOptions
interface ScaleOptions extends ToneAudioNodeOptions {}
interface SignalOptions
interface SignalOptions<TypeName extends UnitName> extends ToneAudioNodeOptions {}
interface SoloOptions
interface SoloOptions extends ToneAudioNodeOptions {}
property solo
solo: boolean;
interface StateTimelineEvent
interface StateTimelineEvent extends TimelineEvent {}
property state
state: PlaybackState;
interface StereoWidenerOptions
interface StereoWidenerOptions extends MidSideEffectOptions {}
property width
width: NormalRange;
interface SynthOptions
interface SynthOptions extends MonophonicOptions {}
property envelope
envelope: Omit<EnvelopeOptions, keyof ToneAudioNodeOptions>;
property oscillator
oscillator: OmniOscillatorSynthOptions;
interface TimelineEvent
interface TimelineEvent {}
An event must have a numeric time
property time
time: number;
interface ToneAudioBuffersUrlMap
interface ToneAudioBuffersUrlMap {}
index signature
[name: string]: string | AudioBuffer | ToneAudioBuffer;
index signature
[name: number]: string | AudioBuffer | ToneAudioBuffer;
interface ToneBufferSourceOptions
interface ToneBufferSourceOptions extends OneShotSourceOptions {}
property curve
curve: ToneBufferSourceCurve;
property fadeIn
fadeIn: Time;
property fadeOut
fadeOut: Time;
property loop
loop: boolean;
property loopEnd
loopEnd: Time;
property loopStart
loopStart: Time;
property onerror
onerror: (error: Error) => void;
property onload
onload: () => void;
property playbackRate
playbackRate: Positive;
property url
url: string | AudioBuffer | ToneAudioBuffer;
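A partial ToneBufferSourceOptions literal using the fields listed above; the values and file name are illustrative only.

```typescript
// Illustrative options for a looping buffer source.
const bufferSourceOpts = {
  url: "loop.wav", // hypothetical file
  loop: true,
  loopStart: 0.5, // seconds into the buffer
  loopEnd: 2.5,
  playbackRate: 1.25,
  fadeIn: 0.01,
  fadeOut: 0.05,
  onerror: (error: Error) => console.error(error),
};

// Browser usage sketch:
// const source = new Tone.ToneBufferSource(bufferSourceOpts).toDestination();
```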
interface ToneEventOptions
interface ToneEventOptions<T> extends ToneWithContextOptions {}
property callback
callback: ToneEventCallback<T>;
property humanize
humanize: boolean | Time;
property loop
loop: boolean | number;
property loopEnd
loopEnd: Time;
property loopStart
loopStart: Time;
property mute
mute: boolean;
property playbackRate
playbackRate: Positive;
property probability
probability: NormalRange;
property value
value?: T;
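A partial ToneEventOptions literal for a looping, slightly humanized event. Values are illustrative; per the signature above, the callback receives the scheduled time and the event's value.

```typescript
// Illustrative options for a repeating event.
const eventOpts = {
  callback: (time: number, value: string) => {
    // trigger something at `time` with `value`, e.g. a synth note
  },
  value: "C4",
  loop: 4, // repeat 4 times (true would loop indefinitely)
  loopEnd: "1m", // loop length of one measure
  probability: 0.9, // 90% chance each iteration actually fires
  humanize: 0.02, // jitter the timing by a small random amount
  playbackRate: 1,
  mute: false,
};

// Browser usage sketch:
// const event = new Tone.ToneEvent(eventOpts).start(0);
```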
interface ToneOscillatorNodeOptions
interface ToneOscillatorNodeOptions extends OneShotSourceOptions {}
interface ToneOscillatorOptions
interface ToneOscillatorOptions extends BaseOscillatorOptions {}
property partialCount
partialCount: number;
property partials
partials: number[];
property type
type: ToneOscillatorType;
interface TremoloOptions
interface TremoloOptions extends StereoEffectOptions {}
interface UserMediaOptions
interface UserMediaOptions extends ToneAudioNodeOptions {}
interface VibratoOptions
interface VibratoOptions extends EffectOptions {}
interface WaveformOptions
interface WaveformOptions extends MeterBaseOptions {}
property size
size: PowerOfTwo;
The size of the Waveform. Value must be a power of two in the range 16 to 16384.
Type Aliases
type AMSynthOptions
type AMSynthOptions = ModulationSynthOptions;
type AnalyserType
type AnalyserType = 'fft' | 'waveform';
type AutomationEvent
type AutomationEvent = NormalAutomationEvent | TargetAutomationEvent;
The kinds of events that can appear on the automation timeline.
type BaseAudioContextSubset
type BaseAudioContextSubset = Omit<BaseAudioContext, ExcludedFromBaseAudioContext>;
type BasicPlaybackState
type BasicPlaybackState = 'started' | 'stopped';
type ContextLatencyHint
type ContextLatencyHint = AudioContextLatencyCategory;
type DCMeterOptions
type DCMeterOptions = MeterBaseOptions;
type EnvelopeCurve
type EnvelopeCurve = EnvelopeCurveName | number[];
type ExcludedFromBaseAudioContext
type ExcludedFromBaseAudioContext = | 'onstatechange' | 'addEventListener' | 'removeEventListener' | 'listener' | 'dispatchEvent' | 'audioWorklet' | 'destination' | 'createScriptProcessor';
type FilterOptions
type FilterOptions = BiquadFilterOptions & { rolloff: FilterRollOff;};
type FilterRollOff
type FilterRollOff = -12 | -24 | -48 | -96;
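Each biquad filter stage contributes roughly -12 dB/octave, so the steeper rolloffs in this union can be realized by cascading stages. This is a sketch of the idea, not Tone's implementation.

```typescript
type FilterRollOff = -12 | -24 | -48 | -96;

// Number of -12 dB/octave biquad stages needed for a given rolloff.
function biquadStages(rolloff: FilterRollOff): number {
  return rolloff / -12; // -12 -> 1 stage, -96 -> 8 stages
}
```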
type FrequencyUnit
type FrequencyUnit = TimeBaseUnit | 'midi';
type GreaterThanOptions
type GreaterThanOptions = SignalOptions<'number'>;
type GreaterThanZeroOptions
type GreaterThanZeroOptions = SignalOperatorOptions;
type InputNode
type InputNode = ToneAudioNode | AudioNode | Param<any> | AudioParam;
type LFOOptions
type LFOOptions = { min: number; max: number; amplitude: NormalRange; units: UnitName;} & ToneOscillatorOptions;
type MidSideMergeOptions
type MidSideMergeOptions = ToneAudioNodeOptions;
type MidSideSplitOptions
type MidSideSplitOptions = ToneAudioNodeOptions;
type MonoOptions
type MonoOptions = ToneAudioNodeOptions;
type NoiseType
type NoiseType = 'white' | 'brown' | 'pink';
type OmniOscillatorOptions
type OmniOscillatorOptions = | PulseOscillatorOptions | PWMOscillatorOptions | OmniFatCustomOscillatorOptions | OmniFatTypeOscillatorOptions | OmniFatPartialsOscillatorOptions | OmniFMCustomOscillatorOptions | OmniFMTypeOscillatorOptions | OmniFMPartialsOscillatorOptions | OmniAMCustomOscillatorOptions | OmniAMTypeOscillatorOptions | OmniAMPartialsOscillatorOptions | ToneOscillatorConstructorOptions;
type OmniOscSourceType
type OmniOscSourceType = keyof OmniOscillatorSource;
The available oscillator types.
type OnePoleFilterType
type OnePoleFilterType = 'highpass' | 'lowpass';
type OutputNode
type OutputNode = ToneAudioNode | AudioNode;
type PlaybackState
type PlaybackState = BasicPlaybackState | 'paused';
type ToneAudioNodeOptions
type ToneAudioNodeOptions = ToneWithContextOptions;
The possible options for this node
type ToneBufferSourceCurve
type ToneBufferSourceCurve = OneShotSourceCurve;
type ToneEventCallback
type ToneEventCallback<T> = (time: Seconds, value: T) => void;
type ToneOscillatorType
type ToneOscillatorType = AllNonCustomOscillatorType | 'custom';
type WaveShaperMappingFn
type WaveShaperMappingFn = (value: number, index?: number) => number;
Namespaces
namespace debug
module 'build/esm/core/util/Debug.d.ts' {}
function assert
assert: (statement: boolean, error: string) => asserts statement;
Assert that the statement is true, otherwise invoke the error.
Parameter statement
Parameter error
The message which is passed into an Error
function assertContextRunning
assertContextRunning: (context: BaseContext) => void;
Warn if the context is not running.
function assertRange
assertRange: (value: number, gte: number, lte?: number) => void;
Make sure that the given value is within the range [gte, lte].
function assertUsedScheduleTime
assertUsedScheduleTime: (time?: Time) => void;
Make sure that a time argument was passed into the scheduled callback.
function enterScheduledCallback
enterScheduledCallback: (insideCallback: boolean) => void;
Notify that the following block of code is occurring inside a Transport callback.
function log
log: (...args: any[]) => void;
Log anything
function setLogger
setLogger: (logger: Logger) => void;
Set the logging interface
function warn
warn: (...args: any[]) => void;
Warn anything
namespace Unit
module 'build/esm/core/type/Units.d.ts' {}
A number representing a time in seconds.
interface UnitMap
interface UnitMap {}
Map the unit name to a unit value
property audioRange
audioRange: AudioRange;
property bpm
bpm: BPM;
property cents
cents: Cents;
property decibels
decibels: Decibels;
property degrees
degrees: Degrees;
property frequency
frequency: Frequency;
property gain
gain: GainFactor;
property hertz
hertz: Hertz;
property normalRange
normalRange: NormalRange;
property number
number: number;
property positive
positive: Positive;
property radians
radians: Radians;
property samples
samples: Samples;
property ticks
ticks: Ticks;
property time
time: Time;
property transportTime
transportTime: TransportTime;
type AudioRange
type AudioRange = number;
A number that is between [-1, 1].
type BarsBeatsSixteenths
type BarsBeatsSixteenths = `${number}:${number}:${number}`;
A colon-separated representation of time in the form Bars:Beats:Sixteenths.
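The Bars:Beats:Sixteenths encoding can be illustrated with a toy conversion to seconds, assuming a 4/4 time signature (the real conversion is relative to the Transport's tempo and time-signature state).

```typescript
// "bars:beats:sixteenths" -> seconds, assuming 4/4 time.
function bbsToSeconds(bbs: string, bpm: number): number {
  const [bars, beats, sixteenths] = bbs.split(":").map(Number);
  const quarter = 60 / bpm; // one beat (quarter note) in seconds
  return bars * 4 * quarter + beats * quarter + sixteenths * (quarter / 4);
}

bbsToSeconds("1:0:0", 120); // 2    (one bar)
bbsToSeconds("0:1:0", 120); // 0.5  (one quarter-note beat)
bbsToSeconds("0:0:2", 120); // 0.25 (two sixteenths)
```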
type BPM
type BPM = number;
Beats per minute.
type Cents
type Cents = number;
A Cent is 1/100th of a semitone; e.g. a value of 50 cents is halfway between two adjacent semitones.
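The standard relation between cents and frequency ratio is 2^(cents/1200); this is a local sketch of that formula, not a Tone export.

```typescript
// Cents -> frequency ratio: 1200 cents per octave.
function centsToRatio(cents: number): number {
  return Math.pow(2, cents / 1200);
}

centsToRatio(0); // 1 (unison)
centsToRatio(100); // ≈1.0595 (one semitone)
centsToRatio(1200); // 2 (one octave)
```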
type Decibels
type Decibels = number;
A number used to measure the intensity of a sound on a logarithmic scale.
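The amplitude/decibel relation behind Tone's dbToGain() and gainToDb() helpers follows the standard 20·log10 rule; these are local re-implementations for illustration.

```typescript
// Decibels -> linear gain factor.
function dbToGain(db: number): number {
  return Math.pow(10, db / 20);
}

// Linear gain factor -> decibels.
function gainToDb(gain: number): number {
  return 20 * Math.log10(gain);
}

dbToGain(0); // 1 (unity gain)
dbToGain(-6); // ≈0.5
gainToDb(0.5); // ≈ -6.02
```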
type Degrees
type Degrees = number;
Angle between 0 and 360.
type Frequency
type Frequency = Subdivision | Note | string | Hertz;
Frequency can be described similarly to time, except that ultimately the values are converted to frequency instead of seconds. A number is taken literally as the value in hertz. Additionally, any of the Time encodings can be used. Note names in the form NOTE OCTAVE (i.e. C4) are also accepted and converted to their frequency value.
type GainFactor
type GainFactor = number;
A number representing the multiplication factor applied to a signal.
type Hertz
type Hertz = number;
Hertz is a frequency unit defined as one cycle per second.
type Interval
type Interval = number;
Half-step note increments, i.e. 12 is an octave above the root, and 1 is a half-step up.
type MidiNote
type MidiNote = IntegerRange<128>;
A number representing a MIDI note; an integer between 0 and 127.
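MIDI note numbers relate to frequency by the standard equal-temperament formula (A4 = MIDI 69 = 440 Hz), which is the behavior of Tone's mtof() and ftom() exports; the functions below are local re-derivations.

```typescript
const A4_MIDI = 69;
const A4_HZ = 440;

// MIDI note number -> frequency in hertz.
function mtof(midi: number): number {
  return A4_HZ * Math.pow(2, (midi - A4_MIDI) / 12);
}

// Frequency in hertz -> (fractional) MIDI note number.
function ftom(hertz: number): number {
  return A4_MIDI + 12 * Math.log2(hertz / A4_HZ);
}

mtof(69); // 440 (A4)
mtof(60); // ≈261.63 (middle C)
```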
type Milliseconds
type Milliseconds = number;
One millisecond is a thousandth of a second.
type NormalRange
type NormalRange = number;
A number that is between [0, 1].
type Note
type Note = `${Letter}${Accidental}${Octave}`;
A note in Scientific pitch notation: the pitch class plus octave number, e.g. "C4", "D#3", "G-1".
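A sketch of parsing this notation into a MIDI note number, assuming middle C ("C4") maps to MIDI 60 and that accidentals follow the `#`/`b`/`x` convention; the parser is illustrative, not Tone's implementation.

```typescript
// Semitone offset of each pitch letter within an octave.
const SEMITONES: Record<string, number> = {
  C: 0, D: 2, E: 4, F: 5, G: 7, A: 9, B: 11,
};

// "C4" -> 60, "D#3" -> 51, "G-1" -> 7 (C-1 is MIDI 0).
function noteToMidi(note: string): number {
  const m = note.match(/^([A-Ga-g])(#{1,2}|b{1,2}|x)?(-?\d+)$/);
  if (!m) throw new Error(`Not a note: ${note}`);
  const [, letter, accidental = "", octave] = m;
  const shift =
    accidental === "x" ? 2 : // double sharp
    accidental[0] === "#" ? accidental.length :
    -accidental.length; // flats lower the pitch; "" gives 0
  return (parseInt(octave, 10) + 1) * 12 + SEMITONES[letter.toUpperCase()] + shift;
}
```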
type Positive
type Positive = number;
A number greater than or equal to 0.
type PowerOfTwo
type PowerOfTwo = number;
A value which is a power of 2.
type Radians
type Radians = number;
Angle between 0 and 2 * PI.
type Samples
type Samples = number;
Sampling is the reduction of a continuous signal to a discrete signal. Audio is typically sampled 44100 times per second.
type Seconds
type Seconds = number;
A number representing a time in seconds.
type Subdivision
type Subdivision = | '1m' | '1n' | '1n.' | `${2 | 4 | 8 | 16 | 32 | 64 | 128 | 256}${'n' | 'n.' | 't'}` | '0';
Represents a subdivision of a measure. The number gives the subdivision; "t" denotes a triplet and "." adds a half. E.g. "4n" is a quarter note, "4t" is a quarter-note triplet, and "4n." is a dotted quarter note.
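A toy conversion from a Subdivision string to seconds, assuming a 4/4 time signature; the parsing is illustrative and does not cover every member of the union above.

```typescript
// "4n" -> quarter note, "4n." -> dotted quarter, "4t" -> quarter triplet.
function subdivisionToSeconds(sub: string, bpm: number): number {
  if (sub === "0") return 0;
  const quarter = 60 / bpm;
  if (sub === "1m") return quarter * 4; // one measure in 4/4
  const m = sub.match(/^(\d+)(n\.|n|t)$/);
  if (!m) throw new Error(`Not a subdivision: ${sub}`);
  const base = (quarter * 4) / parseInt(m[1], 10); // fraction of a whole note
  if (m[2] === "n") return base;
  if (m[2] === "n.") return base * 1.5; // dotted: add half again
  return base * (2 / 3); // triplet: three in the space of two
}

subdivisionToSeconds("4n", 120); // 0.5  (quarter note at 120 BPM)
subdivisionToSeconds("4n.", 120); // 0.75 (dotted quarter)
subdivisionToSeconds("4t", 120); // ≈0.333 (quarter triplet)
```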
type Ticks
type Ticks = number;
Ticks are the basic subunit of the Transport. They are the smallest unit of time that the Transport supports.
type Time
type Time = string | Seconds | TimeObject | Subdivision;
Time can be described in a number of ways. Read more: [Time](https://github.com/Tonejs/Tone.js/wiki/Time).
- Numbers are taken literally as the time in seconds.
- Notation ("4n", "8t") describes time in BPM- and time-signature-relative values.
- TransportTime ("4:3:2") also provides tempo- and time-signature-relative times, in the form BARS:QUARTERS:SIXTEENTHS.
- Frequency ("8hz") is converted to the length of one cycle in seconds.
- Now-Relative ("+1"): prefix any of the above with "+" and it is interpreted as "the current time plus whatever expression follows".
- Object ({"4n": 3, "8t": -1}): the resulting time equals the sum of each key's duration multiplied by its value.
- No argument: for methods which accept time, no argument is interpreted as "now" (i.e. the currentTime).
type TimeObject
type TimeObject = { [sub in Subdivision]?: number;};
A time object has a subdivision as the keys and a number as the values.
Example 1
Tone.Time({ "2n": 1, "8n": 3 }).valueOf(); // 2n + 8n * 3
type TimeSignature
type TimeSignature = number | number[];
type TransportTime
type TransportTime = Time;
TransportTime describes a position along the Transport's timeline. It is similar to Time in that it uses all the same encodings, but TransportTime specifically pertains to the Transport's timeline, which is startable, stoppable, loopable, and seekable. [Read more](https://github.com/Tonejs/Tone.js/wiki/TransportTime)
type Unit
type Unit = UnitMap[keyof UnitMap];
All of the unit types.
type UnitName
type UnitName = keyof UnitMap;
All of the unit names.
Package Files (129)
- build/esm/component/analysis/Analyser.d.ts
- build/esm/component/analysis/DCMeter.d.ts
- build/esm/component/analysis/FFT.d.ts
- build/esm/component/analysis/Follower.d.ts
- build/esm/component/analysis/Meter.d.ts
- build/esm/component/analysis/Waveform.d.ts
- build/esm/component/channel/Channel.d.ts
- build/esm/component/channel/CrossFade.d.ts
- build/esm/component/channel/Merge.d.ts
- build/esm/component/channel/MidSideMerge.d.ts
- build/esm/component/channel/MidSideSplit.d.ts
- build/esm/component/channel/Mono.d.ts
- build/esm/component/channel/MultibandSplit.d.ts
- build/esm/component/channel/PanVol.d.ts
- build/esm/component/channel/Panner.d.ts
- build/esm/component/channel/Panner3D.d.ts
- build/esm/component/channel/Recorder.d.ts
- build/esm/component/channel/Solo.d.ts
- build/esm/component/channel/Split.d.ts
- build/esm/component/channel/Volume.d.ts
- build/esm/component/dynamics/Compressor.d.ts
- build/esm/component/dynamics/Gate.d.ts
- build/esm/component/dynamics/Limiter.d.ts
- build/esm/component/dynamics/MidSideCompressor.d.ts
- build/esm/component/dynamics/MultibandCompressor.d.ts
- build/esm/component/envelope/AmplitudeEnvelope.d.ts
- build/esm/component/envelope/Envelope.d.ts
- build/esm/component/envelope/FrequencyEnvelope.d.ts
- build/esm/component/filter/BiquadFilter.d.ts
- build/esm/component/filter/Convolver.d.ts
- build/esm/component/filter/EQ3.d.ts
- build/esm/component/filter/FeedbackCombFilter.d.ts
- build/esm/component/filter/Filter.d.ts
- build/esm/component/filter/LowpassCombFilter.d.ts
- build/esm/component/filter/OnePoleFilter.d.ts
- build/esm/core/Global.d.ts
- build/esm/core/clock/Clock.d.ts
- build/esm/core/context/BaseContext.d.ts
- build/esm/core/context/Context.d.ts
- build/esm/core/context/Delay.d.ts
- build/esm/core/context/Gain.d.ts
- build/esm/core/context/Offline.d.ts
- build/esm/core/context/OfflineContext.d.ts
- build/esm/core/context/Param.d.ts
- build/esm/core/context/ToneAudioBuffer.d.ts
- build/esm/core/context/ToneAudioBuffers.d.ts
- build/esm/core/context/ToneAudioNode.d.ts
- build/esm/core/type/Conversions.d.ts
- build/esm/core/type/Frequency.d.ts
- build/esm/core/type/Midi.d.ts
- build/esm/core/type/NoteUnits.d.ts
- build/esm/core/type/Ticks.d.ts
- build/esm/core/type/Time.d.ts
- build/esm/core/type/TransportTime.d.ts
- build/esm/core/type/Units.d.ts
- build/esm/core/util/Debug.d.ts
- build/esm/core/util/Emitter.d.ts
- build/esm/core/util/IntervalTimeline.d.ts
- build/esm/core/util/StateTimeline.d.ts
- build/esm/core/util/Timeline.d.ts
- build/esm/core/util/TypeCheck.d.ts
- build/esm/effect/AutoFilter.d.ts
- build/esm/effect/AutoPanner.d.ts
- build/esm/effect/AutoWah.d.ts
- build/esm/effect/BitCrusher.d.ts
- build/esm/effect/Chebyshev.d.ts
- build/esm/effect/Chorus.d.ts
- build/esm/effect/Distortion.d.ts
- build/esm/effect/FeedbackDelay.d.ts
- build/esm/effect/Freeverb.d.ts
- build/esm/effect/FrequencyShifter.d.ts
- build/esm/effect/JCReverb.d.ts
- build/esm/effect/Phaser.d.ts
- build/esm/effect/PingPongDelay.d.ts
- build/esm/effect/PitchShift.d.ts
- build/esm/effect/Reverb.d.ts
- build/esm/effect/StereoWidener.d.ts
- build/esm/effect/Tremolo.d.ts
- build/esm/effect/Vibrato.d.ts
- build/esm/event/Loop.d.ts
- build/esm/event/Part.d.ts
- build/esm/event/Pattern.d.ts
- build/esm/event/Sequence.d.ts
- build/esm/event/ToneEvent.d.ts
- build/esm/index.d.ts
- build/esm/instrument/AMSynth.d.ts
- build/esm/instrument/DuoSynth.d.ts
- build/esm/instrument/FMSynth.d.ts
- build/esm/instrument/MembraneSynth.d.ts
- build/esm/instrument/MetalSynth.d.ts
- build/esm/instrument/MonoSynth.d.ts
- build/esm/instrument/NoiseSynth.d.ts
- build/esm/instrument/PluckSynth.d.ts
- build/esm/instrument/PolySynth.d.ts
- build/esm/instrument/Sampler.d.ts
- build/esm/instrument/Synth.d.ts
- build/esm/signal/Abs.d.ts
- build/esm/signal/Add.d.ts
- build/esm/signal/AudioToGain.d.ts
- build/esm/signal/GainToAudio.d.ts
- build/esm/signal/GreaterThan.d.ts
- build/esm/signal/GreaterThanZero.d.ts
- build/esm/signal/Multiply.d.ts
- build/esm/signal/Negate.d.ts
- build/esm/signal/Pow.d.ts
- build/esm/signal/Scale.d.ts
- build/esm/signal/ScaleExp.d.ts
- build/esm/signal/Signal.d.ts
- build/esm/signal/Subtract.d.ts
- build/esm/signal/SyncedSignal.d.ts
- build/esm/signal/WaveShaper.d.ts
- build/esm/signal/Zero.d.ts
- build/esm/source/Noise.d.ts
- build/esm/source/UserMedia.d.ts
- build/esm/source/buffer/GrainPlayer.d.ts
- build/esm/source/buffer/Player.d.ts
- build/esm/source/buffer/Players.d.ts
- build/esm/source/buffer/ToneBufferSource.d.ts
- build/esm/source/oscillator/AMOscillator.d.ts
- build/esm/source/oscillator/FMOscillator.d.ts
- build/esm/source/oscillator/FatOscillator.d.ts
- build/esm/source/oscillator/LFO.d.ts
- build/esm/source/oscillator/OmniOscillator.d.ts
- build/esm/source/oscillator/Oscillator.d.ts
- build/esm/source/oscillator/OscillatorInterface.d.ts
- build/esm/source/oscillator/PWMOscillator.d.ts
- build/esm/source/oscillator/PulseOscillator.d.ts
- build/esm/source/oscillator/ToneOscillatorNode.d.ts
- build/esm/version.d.ts
Dependencies (2)
Dev Dependencies (43)
- @rollup/plugin-commonjs
- @types/chai
- @types/mocha
- @types/ua-parser-js
- @typescript-eslint/eslint-plugin
- @typescript-eslint/parser
- @web/dev-server-esbuild
- @web/dev-server-rollup
- @web/test-runner
- @web/test-runner-puppeteer
- array2d
- audiobuffer-to-wav
- chai
- codecov
- concurrently
- eslint
- eslint-plugin-file-extension-in-import-ts
- eslint-plugin-html
- eslint-plugin-jsdoc
- fft-windowing
- fourier-transform
- fs-extra
- glob
- html-webpack-plugin
- http-server
- jsdom
- mocha
- plotly.js-dist
- prettier
- rimraf
- semver
- showdown
- teoria
- tmp-promise
- tonal
- ts-loader
- typedoc
- typescript
- ua-parser-js
- webpack
- webpack-cli
- yargs
- zx
Peer Dependencies (0)
No peer dependencies.
Badge
To add a badge like this one to your package's README, use the code available below.
You may also use Shields.io to create a custom badge linking to https://www.jsdocs.io/package/tone.
- Markdown[![jsDocs.io](https://img.shields.io/badge/jsDocs.io-reference-blue)](https://www.jsdocs.io/package/tone)
- HTML<a href="https://www.jsdocs.io/package/tone"><img src="https://img.shields.io/badge/jsDocs.io-reference-blue" alt="jsDocs.io"></a>