UnityEngine.AudioModule An implementation of IPlayable that controls an AudioClip. Creates an AudioClipPlayable in the PlayableGraph. The PlayableGraph that will contain the new AudioClipPlayable. The AudioClip that will be added to the PlayableGraph. True if the clip should loop, false otherwise. An AudioClipPlayable linked to the PlayableGraph. AudioMixer asset. Routing target. How time should progress for this AudioMixer. Used during Snapshot transitions. Resets an exposed parameter to its initial value. Exposed parameter. Returns false if the parameter was not found or could not be set. Connected groups in the mixer form a path from the mixer's master group to the leaves. This path has the format Master Group/Child of Master Group/Grandchild of Master Group, and so on. For example, in the hierarchy below, the group DROPS has the path Master/WATER/DROPS. To return only the group called DROPS, enter DROPS. The substring Master/AMBIENCE returns three groups: AMBIENCE/CROWD, AMBIENCE/ROAD, and AMBIENCE. The substring R would return both ROAD and RIVER. Path sub-strings to match with. Groups in the mixer whose paths match the specified search path. The name must be an exact match. Name of snapshot object to be returned. The snapshot identified by the name. Returns the value of the exposed parameter specified. If the parameter doesn't exist, the function returns false. Prior to calling SetFloat, and after ClearFloat has been called on this parameter, the value returned will be that of the current snapshot or snapshot transition. Name of exposed parameter. Return value of exposed parameter. Returns false if the exposed parameter specified doesn't exist. AudioMixer.SetFloat sets the value of the exposed parameter specified. Once you call this function, mixer snapshots will no longer control the exposed parameter, and you can only modify the parameter using AudioMixer.SetFloat. The name of an exposed Audio Mixer group parameter. 
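The exposed-parameter workflow described above (SetFloat taking control of a parameter away from snapshots until ClearFloat is called) can be sketched as follows; the mixer reference and the parameter name "MusicVolume" are illustrative assumptions, not part of the API:

```csharp
using UnityEngine;
using UnityEngine.Audio;

public class MixerParameterExample : MonoBehaviour
{
    // Assign an AudioMixer asset with an exposed "MusicVolume" parameter
    // (hypothetical name) in the Inspector.
    public AudioMixer mixer;

    void Start()
    {
        // Overrides snapshot control of the parameter; returns false if the
        // parameter was not found or snapshots are currently being edited.
        if (!mixer.SetFloat("MusicVolume", -20f))
            Debug.LogWarning("Exposed parameter not found.");

        // Reads the current value back.
        if (mixer.GetFloat("MusicVolume", out float value))
            Debug.Log($"MusicVolume is {value} dB");

        // Returns control of the parameter to the current snapshot.
        mixer.ClearFloat("MusicVolume");
    }
}
```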
To expose a parameter, go to the Audio Mixer group's Inspector window, right-click the parameter you want to expose, and choose Expose [parameter name] to script. Use this method to set the exposed Audio Mixer group parameter to a new value. Returns false if the exposed parameter was not found or snapshots are currently being edited. Transitions to a weighted mixture of the snapshots specified. This can be used for games that specify the game state as a continuum between states, or for interpolating snapshots from a triangulated map location. The set of snapshots to be mixed. The mix weights for the snapshots specified. Relative time after which the mixture should be reached from any current state. Object representing a group in the mixer. Gain access to the AudioMixer that this AudioMixerGroup belongs to (Read Only). An implementation of IPlayable that controls an audio mixer. Object representing a snapshot in the mixer. Performs an interpolated transition towards this snapshot over the time interval specified. Relative time after which this snapshot should be reached from any current state. The mode in which an AudioMixer should update its time. Update the AudioMixer with scaled game time. Update the AudioMixer with unscaled realtime. A PlayableBinding that contains information representing an AudioPlayableOutput. Creates a PlayableBinding that contains information representing an AudioPlayableOutput. A reference to a UnityEngine.Object that acts as a key for this binding. The name of the AudioPlayableOutput. Returns a PlayableBinding that contains information that is used to create an AudioPlayableOutput. An IPlayableOutput implementation that will be used to play audio. Creates an AudioPlayableOutput in the PlayableGraph. The PlayableGraph that will contain the AudioPlayableOutput. The name of the output. The AudioSource that will play the AudioPlayableOutput source Playable. A new AudioPlayableOutput attached to the PlayableGraph. 
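The snapshot transition methods above can be sketched as follows; the snapshot names "Explore" and "Combat" are hypothetical, chosen only to illustrate FindSnapshot, TransitionTo, and TransitionToSnapshots:

```csharp
using UnityEngine;
using UnityEngine.Audio;

public class SnapshotBlendExample : MonoBehaviour
{
    public AudioMixer mixer; // assigned in the Inspector

    // Crossfades to a single named snapshot over two seconds.
    public void EnterCombat()
    {
        AudioMixerSnapshot combat = mixer.FindSnapshot("Combat"); // hypothetical name
        if (combat != null)
            combat.TransitionTo(2f);
    }

    // Blends two snapshots by weight, e.g. driven by a continuous game state.
    public void BlendStates(float dangerWeight)
    {
        var snapshots = new[] { mixer.FindSnapshot("Explore"), mixer.FindSnapshot("Combat") };
        var weights = new[] { 1f - dangerWeight, dangerWeight };
        mixer.TransitionToSnapshots(snapshots, weights, 0.5f);
    }
}
```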
Gets the state of output playback when seeking. Returns true if the output plays when seeking. Returns false otherwise. Returns an invalid AudioPlayableOutput. Controls whether the output should play when seeking. Set to true to play the output when seeking. Set to false to disable audio scrubbing on this output. Default is true. Represents an audio resource asset that you can play through an AudioSource. The Audio Chorus Filter takes an Audio Clip and processes it to create a chorus effect. Chorus delay in ms. 0.1 to 100.0. Default = 40.0 ms. Chorus modulation depth. 0.0 to 1.0. Default = 0.03. Volume of original signal to pass to output. 0.0 to 1.0. Default = 0.5. Chorus feedback. Controls how much of the wet signal gets fed back into the chorus buffer. 0.0 to 1.0. Default = 0.0. Chorus modulation rate in Hz. 0.0 to 20.0. Default = 0.8 Hz. Volume of 1st chorus tap. 0.0 to 1.0. Default = 0.5. Volume of 2nd chorus tap. This tap is 90 degrees out of phase of the first tap. 0.0 to 1.0. Default = 0.5. Volume of 3rd chorus tap. This tap is 90 degrees out of phase of the second tap. 0.0 to 1.0. Default = 0.5. A container for audio data. Returns true if this audio clip is ambisonic (Read Only). The number of channels in the audio clip (Read Only). The sample frequency of the clip in Hertz (Read Only). Returns true if the AudioClip is ready to play (Read Only). The length of the audio clip in seconds (Read Only). Enable this property to load the AudioClip asynchronously in the background instead of on the main thread. Set this property in the Inspector (Read Only). Returns the current load state of the audio data associated with an AudioClip. The load type of the clip (Read Only). Enable this property in the Inspector to preload audio data from the audio clip when loading the clip Asset (Read Only). The length of the audio clip in samples (Read Only). Creates a user AudioClip with a name and with the given length in samples, channels and frequency. Name of clip. 
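The AudioClipPlayable and AudioPlayableOutput types described above can be wired together in a PlayableGraph. A minimal sketch, assuming a clip and an AudioSource assigned in the Inspector:

```csharp
using UnityEngine;
using UnityEngine.Audio;
using UnityEngine.Playables;

public class ClipPlayableExample : MonoBehaviour
{
    public AudioClip clip;          // assigned in the Inspector
    public AudioSource audioSource; // output target

    PlayableGraph graph;

    void Start()
    {
        graph = PlayableGraph.Create("AudioGraph");
        // Wrap the clip in a looping AudioClipPlayable.
        var clipPlayable = AudioClipPlayable.Create(graph, clip, looping: true);
        // Route it to the AudioSource via an AudioPlayableOutput.
        var output = AudioPlayableOutput.Create(graph, "Audio", audioSource);
        output.SetSourcePlayable(clipPlayable);
        graph.Play();
    }

    void OnDestroy()
    {
        // Playable graphs are not garbage collected; destroy explicitly.
        if (graph.IsValid())
            graph.Destroy();
    }
}
```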
Number of sample frames. Number of channels per frame. Sample frequency of clip. Audio clip is played back in 3D. True if the clip is streamed, that is, if the PCMReaderCallback generates data on the fly. This callback is invoked to generate a block of sample data. Non-streamed clips call this only once at creation time, while streamed clips call it continuously. This callback is invoked whenever the clip loops or changes playback position. A reference to the created AudioClip. Fills an array with sample data from the audio clip. The array you want to fill with raw data from the audio clip. The index of where to start data extraction from the array of raw data. Returns 'true' if the AudioClip retrieves the data successfully. Returns 'false' if the operation was unsuccessful. Loads the asset data of an AudioClip into memory, so it will immediately be ready to play. Returns true if the clip is loaded into memory. Unity calls this delegate each time the AudioClip reads data. Array of floats containing data read from the clip. Unity calls this delegate each time the AudioClip changes read position. The audio clip's new playback position in sample frames. Fills an audio clip with sample data from an array or Span, overwriting existing data if necessary. Linear buffer of samples to write to the audio clip buffer. Offset from the start of the audio clip at which to begin writing sample data. Returns whether all samples were successfully written to the audio clip. This can return false if offsetSamples isn't a valid offset within the existing AudioClip, or if the data is empty. Unloads the audio data associated with the clip. This works only for AudioClips that are based on actual sound file assets. Returns `true` if the audio data unloads successfully. Determines how the audio clip is loaded. The audio data of the clip will be kept in memory in compressed form. The audio data is decompressed when the audio clip is loaded. Streams audio data from disk. An enum containing different compression types. AAC audio compression. Adaptive differential pulse-code modulation. Sony proprietary hardware format. Nintendo ADPCM audio compression format. Sony proprietary hardware codec. MPEG Audio Layer III. 
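The AudioClip.Create overload with the two PCM callbacks described above can be used to generate audio procedurally. A minimal sketch that streams a 440 Hz sine tone (all class and method names here are illustrative):

```csharp
using UnityEngine;

public class ProceduralClipExample : MonoBehaviour
{
    const int SampleRate = 44100;
    int position;

    void Start()
    {
        // Streamed clip: OnAudioRead is called continuously to produce blocks.
        AudioClip clip = AudioClip.Create("SineTone", SampleRate * 2, 1, SampleRate,
                                          true, OnAudioRead, OnAudioSetPosition);
        var source = gameObject.AddComponent<AudioSource>();
        source.clip = clip;
        source.loop = true;
        source.Play();
    }

    // Invoked whenever the clip needs another block of sample data.
    void OnAudioRead(float[] data)
    {
        for (int i = 0; i < data.Length; i++)
        {
            data[i] = Mathf.Sin(2f * Mathf.PI * 440f * position / SampleRate);
            position++;
        }
    }

    // Invoked when the clip loops or the playback position changes.
    void OnAudioSetPosition(int newPosition)
    {
        position = newPosition;
    }
}
```

For a non-streamed clip, the buffer can instead be filled once with SetData, e.g. `clip.SetData(samples, 0)`, and read back with `clip.GetData(samples, 0)`.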
Uncompressed pulse-code modulation. Sony proprietary hardware format. Vorbis compression format. Specifies the current properties or desired properties to be set for the audio system. The length of the DSP buffer in samples, which determines the latency of sounds through the audio output device. The current maximum number of simultaneously audible sounds in the game. The maximum number of managed sounds in the game. Beyond this limit sounds will simply stop playing. The current sample rate of the audio output device used. The current speaker mode used by the audio output device. Value describes the current load state of the audio data associated with an AudioClip. Value returned by AudioClip.loadState for an AudioClip that has failed loading its audio data. Value returned by AudioClip.loadState for an AudioClip that has succeeded loading its audio data. Value returned by AudioClip.loadState for an AudioClip that is currently loading audio data. Value returned by AudioClip.loadState for an AudioClip that has no audio data loaded and where loading has not been initiated yet. The Audio Distortion Filter distorts the sound from an AudioSource or sounds reaching the AudioListener. Distortion value. 0.0 to 1.0. Default = 0.5. The Audio Echo Filter repeats a sound after a given Delay, attenuating the repetitions based on the Decay Ratio. Echo decay per delay. 0 to 1. 1.0 = no decay, 0.0 = total decay (i.e. a simple one-line delay). Default = 0.5. Echo delay in ms. 10 to 5000. Default = 500. Volume of original signal to pass to output. 0.0 to 1.0. Default = 1.0. Volume of echo signal to pass to output. 0.0 to 1.0. Default = 1.0. The Audio High Pass Filter passes high frequencies of an AudioSource, and cuts off signals with frequencies lower than the Cutoff Frequency. Highpass cutoff frequency in Hz. 10.0 to 22000.0. Default = 5000.0. Determines how much the filter's self-resonance is dampened. Representation of a listener in 3D space. The paused state of the audio system. 
This lets you set whether the Audio Listener should be updated in the fixed or dynamic update. Controls the game sound volume (0.0 to 1.0). Provides a block of the listener's (master) output data. The array to populate with audio samples. Its length must be a power of 2. The channel to sample from. Deprecated version. Returns a block of the listener's (master) output data. Provides a block of the listener's (master) spectrum data. The array to populate with audio samples. Its length must be a power of 2. The channel to sample from. The FFTWindow type to use when sampling. Deprecated version. Returns a block of the listener's (master) spectrum data. Number of values (the length of the samples array). Must be a power of 2. Min = 64. Max = 8192. The channel to sample from. The FFTWindow type to use when sampling. The Audio Low Pass Filter passes low frequencies of an AudioSource or all sounds reaching an AudioListener, while removing frequencies higher than the Cutoff Frequency. Returns or sets the current custom frequency cutoff curve. Lowpass cutoff frequency in Hz. 10.0 to 22000.0. Default = 5000.0. Determines how much the filter's self-resonance is dampened. Allows recording the main output of the game, or of specific groups in the AudioMixer. Returns the number of samples available since the last time AudioRenderer.Render was called. This is dependent on the frame capture rate. Number of samples available since the last recorded frame. Performs the recording of the main output, as well as of any optional mixer groups that have been registered via AudioRenderer.AddMixerGroupSink. The buffer to write the sample data to. True if the recording succeeded. Enters audio recording mode. After this, Unity will output silence until AudioRenderer.Stop is called. True if the engine was switched into output recording mode. False if it is already recording. Exits audio recording mode. After this, audio output will be audible again. 
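The listener spectrum API described above takes a power-of-two array (between 64 and 8192 entries) and an FFT window. A minimal sketch of a per-frame master-output analysis; the peak-finding logic is a toy illustration, not part of the API:

```csharp
using UnityEngine;

public class ListenerSpectrumExample : MonoBehaviour
{
    readonly float[] spectrum = new float[512]; // power of 2, within 64..8192

    void Update()
    {
        // Samples the master (listener) output into the frequency domain.
        AudioListener.GetSpectrumData(spectrum, 0, FFTWindow.BlackmanHarris);

        // Toy analysis: find the loudest bin.
        int peak = 0;
        for (int i = 1; i < spectrum.Length; i++)
            if (spectrum[i] > spectrum[peak]) peak = i;

        // Each bin spans (sampleRate / 2) / spectrum.Length in Hz.
        float binWidth = AudioSettings.outputSampleRate / 2f / spectrum.Length;
        Debug.Log($"Peak around {peak * binWidth:F0} Hz");
    }
}
```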
True if the engine was recording when this function was called. The Audio Reverb Filter takes an Audio Clip and distorts it to create a custom reverb effect. Decay HF Ratio: high-frequency to low-frequency decay time ratio. Ranges from 0.1 to 2.0. Default is 0.5. Reverberation decay time at low frequencies in seconds. Ranges from 0.1 to 20.0. Default is 1.0. Reverberation density (modal density) in percent. Ranges from 0.0 to 100.0. Default is 100.0. Reverberation diffusion (echo density) in percent. Ranges from 0.0 to 100.0. Default is 100.0. Mix level of dry signal in output in millibels (mB). Ranges from -10000.0 to 0.0. Default is 0.0. Reference high frequency in hertz (Hz). Ranges from 1000.0 to 20000.0. Default is 5000.0. Reference low frequency in hertz (Hz). Ranges from 20.0 to 1000.0. Default is 250.0. Late reverberation level relative to room effect in millibels (mB). Ranges from -10000.0 to 2000.0. Default is 0.0. Early reflections level relative to room effect in millibels (mB). Ranges from -10000.0 to 1000.0. Default is -10000.0. Late reverberation delay time relative to first reflection in seconds. Ranges from 0.0 to 0.1. Default is 0.04. Late reverberation level relative to room effect in millibels (mB). Ranges from -10000.0 to 2000.0. Default is 0.0. Set/Get reverb preset properties. Room effect level at low frequencies in millibels (mB). Ranges from -10000.0 to 0.0. Default is 0.0. Room effect high-frequency level relative to the low-frequency level in millibels (mB). Ranges from -10000.0 to 0.0. Default is 0.0. Room effect low-frequency level in millibels (mB). Ranges from -10000.0 to 0.0. Default is 0.0. Reverb presets used by the Reverb Zone class and the audio reverb filter. Alley preset. Arena preset. Auditorium preset. Bathroom preset. Carpeted hallway preset. Cave preset. City preset. Concert hall preset. Dizzy preset. Drugged preset. Forest preset. Generic preset. Hallway preset. Hangar preset. Livingroom preset. Mountains preset. 
No reverb preset selected. Padded cell preset. Parking lot preset. Plain preset. Psychotic preset. Quarry preset. Room preset. Sewer pipe preset. Stone corridor preset. Stoneroom preset. Underwater preset. User defined preset. Reverb Zones are used when you want to create location-based ambient effects in the Scene. High-frequency to mid-frequency decay time ratio. Reverberation decay time at mid frequencies. Value that controls the modal density in the late reverberation decay. Value that controls the echo density in the late reverberation decay. The distance from the centerpoint beyond which the reverb will not have any effect. Default = 15.0. The distance from the centerpoint at which the reverb will have full effect. Default = 10.0. Early reflections level relative to room effect. Initial reflection delay time. Late reverberation level relative to room effect. Late reverberation delay time relative to initial reflection. Set/Get reverb preset properties. Room effect level (at mid frequencies). Relative room effect level at high frequencies. Relative room effect level at low frequencies. Like rolloffScale in global settings, but for the reverb room size effect. Reference high frequency (Hz). Reference low frequency (Hz). Rolloff modes that a 3D sound can have in an audio source. Use this when you want to use a custom rolloff. Use this mode when you want to lower the volume of your sound over the distance. Use this mode when you want a real-world rolloff. Controls the global audio settings from script. Returns the speaker mode capability of the current audio driver (Read Only). Returns the current time of the audio system. Get the mixer's current output rate. AudioSettings.speakerMode is deprecated. Use AudioSettings.GetConfiguration and AudioSettings.Reset to adjust audio settings instead. 
A delegate called whenever the global audio settings are changed, either by AudioSettings.Reset or by an external device change, such as the OS control panel changing the sample rate, or the default output device changing, for example when plugging in an HDMI monitor or a USB headset. True if the change was caused by a device change. Returns the current configuration of the audio device and system. The values in the struct may then be modified and reapplied via AudioSettings.Reset. The new configuration to be applied. Get the mixer's buffer size in samples. The length of each buffer in the ring buffer. The number of buffers. Returns the name of the spatializer selected on the currently-running platform. The spatializer plugin name. Returns an array with the names of all the available spatializer plugins. An array of spatializer names. This class encapsulates properties and methods to handle the audio output thread on iOS/Android. Returns true if the audio output thread is working. Returns true if the current device media volume is 0. Set this property to true to make the audio output thread automatically stop when the device media volume is set to 0, and start again when the volume is not 0. A delegate called whenever the device mute state is changed. True if the current device media volume is 0, false otherwise. Starts the audio output thread on Android/iOS. Stops the audio thread on Android/iOS. Unity calls this event whenever the global audio settings change. This parameter is true if the user changes the audio output device during runtime. Changes the device configuration and invokes the AudioSettings.OnAudioConfigurationChanged delegate with the argument deviceWasChanged=false. There's no guarantee that the exact settings specified are used, but Unity automatically uses the closest match that it supports. Note: This can cause main thread stalls if AudioSettings.Reset is called when objects are loading asynchronously. The new configuration to be used. 
True if all settings could be successfully applied. Sets the spatializer plugin for all platform groups. If a null or empty string is passed in, the existing spatializer plugin will be cleared. The spatializer plugin name. A representation of audio sources in 3D. Bypass effects (applied from filter components or global listener filters). When set, global effects on the AudioListener don't apply to the audio signal generated by the AudioSource. They also don't apply if the AudioSource is playing into a mixer group. When set, the signal from the AudioSource is not routed into the global reverb associated with reverb zones. The default AudioClip to play. Sets the Doppler scale for this AudioSource. Gets or sets the gamepad audio output type for this audio source. Allows the AudioSource to play even though AudioListener.pause is set to true. This is useful for menu element sounds or background music in pause menus. This makes the audio source not take into account the volume of the audio listener. Returns whether the AudioSource is currently playing an AudioResource (Read Only). True if all sounds played by the AudioSource, such as the main sound started by Play() or playOnAwake, and one-shots, are culled by the audio system. Checks if the audio clip is looping. The distance where the sound either becomes inaudible or stops attenuating, depending on the rolloff mode. Within the min distance, the AudioSource will cease to grow louder in volume. Mutes or unmutes the AudioSource. Muting sets the volume to 0; unmuting restores the original volume. The target group to which the AudioSource should route its signal. Pan has been deprecated. Use panStereo instead. PanLevel has been deprecated. Use spatialBlend instead. Pans a playing sound in a stereo way (left or right). This only applies to sounds that are Mono or Stereo. The pitch of the audio source. Enable this property to automatically play the audio source when the component or GameObject becomes active. 
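The AudioSettings.GetConfiguration / AudioSettings.Reset workflow and the OnAudioConfigurationChanged delegate described earlier can be sketched as follows; note that Unity applies the closest supported match, not necessarily the exact values requested:

```csharp
using UnityEngine;

public class AudioConfigExample : MonoBehaviour
{
    void Start()
    {
        AudioSettings.OnAudioConfigurationChanged += OnChanged;

        // Read the current configuration, tweak it, and reapply.
        AudioConfiguration config = AudioSettings.GetConfiguration();
        config.dspBufferSize = 256; // request lower output latency
        if (!AudioSettings.Reset(config))
            Debug.LogWarning("Audio configuration could not be applied.");
    }

    void OnChanged(bool deviceWasChanged)
    {
        // true when an external device change (e.g. plugging in a headset)
        // triggered the callback; false after a Reset call.
        Debug.Log($"Audio configuration changed, device change: {deviceWasChanged}");
    }
}
```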
Sets the priority of the AudioSource. The default AudioResource to play. The amount by which the signal from the AudioSource will be mixed into the global reverb associated with the Reverb Zones. Sets/Gets how the AudioSource attenuates over distance. Sets how much this AudioSource is affected by 3D spatialisation calculations (attenuation, Doppler, etc.). 0.0 makes the sound fully 2D, 1.0 makes it fully 3D. Enables or disables spatialization. Determines if the spatializer effect is inserted before or after the effect filters. Sets the spread angle (in degrees) of a 3D stereo or multichannel sound in speaker space. Playback position in seconds. The current playback position of the AudioSource in PCM samples. Whether the Audio Source should be updated in the fixed or dynamic update. The volume of the audio source (0.0 to 1.0). Disables audio output to a gamepad for this audio source. Returns true if successful. Check if the platform supports an audio output type on gamepads. The desired output type. Returns true if the gamepad supports the specified audio output type. Reads a user-defined parameter of a custom ambisonic decoder effect that is attached to an AudioSource. Zero-based index of the user-defined parameter to be read. Return value of the user-defined parameter that is read. True if the parameter could be read. Get the current custom curve for the given AudioSourceCurveType. The curve type to get. The custom AnimationCurve corresponding to the given curve type. Provides a block of the currently playing source's output data. The array to populate with audio samples. Its length must be a power of 2. The channel to sample from. Deprecated version. Returns a block of the currently playing source's output data. Reads a user-defined parameter of a custom spatializer effect that is attached to an AudioSource. Zero-based index of the user-defined parameter to be read. Return value of the user-defined parameter that is read. True if the parameter could be read. 
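The spatialization properties and custom curve types described above can be combined to configure a fully 3D source. A minimal sketch with an assumed linear rolloff between illustrative min/max distances:

```csharp
using UnityEngine;

public class SpatialSourceExample : MonoBehaviour
{
    void Start()
    {
        var source = gameObject.AddComponent<AudioSource>();
        source.spatialBlend = 1f;  // fully 3D (0.0 would be fully 2D)
        source.minDistance = 2f;   // no attenuation inside 2 m
        source.maxDistance = 30f;  // inaudible beyond 30 m (illustrative values)
        source.rolloffMode = AudioRolloffMode.Custom;

        // Linear fade from full volume at minDistance to silence at maxDistance.
        var rolloff = AnimationCurve.Linear(source.minDistance, 1f, source.maxDistance, 0f);
        source.SetCustomCurve(AudioSourceCurveType.CustomRolloff, rolloff);
    }
}
```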
Provides the block of audio frequencies (spectrum data) of the AudioSource that is currently playing. The array to populate with frequency domain representations of audio samples. The array length must be a power of 2 (such as 128, 256, 512), and must not be less than 64 or greater than 8192. The channel to sample from. The FFTWindow type to use when sampling. This version of GetSpectrumData is obsolete. The number of samples to retrieve. Must be a power of 2. The channel to sample from. The FFTWindow type to use when sampling. Returns a block of the currently playing source's spectrum data. Pauses playing the clip. Plays the clip. Deprecated. Delay in number of samples, assuming a 44100 Hz sample rate (meaning that Play(44100) will delay the playing by exactly 1 sec). Plays an AudioClip at a given position in world space. Audio data to play. Position in world space from which sound originates. Playback volume (range from 0.0 to 1.0). Plays the clip with a delay specified in seconds. Users are advised to use this function instead of the old Play(delay) function that took a delay specified in samples relative to a reference rate of 44.1 kHz. Delay time specified in seconds. Plays an AudioClip, and scales the AudioSource volume by volumeScale. The clip being played. The scale of the volume. Unity automatically clamps negative scales to zero. Note: Scales larger than one might cause clipping. 
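The playback methods above differ in overlap, positioning, and timing. A minimal sketch contrasting them; each method below is an independent alternative, and the clip/source references are assumed to be assigned in the Inspector:

```csharp
using UnityEngine;

public class PlaybackExample : MonoBehaviour
{
    public AudioSource source;
    public AudioClip sfx;

    // Plays the clip assigned to source.clip, stopping any previous playback.
    void PlayAssignedClip()   => source.Play();

    // Overlapping one-shot at 80% of the source volume; does not interrupt Play().
    void PlayOverlappingSfx() => source.PlayOneShot(sfx, 0.8f);

    // Fire-and-forget playback at a world position, on a temporary source.
    void PlayAtPosition()     => AudioSource.PlayClipAtPoint(sfx, new Vector3(0f, 0f, 5f));

    // Starts after a delay specified in seconds (preferred over Play(samples)).
    void PlayAfterDelay()     => source.PlayDelayed(1.5f);

    // Sample-accurate start on the absolute DSP timeline (see PlayScheduled).
    void PlayOnDspTimeline()  => source.PlayScheduled(AudioSettings.dspTime + 2.0);
}
```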
Enable the audio source to play through a specific gamepad. Slot number of the gamepad (0-3). Returns true if enabling audio output through this user's controller was successful. Plays the clip at a specific time on the absolute timeline that AudioSettings.dspTime reads from. Time in seconds on the absolute timeline that AudioSettings.dspTime refers to for when the sound should start playing. Sets a user-defined parameter of a custom ambisonic decoder effect that is attached to an AudioSource. Zero-based index of the user-defined parameter to be set. New value of the user-defined parameter. True if the parameter could be set. Set the custom curve for the given AudioSourceCurveType. The curve type that should be set. The curve that should be applied to the given curve type. Changes the time at which a sound that has already been scheduled to play will end. Notice that depending on the timing, not all rescheduling requests can be fulfilled. Time in seconds. Changes the time at which a sound that has already been scheduled to play will start. Time in seconds. Sets a user-defined parameter of a custom spatializer effect that is attached to an AudioSource. Zero-based index of the user-defined parameter to be set. New value of the user-defined parameter. True if the parameter could be set. Stops playing the clip. Unpauses the paused playback of this AudioSource. This defines the curve type of the different custom curves that can be queried and set within the AudioSource. Custom Volume Rolloff. Reverb Zone Mix. The Spatial Blend. The 3D Spread. These are speaker types defined for use with AudioSettings.speakerMode. Channel count is set to 6. 5.1 speaker setup. This includes front left, front right, center, rear left, rear right and a subwoofer. Channel count is set to 8. 7.1 speaker setup. This includes front left, front right, center, rear left, rear right, side left, side right and a subwoofer. Channel count is set to 1. The speakers are monaural. Channel count is set to 2. 
Stereo output, but data is encoded in a way that is picked up by a Prologic/Prologic2 decoder and split into a 5.1 speaker setup. Channel count is set to 4. 4 speaker setup. This includes front left, front right, rear left, rear right. Channel count is unaffected. Channel count is set to 2. The speakers are stereo. This is the editor default. Channel count is set to 5. 5 speaker setup. This includes front left, front right, center, rear left, rear right. Describes when an AudioSource or AudioListener is updated. Updates the source or listener in the MonoBehaviour.FixedUpdate loop if it is attached to a Rigidbody, and in the dynamic MonoBehaviour.Update loop otherwise. Updates the source or listener in the dynamic MonoBehaviour.Update loop. Updates the source or listener in the MonoBehaviour.FixedUpdate loop. Provides access to the audio samples generated by Unity objects such as VideoPlayer. Number of sample frames available for consuming with Experimental.Audio.AudioSampleProvider.ConsumeSampleFrames. The number of audio channels per sample frame. Pointer to the native function that provides access to audio sample frames. Enables the Experimental.Audio.AudioSampleProvider.sampleFramesAvailable events. If true, buffers produced by ConsumeSampleFrames will get padded with silence if fewer sample frames are available than asked for. Otherwise, the extra sample frames in the buffer will be left unchanged. Number of sample frames that can still be written to by the sample producer before overflowing. When the free sample frame count falls below this threshold, the Experimental.Audio.AudioSampleProvider.sampleFramesAvailable event and associated native callback are emitted. Unique identifier for this instance. The maximum number of sample frames that can be accumulated inside the internal buffer before an overflow event is emitted. Object where this provider came from. 
Invoked when the number of available sample frames goes beyond the threshold set with Experimental.Audio.AudioSampleProvider.freeSampleFrameCountLowThreshold. Number of available sample frames. Invoked when the number of available sample frames goes beyond the maximum that fits in the internal buffer. The number of sample frames that were dropped due to the overflow. The expected playback rate for the sample frames produced by this class. Index of the track in the object that created this provider. True if the object is valid. Clear the native handler set with Experimental.Audio.AudioSampleProvider.SetSampleFramesAvailableNativeHandler. Clear the native handler set with Experimental.Audio.AudioSampleProvider.SetSampleFramesOverflowNativeHandler. Consume sample frames from the internal buffer. Buffer where the consumed samples will be transferred. How many sample frames were written into the buffer passed in. Type that represents the native function pointer for consuming sample frames. Id of the provider. See Experimental.Audio.AudioSampleProvider.id. Pointer to the sample frames buffer to fill. The actual C type is float*. Number of sample frames that can be written into interleavedSampleFrames. Release internal resources. Inherited from IDisposable. Type that represents the native function pointer for handling sample frame events. User data specified when the handler was set. The actual C type is void*. Id of the provider. See Experimental.Audio.AudioSampleProvider.id. Number of sample frames available or overflowed, depending on event type. Delegate for sample frame events. Provider emitting the event. How many sample frames are available, or were dropped, depending on the event. Set the native event handler for events emitted when the number of available sample frames crosses the threshold. Pointer to the function to invoke when the event is emitted. User data to be passed to the handler when invoked. The actual C type is void*. 
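The sampleFramesAvailable event and ConsumeSampleFrames method described above are typically used with a VideoPlayer in API-only audio output mode. A minimal sketch; this is an experimental API whose surface may change between Unity versions, and the class name here is illustrative:

```csharp
using Unity.Collections;
using UnityEngine;
using UnityEngine.Experimental.Audio;
using UnityEngine.Video;

public class SampleProviderExample : MonoBehaviour
{
    AudioSampleProvider provider;

    void Start()
    {
        var player = gameObject.AddComponent<VideoPlayer>();
        player.audioOutputMode = VideoAudioOutputMode.APIOnly;
        player.prepareCompleted += OnPrepared;
        player.Prepare();
    }

    void OnPrepared(VideoPlayer vp)
    {
        provider = vp.GetAudioSampleProvider(0); // first audio track
        provider.sampleFramesAvailable += OnFramesAvailable;
        provider.enableSampleFramesAvailableEvents = true;
        vp.Play();
    }

    void OnFramesAvailable(AudioSampleProvider p, uint frameCount)
    {
        // Samples are interleaved: frameCount frames * channelCount channels.
        using (var buffer = new NativeArray<float>((int)(frameCount * p.channelCount), Allocator.Temp))
        {
            uint written = p.ConsumeSampleFrames(buffer);
            // 'buffer' now holds 'written' interleaved sample frames for processing.
        }
    }
}
```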
Set the native event handler for events emitted when the internal sample frame buffer overflows. Pointer to the function to invoke when the event is emitted. User data to be passed to the handler when invoked. The actual C type is void*. Spectrum analysis windowing types. W[n] = 0.42 - 0.5 * cos(2π * n/N) + 0.08 * cos(4π * n/N). W[n] = 0.35875 - 0.48829 * cos(2π * n/N) + 0.14128 * cos(4π * n/N) - 0.01168 * cos(6π * n/N). W[n] = 0.54 - 0.46 * cos(2π * n/N). W[n] = 0.5 * (1.0 - cos(2π * n/N)). W[n] = 1.0. W[n] = 1 - abs(2n/N - 1). Gamepad audio output types. Audio output is through a secondary gamepad's vibration device if supported. Audio output is through the gamepad's audio speaker if the gamepad supports playing audio. Audio output is through the gamepad's vibration device if the gamepad supports playing audio as vibration. Use this class to record to an AudioClip using a connected microphone. A list of available microphone devices, identified by name. Stops recording. The name of the device. Get the frequency capabilities of a device. The name of the device. Returns the minimum sampling frequency of the device. Returns the maximum sampling frequency of the device. Get the position in samples of the recording. The name of the device. Query if a device is currently recording. The name of the device. Start recording with the device. The name of the device. Indicates whether the recording should continue when lengthSec is reached, wrapping around and recording from the beginning of the AudioClip. The length, in seconds, of the AudioClip produced by the recording. The sample rate of the AudioClip produced by the recording. The function returns null if the recording fails to start. MovieTexture has been removed. Use VideoPlayer instead. 
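The Microphone workflow described above (enumerate devices, query capabilities, start, wait for the recording position to advance) can be sketched as follows; the class name is illustrative, and the snippet requires the Unity runtime and an attached AudioSource.

```csharp
using UnityEngine;

// Sketch: record into a looping 5-second AudioClip from the first
// available microphone and play it back.
public class MicLoop : MonoBehaviour
{
    void Start()
    {
        if (Microphone.devices.Length == 0) return;
        string device = Microphone.devices[0];

        Microphone.GetDeviceCaps(device, out int minFreq, out int maxFreq);
        // (0, 0) means the device supports any frequency; fall back to 44100.
        int frequency = maxFreq > 0 ? maxFreq : 44100;

        AudioClip clip = Microphone.Start(device, true, 5, frequency);
        if (clip == null) return; // recording failed to start

        // Wait until the device has actually started filling the clip.
        while (Microphone.GetPosition(device) <= 0) { }

        var source = GetComponent<AudioSource>();
        source.clip = clip;
        source.loop = true;
        source.Play();
    }
}
```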
The Audio module implements Unity's audio system. A structure describing the webcam device. Possible WebCamTexture resolutions for this device. A string identifier used to create a depth data based WebCamTexture. Returns true if the camera supports automatic focusing on points of interest and false otherwise. True if the camera faces the same direction as the screen, false otherwise. Property of type WebCamKind denoting the kind of webcam device. A human-readable name of the device. Varies across different systems. Enum representing the different types of web camera device. Camera which supports synchronized color and depth data (currently these are only the dual back and true depth cameras on the latest iOS devices). A Telephoto camera device. These devices have a longer focal length than a wide-angle camera. Ultra wide angle camera. These devices have a shorter focal length than a wide-angle camera. The camera type is unknown. Wide angle (default) camera. WebCam Textures are textures onto which the live video input is rendered. This property allows you to set or get the auto focus point of the camera. This works only on Android and iOS devices. Set this to specify the name of the device to use. Return a list of available devices. Did the video buffer update this frame? This property is true if the texture is based on depth data. Returns whether the camera is currently playing. Set the requested frame rate of the camera device (in frames per second). Set the requested height of the camera device. Set the requested width of the camera device. Returns a clockwise angle (in degrees), which can be used to rotate a polygon so camera contents are shown in the correct orientation. Returns whether the texture image is vertically flipped. Create a WebCamTexture. 
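The device enumeration and playback properties above can be combined as in this sketch (class name illustrative; Unity runtime required): pick a front-facing device if one exists, create the texture, and start it.

```csharp
using UnityEngine;

// Sketch: start a front-facing webcam if available, else the first device.
public class WebcamStarter : MonoBehaviour
{
    WebCamTexture webcam;

    void Start()
    {
        if (WebCamTexture.devices.Length == 0) return;
        string deviceName = WebCamTexture.devices[0].name;
        foreach (WebCamDevice device in WebCamTexture.devices)
        {
            if (device.isFrontFacing) { deviceName = device.name; break; }
        }
        // Width/height/FPS are requests; the driver picks the closest mode.
        webcam = new WebCamTexture(deviceName, 1280, 720, 30);
        GetComponent<Renderer>().material.mainTexture = webcam;
        webcam.Play();
    }

    void Update()
    {
        if (webcam != null && webcam.didUpdateThisFrame)
        {
            // videoRotationAngle and videoVerticallyMirrored can be used
            // here to orient the displayed geometry correctly.
        }
    }
}
```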
The name of the video input device to be used. The requested width of the texture. The requested height of the texture. The requested frame rate of the texture. Overloads of this constructor accept subsets of these parameters. Gets the pixel color at coordinates (x, y). The x coordinate of the pixel to get. The range is 0 through (texture width - 1). The y coordinate of the pixel to get. The range is 0 through (texture height - 1). The pixel color. Gets the pixel color data for a mipmap level as Color structs. An array that contains the pixel colors. Gets the pixel color data for part of the texture as Color structs. The starting x position of the section to fetch. The starting y position of the section to fetch. The width of the section to fetch. The height of the section to fetch. An array that contains the pixel colors. Gets the pixel color data for a mipmap level as Color32 structs. An optional array to write the pixel data to. An array that contains the pixel colors. Gets the pixel color data for a mipmap level as Color32 structs. An optional array to write the pixel data to. 
An array that contains the pixel colors. Pauses the camera. Starts the camera. Stops the camera.
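The pixel-access methods above can be used together as in this sketch (method name illustrative; Unity runtime required). GetPixels32 is the cheapest full-frame read, while GetPixel and the region overload of GetPixels serve targeted reads.

```csharp
using UnityEngine;

// Sketch: read pixel data back from a WebCamTexture after a frame update.
public class WebcamReader : MonoBehaviour
{
    public void ReadRegion(WebCamTexture webcam)
    {
        if (!webcam.didUpdateThisFrame) return;

        // Whole frame as Color32; avoids per-pixel float conversion.
        Color32[] frame = webcam.GetPixels32();

        // Single pixel at the centre of the frame.
        Color centre = webcam.GetPixel(webcam.width / 2, webcam.height / 2);

        // 64x64 block starting at the bottom-left corner.
        Color[] block = webcam.GetPixels(0, 0, 64, 64);
    }
}
```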