This page focuses on the contributors to output latency, but a similar discussion applies to input latency.
Assuming the analog circuitry does not contribute significantly, the major surface-level contributors to audio latency are the following:
- Total number of buffers in pipeline
- Size of each buffer, in frames
- Additional latency after the app processor, such as from a DSP
As accurate as the above list of contributors may be, it is also misleading. The reason is that buffer count and buffer size are more of an effect than a cause. What usually happens is that a given buffer scheme is implemented and tested, but during testing, an audio underrun or overrun is heard as a "click" or "pop." To compensate, the system designer then increases buffer sizes or buffer counts. This has the desired result of eliminating the underruns or overruns, but it also has the undesired side effect of increasing latency. For more information about buffer sizes, see the video Audio latency: buffer sizes.
A better approach is to understand the causes of the underruns and overruns, and then correct those. This eliminates the audible artifacts and may permit even smaller or fewer buffers and thus reduce latency.
In our experience, the most common causes of underruns and overruns include:
- Linux CFS (Completely Fair Scheduler)
- high-priority threads with SCHED_FIFO scheduling
- priority inversion
- long scheduling latency
- long-running interrupt handlers
- long interrupt disable time
- power management
- security kernels
Linux CFS and SCHED_FIFO scheduling
The Linux CFS is designed to be fair to competing workloads sharing a common CPU resource. This fairness is represented by a per-thread nice parameter. The nice value ranges from -20 (least nice, or most CPU time allocated) to 19 (nicest, or least CPU time allocated). In general, all threads with a given nice value receive approximately equal CPU time, and threads with a numerically lower nice value should expect to receive more CPU time. However, CFS is "fair" only over relatively long periods of observation. Over short observation windows, CFS may allocate the CPU resource in unexpected ways. For example, it may move the CPU from a thread with numerically low niceness to a thread with numerically high niceness. In the case of audio, this can result in an underrun or overrun.
The obvious solution is to avoid CFS for high-performance audio threads. Beginning with Android 4.1, such threads use the SCHED_FIFO scheduling policy rather than the SCHED_NORMAL (also called SCHED_OTHER) scheduling policy implemented by CFS.
Though the high-performance audio threads now use SCHED_FIFO, they are still susceptible to other higher-priority SCHED_FIFO threads. These are typically kernel worker threads, but there may also be a few non-audio user threads with policy SCHED_FIFO. The available SCHED_FIFO priorities range from 1 to 99. The audio threads run at priority 2 or 3. This leaves priority 1 available for lower-priority threads, and priorities 4 to 99 for higher-priority threads. We recommend that you use priority 1 whenever possible, and reserve priorities 4 to 99 for those threads that are guaranteed to complete within a bounded amount of time, execute with a period shorter than the period of audio threads, and are known to not interfere with scheduling of audio threads.
For more information on the theory of assignment of fixed priorities, see the Wikipedia article Rate-monotonic scheduling (RMS). A key point is that fixed priorities should be allocated strictly based on period, with higher priorities assigned to threads of shorter periods, not based on perceived "importance." Non-periodic threads may be modeled as periodic threads, using the maximum frequency of execution and maximum computation per execution. If a non-periodic thread cannot be modeled as a periodic thread (for example it could execute with unbounded frequency or unbounded computation per execution), then it should not be assigned a fixed priority as that would be incompatible with the scheduling of true periodic threads.
Priority inversion

Priority inversion is a classic failure mode of real-time systems, where a higher-priority task is blocked for an unbounded time waiting for a lower-priority task to release a resource such as (shared state protected by) a mutex. See the article "Avoiding priority inversion" for techniques to mitigate it.
Scheduling latency

Scheduling latency is the time between when a thread becomes ready to run and when the resulting context switch completes, so that the thread actually runs on a CPU. The shorter the latency the better; anything over two milliseconds causes problems for audio. Long scheduling latency is most likely to occur during mode transitions, such as bringing up or shutting down a CPU, switching between a security kernel and the normal kernel, switching from full-power to low-power mode, or adjusting the CPU clock frequency and voltage.
Interrupts

In many designs, CPU 0 services all external interrupts. So a long-running interrupt handler may delay other interrupts, in particular audio direct memory access (DMA) completion interrupts. Design interrupt handlers to finish quickly and defer lengthy work to a thread (preferably a CFS thread or a SCHED_FIFO thread of priority 1).
Equivalently, disabling interrupts on CPU 0 for a long period has the same result of delaying the servicing of audio interrupts. Long interrupt disable times typically happen while waiting for a kernel spin lock. Review these spin locks to ensure they are bounded.
Power, performance, and thermal management
Power management is a broad term that encompasses efforts to monitor and reduce power consumption while optimizing performance. Thermal management and computer cooling are similar but seek to measure and control heat to avoid damage due to excess heat. In the Linux kernel, the CPU governor is responsible for low-level policy, while user mode configures high-level policy. Techniques used include:
- dynamic voltage scaling
- dynamic frequency scaling
- dynamic core enabling
- cluster switching
- power gating
- hotplug (hotswap)
- various sleep modes (halt, stop, idle, suspend, etc.)
- process migration
- processor affinity
Some management operations can result in "work stoppages" or times during which there is no useful work performed by the application processor. These work stoppages can interfere with audio, so such management should be designed for an acceptable worst-case work stoppage while audio is active. Of course, when thermal runaway is imminent, avoiding permanent damage is more important than audio!
Security processing

A security kernel for digital rights management (DRM) may run on the same application processor core(s) as those used for the main operating system kernel and application code. Any time during which a security kernel operation is active on a core is effectively a stoppage of ordinary work that would normally run on that core. In particular, this may include audio work. By its nature, the internal behavior of a security kernel is inscrutable from higher-level layers, and thus any performance anomalies caused by a security kernel are especially pernicious. For example, security kernel operations do not typically appear in context switch traces. We call this "dark time" — time that elapses yet cannot be observed. Security kernels should be designed for an acceptable worst-case work stoppage while audio is active.