Overlap–save method

In signal processing, overlap–save is the traditional name for an efficient way to evaluate the discrete convolution between a very long signal x[n] and a finite impulse response (FIR) filter h[n]:

{{Equation box 1

|indent= |cellpadding= 0 |border= 0 |background colour=white

|equation={{NumBlk|:|y[n] = x[n] * h[n] \ \triangleq\ \sum_{m=-\infty}^{\infty} h[m] \cdot x[n - m] = \sum_{m=1}^{M} h[m] \cdot x[n - m],

|{{EquationRef|Eq.1}}}}}}

where {{nowrap|h[m] {{=}} 0}} for m outside the region {{nowrap|[1, M]}}.

This article uses common abstract notations, such as y(t) = x(t) * h(t), or y(t) = \mathcal{H}\{x(t)\}, in which it is understood that the functions should be thought of in their totality, rather than at specific instants t (see Convolution#Notation).

[[Image:Overlap-save algorithm.svg|thumb|right|Fig 1: Overlap–save algorithm (compare Fig 2.35, fourth trace, in Rabiner & Gold). The FIR filter is a boxcar lowpass with M=16 samples, the length of the segments is L=100 samples, and the overlap is 15 samples.]]

The concept is to compute short segments of y[n], each of an arbitrary length L, and concatenate the segments together. That requires input segments that are longer than L and overlap the next input segment; the overlapped data gets "saved" and used a second time. First we describe that process with just conventional convolution for each output segment. Then we describe how to replace that convolution with a more efficient method.

Consider a segment that begins at n = kL + M, for any integer k, and define:

:x_k[n] \ \triangleq \ \begin{cases} x[n+kL], & 1 \le n \le L+M-1\\ 0, & \textrm{otherwise}. \end{cases}

:y_k[n] \ \triangleq \ x_k[n]*h[n] = \sum_{m=1}^{M} h[m] \cdot x_k[n-m].

Then, for kL+M+1 \le n \le kL+L+M, and equivalently M+1 \le n-kL \le L+M, we can write:

:y[n] = \sum_{m=1}^{M} h[m] \cdot x_k[n-kL-m] \ \ \triangleq \ \ y_k[n-kL].

With the substitution j = n-kL, the task is reduced to computing y_k[j] for M+1 \le j \le L+M. These steps are illustrated in the first 3 traces of Figure 1, except that the desired portion of the output (third trace) corresponds to 1 ≤ {{mvar|j}} ≤ {{mvar|L}}.{{efn-ua

|Shifting the undesirable edge effects to the last M-1 outputs is a potential run-time convenience, because the IDFT can be computed in the y[n] buffer, instead of being computed and copied. Then the edge effects can be overwritten by the next IDFT. A subsequent footnote explains how the shift is done, by a time-shift of the impulse response.}}
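
For illustration, the segment-wise procedure just described translates directly into, for example, NumPy. The following is a sketch using 0-based indexing; the filter, signal, and segment sizes are arbitrary choices, not part of the method:

import numpy as np

# Each input segment has length L+M-1 and overlaps the previous one by M-1
# samples; direct ("valid") convolution of each segment with h yields L output
# samples, which are appended to the output stream.
rng = np.random.default_rng(0)
M, L = 16, 100                        # filter length and output-segment length
h = rng.standard_normal(M)            # an arbitrary FIR impulse response
x = rng.standard_normal(10 * L)       # a long input signal

x_pad = np.concatenate([np.zeros(M - 1), x])    # "saved" samples ahead of segment 0
y = np.zeros(len(x))
for k in range(len(x) // L):
    seg = x_pad[k * L : k * L + L + M - 1]      # overlaps previous segment by M-1 samples
    y[k * L : (k + 1) * L] = np.convolve(seg, h, mode="valid")   # L useful outputs

assert np.allclose(y, np.convolve(x, h)[: len(x)])   # matches ordinary linear convolution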

If we periodically extend x_k[n] with period N ≥ L + M − 1, according to:

:x_{k,N}[n] \ \triangleq \ \sum_{\ell=-\infty}^{\infty} x_k[n - \ell N],

the convolutions (x_{k,N})*h and x_k*h are equivalent in the region M+1 \le n \le L+M. It is therefore sufficient to compute the N-point circular (or cyclic) convolution of x_k[n] with h[n] in the region [1, N].  The subregion [M + 1, L + M] is appended to the output stream, and the other values are discarded.  The advantage is that the circular convolution can be computed more efficiently than linear convolution, according to the circular convolution theorem:

{{Equation box 1

|indent= |cellpadding= 0 |border= 0 |background colour=white

|equation={{NumBlk|:|y_k[n]\ =\ \scriptstyle \text{IDFT}_N \displaystyle (\ \scriptstyle \text{DFT}_N \displaystyle (x_k[n])\cdot\ \scriptstyle \text{DFT}_N \displaystyle (h[n])\ ),    

|{{EquationRef|Eq.2}}}}}}

where:

  • DFT<sub>N</sub> and IDFT<sub>N</sub> refer to the discrete Fourier transform and its inverse, evaluated over N discrete points, and
  • {{math|L}} is customarily chosen such that {{math|N {{=}} L+M-1}} is an integer power of 2, and the transforms are implemented with the FFT algorithm, for efficiency.
  • The leading and trailing edge-effects of circular convolution are overlapped and added,{{efn-ua

|Not to be confused with the Overlap-add method, which preserves separate leading and trailing edge-effects.

}} and subsequently discarded.{{efn-ua

|1=The edge effects can be moved from the front to the back of the IDFT output by replacing \scriptstyle \text{DFT}_N \displaystyle (h[n]) with \scriptstyle \text{DFT}_N \displaystyle (h[n+M-1]) =\ \scriptstyle \text{DFT}_N \displaystyle (h[n+M-1-N]), meaning that the N-length buffer is circularly-shifted (rotated) by M-1 samples. Thus the h[M] element is at n=1, the h[M-1] element is at n=N, h[M-2] is at n=N-1, and so on.}}
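
As a numerical check of Eq.2, the following NumPy sketch (with arbitrary test data) computes the circular convolution of one segment via the DFT and confirms that it agrees with linear convolution everywhere except its first M-1 samples, which are the wrapped edge effects to be discarded:

import numpy as np

rng = np.random.default_rng(1)
M, L = 16, 100
N = L + M - 1                          # any N >= L + M - 1 works
h = rng.standard_normal(M)
xk = rng.standard_normal(N)            # one segment x_k of length L + M - 1

circ = np.fft.ifft(np.fft.fft(xk) * np.fft.fft(h, N)).real    # Eq.2
lin = np.convolve(xk, h)                                       # linear convolution, length N + M - 1

assert np.allclose(circ[M - 1 :], lin[M - 1 : N])     # last L samples are artifact-free
assert not np.allclose(circ[: M - 1], lin[: M - 1])   # first M-1 samples contain wrap-around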

Pseudocode

(Overlap-save algorithm for linear convolution)

h = FIR_impulse_response
M = length(h)
overlap = M − 1
N = 8 × overlap    (see next section for a better choice)
step_size = N − overlap
H = DFT(h, N)
position = 0

while position + N ≤ length(x)
    yt = IDFT(DFT(x(position+(1:N))) × H)
    y(position+(1:step_size)) = yt(M : N)    (discard M−1 y-values)
    position = position + step_size
end
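
The pseudocode translates almost line for line into, for example, NumPy. The sketch below is illustrative (the function name and the final check are ours, not part of the algorithm); it uses 0-based indexing, so the retained samples yt(M : N) become yt[M-1 : N], and, like the pseudocode, it produces only the fully overlapped output samples and ignores a final partial block:

import numpy as np

def overlap_save(x, h):
    M = len(h)
    overlap = M - 1
    N = 8 * overlap                    # FFT size; see the next section for a better choice
    step_size = N - overlap
    H = np.fft.fft(h, N)               # DFT of the zero-padded impulse response
    y = np.zeros(len(x), dtype=complex)
    position = 0
    while position + N <= len(x):
        yt = np.fft.ifft(np.fft.fft(x[position : position + N]) * H)
        y[position : position + step_size] = yt[overlap:]    # discard M-1 wrapped values
        position += step_size
    return y[:position]                # only the computed samples

# Example with arbitrary data; the length of x is chosen so that the blocks tile it
# exactly, so every fully overlapped output sample is produced:
rng = np.random.default_rng(2)
h = rng.standard_normal(33)            # M = 33, so N = 256 and step_size = 224
x = rng.standard_normal(256 + 10 * 224)
assert np.allclose(overlap_save(x, h), np.convolve(x, h, mode="valid"))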

Efficiency considerations

[[Image:FFT_size_vs_filter_length_for_Overlap-add_convolution.svg|thumb|right|Fig 2: The values of N that minimize Eq.3, as a function of filter length M.]]

When the DFT and IDFT are implemented by the FFT algorithm, the pseudocode above requires about {{nowrap|N (log<sub>2</sub>(N) + 1)}} complex multiplications for the FFT, product of arrays, and IFFT.{{efn-ua

|1=The Cooley–Tukey FFT algorithm for N=2<sup>k</sup> needs (N/2) log<sub>2</sub>(N) complex multiplications – see FFT – Definition and speed

}} Each iteration produces {{nowrap|N-M+1}} output samples, so the number of complex multiplications per output sample is about:

{{Equation box 1

|indent= |cellpadding= 0 |border= 0 |background colour=white

|equation={{NumBlk|:|\frac{N (\log_2(N) + 1)}{N-M+1}.\,    

|{{EquationRef|Eq.3}}}}}}

For example, when M=201 and N=1024, {{EquationNote|Eq.3}} equals 13.67, whereas direct evaluation of {{EquationNote|Eq.1}} would require up to 201 complex multiplications per output sample, the worst case being when both x and h are complex-valued. Also note that for any given M, {{EquationNote|Eq.3}} has a minimum with respect to N. Figure 2 is a graph of the values of N that minimize {{EquationNote|Eq.3}} for a range of filter lengths (M).
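
These figures are easy to reproduce. The following sketch evaluates Eq.3 for M = 201 over a range of power-of-2 FFT sizes (the candidate range is an arbitrary choice):

import numpy as np

def cost_per_output(N, M):
    return N * (np.log2(N) + 1) / (N - M + 1)    # Eq.3

M = 201
for N in 2 ** np.arange(8, 15):                  # N = 256, 512, ..., 16384
    print(N, round(float(cost_per_output(N, M)), 2))
# N = 1024 gives 13.67, as quoted above; over powers of 2, the minimum for
# M = 201 is about 13.3, at N = 2048.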

Instead of {{EquationNote|Eq.1}}, we can also consider applying {{EquationNote|Eq.2}} to a long sequence of length N_x samples. The total number of complex multiplications would be:

:N_x\cdot (\log_2(N_x) + 1).

Comparatively, the number of complex multiplications required by the pseudocode algorithm is:

:N_x\cdot (\log_2(N) + 1)\cdot \frac{N}{N-M+1}.

Hence the cost of the overlap–save method scales almost as O\left(N_x\log_2 N\right) while the cost of a single, large circular convolution is almost O\left(N_x\log_2 N_x \right).
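
For a concrete comparison with hypothetical sizes (N_x = 2^20 input samples, M = 201, N = 1024):

import numpy as np

Nx, N, M = 2**20, 1024, 201
single_conv = Nx * (np.log2(Nx) + 1)                   # one large circular convolution
piecewise = Nx * (np.log2(N) + 1) * N / (N - M + 1)    # pseudocode algorithm
print(f"{single_conv / 1e6:.1f} vs {piecewise / 1e6:.1f} million complex multiplications")
# prints: 22.0 vs 14.3 million complex multiplications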

Overlap–discard

Overlap–discard and overlap–scrap are less commonly used labels for the same method described here. However, these labels arguably distinguish the method from overlap–add better than "overlap–save" does, because both methods "save", but only one of them discards. "Save" merely refers to the fact that M − 1 input (or output) samples from segment k are needed to process segment k + 1.

Extending overlap–save

The overlap–save algorithm can be extended to include other common operations of a system:{{efn-ua

|Carlin et al. 1999, p 31, col 20.

}}

  • additional IFFT channels can be processed more cheaply than the first by reusing the forward FFT (see the sketch after this list)
  • sampling rates can be changed by using different sized forward and inverse FFTs
  • frequency translation (mixing) can be accomplished by rearranging frequency bins
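
As an illustration of the first point, the following sketch (arbitrary filters and data) passes one input segment through two filter channels while computing the forward FFT only once:

import numpy as np

rng = np.random.default_rng(4)
M, N = 16, 256
h1 = rng.standard_normal(M)                      # channel-1 FIR filter
h2 = rng.standard_normal(M)                      # channel-2 FIR filter
H1, H2 = np.fft.fft(h1, N), np.fft.fft(h2, N)    # precomputed filter spectra
seg = rng.standard_normal(N)                     # one overlapped input segment

X = np.fft.fft(seg)                              # forward FFT, computed once per segment
y1 = np.fft.ifft(X * H1)[M - 1 :]                # channel 1: spectrum multiply, IFFT, discard M-1
y2 = np.fft.ifft(X * H2)[M - 1 :]                # channel 2 reuses the same X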

See also

Notes

{{notelist-ua}}

References

{{reflist|refs=

{{cite book |author=Harris, F.J. |year=1987 |title=Handbook of Digital Signal Processing |editor=D.F.Elliot |location=San Diego |publisher=Academic Press |pages=633–699 |isbn=0122370759

}}

{{cite web|url=https://www.dsprelated.com/freebooks/sasp/Overlap_Add_OLA_STFT_Processing.html|title=Overlap-Add (OLA) STFT Processing {{!}} Spectral Audio Signal Processing |website=www.dsprelated.com |access-date=2024-03-02 |quote=The name overlap-save comes from the fact that L-1 samples of the previous frame [here: M-1 samples of the current frame] are saved for computing the next frame.

}}

{{cite book |author=Frerking, Marvin |year=1994 |title=Digital Signal Processing in Communication Systems |location=New York |publisher=Van Nostrand Reinhold |isbn=0442016166

}}

{{cite journal

| last =Borgerding |first=Mark |title=Turning Overlap–Save into a Multiband Mixing, Downsampling Filter Bank

| journal =IEEE Signal Processing Magazine |issue= March 2006 |pages=158–161 |year=2006

|volume=23 |doi=10.1109/MSP.2006.1598092 | url =https://ieeexplore.ieee.org/document/1598092

}}

}}

{{refbegin}}

  • {{Cite book

| ref=refRabiner

| author1=Rabiner, Lawrence R.

| author2=Gold, Bernard

| title=Theory and application of digital signal processing

| year=1975

| publisher=Prentice-Hall

| location=Englewood Cliffs, N.J.

| isbn=0-13-914101-4

| chapter=2.25

| pages=[https://archive.org/details/theoryapplicatio00rabi/page/63 63–67]

| chapter-url-access=registration

| chapter-url=https://archive.org/details/theoryapplicatio00rabi/page/67

}}

  • {{cite patent

|ref=refCarlin

|title=Wideband communication intercept and direction finding device using hyperchannelization

|invent1=Carlin, Joe

|invent2=Collins, Terry

|invent3=Hays, Peter

|invent4=Hemmerdinger, Barry E. Kellogg, Robert L. Kettig, Robert L. Lemmon, Bradley K. Murdock, Thomas E. Tamaru, Robert S. Ware, Stuart M.

|pubdate=1999-12-10

|fdate=1999-12-10

|gdate=2005-05-24

|country=US

|status=patent

|number=6898235

}}, also available at https://patentimages.storage.googleapis.com/4d/39/2a/cec2ae6f33c1e7/US6898235.pdf

{{refend}}