Otsu's method

{{short description|Automatic image thresholding method}}

[[File:Image processing post otsus algorithm.jpg|thumb]]

[[File:Image processing pre otsus algorithm.jpg|thumb]]

In computer vision and image processing, Otsu's method, named after {{Nihongo|Nobuyuki Otsu|大津展之|Ōtsu Nobuyuki}}, is used to perform automatic image thresholding.{{cite journal

|author1=M. Sezgin |author2=B. Sankur

|name-list-style=amp | year = 2004

| title = Survey over image thresholding techniques and quantitative performance evaluation

| volume = 13

| issue = 1

| pages = 146–165

| journal = Journal of Electronic Imaging

| doi = 10.1117/1.1631315

|bibcode=2004JEI....13..146S

}} In the simplest form, the algorithm returns a single intensity threshold that separates pixels into two classes, foreground and background. This threshold is determined by minimizing intra-class intensity variance, or equivalently, by maximizing inter-class variance.{{cite journal

| author = Nobuyuki Otsu

| year = 1979

| title = A threshold selection method from gray-level histograms

| volume = 9

| issue = 1

| pages = 62–66

| journal = IEEE Transactions on Systems, Man, and Cybernetics

| doi = 10.1109/TSMC.1979.4310076

| s2cid = 15326934

}} Otsu's method is a one-dimensional discrete analogue of Fisher's discriminant analysis, is related to Jenks optimization method, and is equivalent to a globally optimal k-means{{Cite journal|last=Liu|first=Dongju|date=2009|title=Otsu method and K-means|journal=Ninth International Conference on Hybrid Intelligent Systems IEEE|volume=1|pages=344–349}} performed on the intensity histogram. The extension to multi-level thresholding was described in the original paper, and computationally efficient implementations have since been proposed.{{Cite journal|last=Liao|first=Ping-Sung|date=2001|title=A fast algorithm for multilevel thresholding|url=https://pdfs.semanticscholar.org/b809/14dbbee9f6b2455742d8117417731e6ecf12.pdf|archive-url=https://web.archive.org/web/20190624180800/https://pdfs.semanticscholar.org/b809/14dbbee9f6b2455742d8117417731e6ecf12.pdf|url-status=dead|archive-date=2019-06-24|journal=J. Inf. Sci. Eng.|volume=17|issue=5|pages=713–727|doi=10.6688/JISE.2001.17.5.1 |s2cid=9609430 }}{{Cite journal|last=Huang|first=Deng-Yuan|date=2009|title=Optimal multi-level thresholding using a two-stage Otsu optimization approach|journal=Pattern Recognition Letters|volume=30|issue=3|pages=275–284|doi=10.1016/j.patrec.2008.10.003|bibcode=2009PaReL..30..275H }}

== Otsu's method ==

[[File:Otsu's Method Visualization.gif|thumb]]

The algorithm exhaustively searches for the threshold that minimizes the intra-class variance, defined as a weighted sum of variances of the two classes:

:\sigma^2_w(t)=\omega_0(t)\sigma^2_0(t)+\omega_1(t)\sigma^2_1(t)

The weights \omega_0 and \omega_1 are the probabilities of the two classes separated by a threshold t, and \sigma^2_0 and \sigma^2_1 are the variances of these two classes.

The class probability \omega_{0,1}(t) is computed from the L bins of the histogram:

:

\begin{align}

\omega_0(t) & =\sum_{i=0}^{t-1} p(i)\\[4pt]

\omega_1(t) & =\sum_{i=t}^{L-1} p(i)

\end{align}

For two classes, minimizing the intra-class variance is equivalent to maximizing the inter-class variance:

:

\begin{align}

\sigma^2_b(t) & =\sigma^2-\sigma^2_w(t)=\omega_0(t)(\mu_0-\mu_T)^2+\omega_1(t)(\mu_1-\mu_T)^2 \\

& =\omega_0(t) \omega_1(t) \left[\mu_0(t)-\mu_1(t)\right]^2

\end{align}

which is expressed in terms of class probabilities \omega and class means \mu, where the class means \mu_0(t), \mu_1(t) and \mu_T are:

:

\begin{align}

\mu_0(t) & = \frac{\sum_{i=0}^{t-1} i p(i)}{\omega_0(t)} \\[4pt]

\mu_1(t) & = \frac{\sum_{i=t}^{L-1} i p(i)}{\omega_1(t)} \\

\mu_T & = \sum_{i=0}^{L-1} ip(i)

\end{align}

The following relations can be easily verified:

:

\begin{align}

\omega_0\mu_0+\omega_1\mu_1 & = \mu_T \\

\omega_0+\omega_1 & =1

\end{align}

The class probabilities and class means can be computed iteratively; this idea yields an effective algorithm.
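For example, when the threshold is advanced from t to t+1, only bin t changes class, so the class probabilities and the sums defining the class means can be updated in constant time; the following recurrences follow directly from the definitions above:

:
\begin{align}
\omega_0(t+1) & = \omega_0(t) + p(t) \\
\omega_1(t+1) & = \omega_1(t) - p(t) \\[4pt]
\omega_0(t+1)\,\mu_0(t+1) & = \omega_0(t)\,\mu_0(t) + t\,p(t) \\
\omega_1(t+1)\,\mu_1(t+1) & = \omega_1(t)\,\mu_1(t) - t\,p(t)
\end{align}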

=== Algorithm ===

  1. Compute the histogram and probabilities of each intensity level.
  2. Set up initial \omega_i(0) and \mu_i(0).
  3. Step through all possible thresholds t = 1, \ldots, maximum intensity.
  4. Update \omega_i and \mu_i.
  5. Compute \sigma^2_b(t).
  6. The desired threshold corresponds to the maximum of \sigma^2_b(t) (see the sketch below).
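A minimal histogram-based sketch of these six steps in Python (the function name otsu_threshold and its input hist, a 256-bin array of pixel counts, are illustrative assumptions rather than part of the original formulation):

<syntaxhighlight lang="python">
import numpy as np

def otsu_threshold(hist):
    """Illustrative sketch: Otsu threshold from a 1-D array of 256 bin counts."""
    p = hist / hist.sum()                     # step 1: probabilities p(i)
    mu_T = np.dot(np.arange(p.size), p)       # total mean
    w0, sum0 = 0.0, 0.0                       # step 2: initial omega_0 and partial sum for mu_0
    best_t, best_sigma_b = 0, -1.0
    for t in range(1, p.size):                # step 3: sweep all thresholds
        w0 += p[t - 1]                        # step 4: update omega_0 ...
        sum0 += (t - 1) * p[t - 1]            #         ... and the weighted sum defining mu_0
        w1 = 1.0 - w0
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0, mu1 = sum0 / w0, (mu_T - sum0) / w1
        sigma_b = w0 * w1 * (mu0 - mu1) ** 2  # step 5: inter-class variance
        if sigma_b > best_sigma_b:            # step 6: keep the maximizer
            best_sigma_b, best_t = sigma_b, t
    return best_t
</syntaxhighlight>

For an 8-bit image stored as a NumPy array, such a histogram could be obtained, for example, with np.bincount(image.ravel(), minlength=256).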

=== MATLAB implementation ===

histogramCounts is a 256-element histogram of the different gray levels of a grayscale image (typical for 8-bit images).

level is the threshold for the image (double).

<syntaxhighlight lang="matlab">
function level = otsu(histogramCounts)
total = sum(histogramCounts); % total number of pixels in the image
%% OTSU automatic thresholding
top = 256;
sumB = 0;       % weighted sum of background intensities
wB = 0;         % number of pixels in the background class
maximum = 0.0;  % largest inter-class variance found so far
sum1 = dot(0:top-1, histogramCounts); % weighted sum over the whole histogram
for ii = 1:top
    wF = total - wB; % number of pixels in the foreground class
    if wB > 0 && wF > 0
        mF = (sum1 - sumB) / wF; % mean of the foreground class
        val = wB * wF * ((sumB / wB) - mF) * ((sumB / wB) - mF); % inter-class variance (up to a constant factor)
        if ( val >= maximum )
            level = ii;
            maximum = val;
        end
    end
    wB = wB + histogramCounts(ii);
    sumB = sumB + (ii-1) * histogramCounts(ii);
end
end
</syntaxhighlight>

MATLAB has the built-in functions graythresh() and multithresh() in the Image Processing Toolbox, which implement Otsu's method and multi-level Otsu's method, respectively.

=== Python implementation ===

This implementation requires the NumPy library.

<syntaxhighlight lang="python">
import numpy as np


def otsu_intraclass_variance(image, threshold):
    """
    Otsu's intra-class variance.
    If all pixels are above or below the threshold, this will throw a warning that can safely be ignored.
    """
    return np.nansum(
        [
            np.mean(cls) * np.var(image, where=cls)
            #   weight   ·  intra-class variance
            for cls in [image >= threshold, image < threshold]
        ]
    )
    # NaNs only arise if the class is empty, in which case the contribution should be zero, which `nansum` accomplishes.


# Random image for demonstration:
image = np.random.randint(2, 253, size=(50, 50))

otsu_threshold = min(
    range(np.min(image) + 1, np.max(image)),
    key=lambda th: otsu_intraclass_variance(image, th),
)
</syntaxhighlight>

Python libraries dedicated to image processing, such as OpenCV and scikit-image, provide built-in implementations of the algorithm.
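For illustration, typical calls look roughly as follows (assuming image is an 8-bit, single-channel NumPy array; exact signatures and defaults may vary between library versions):

<syntaxhighlight lang="python">
import cv2
import numpy as np
from skimage.filters import threshold_otsu

image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image

# OpenCV: with the THRESH_OTSU flag, the passed threshold (0) is ignored and an
# Otsu threshold is computed from the image histogram; it is also returned.
otsu_t, binary = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# scikit-image: returns the threshold value; binarization is then a comparison.
t = threshold_otsu(image)
foreground = image > t
</syntaxhighlight>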

== Limitations and variations ==

Otsu's method performs well when the histogram has a bimodal distribution with a deep and sharp valley between the two peaks.{{Cite journal |last1=Kittler |first1=J. |last2=Illingworth |first2=J. |date=September 1985 |title=On threshold selection using clustering criteria |url=http://dx.doi.org/10.1109/tsmc.1985.6313443 |journal=IEEE Transactions on Systems, Man, and Cybernetics |volume=SMC-15 |issue=5 |pages=652–655 |doi=10.1109/tsmc.1985.6313443 |s2cid=30272350 |issn=0018-9472}}

Like all other global thresholding methods, Otsu's method performs badly in the case of heavy noise, small object size, inhomogeneous lighting, or intra-class variance larger than inter-class variance.{{cite journal |author=Lee, Sang Uk and Chung, Seok Yoon and Park, Rae Hong |year=1990 |title=A comparative performance study of several global thresholding techniques for segmentation |journal=Computer Vision, Graphics, and Image Processing |volume=52 |issue=2 |pages=171–190 |doi=10.1016/0734-189x(90)90053-x}} In those cases, local adaptations of Otsu's method have been developed.

Moreover, the mathematical grounding of Otsu's method models the histogram of the image as a mixture of two Normal distributions with equal variance and equal size.{{Cite journal |last1=Kurita |first1=T. |last2=Otsu |first2=N. |last3=Abdelmalek |first3=N. |date=October 1992 |title=Maximum likelihood thresholding based on population mixture models |url=http://dx.doi.org/10.1016/0031-3203(92)90024-d |journal=Pattern Recognition |volume=25 |issue=10 |pages=1231–1240 |doi=10.1016/0031-3203(92)90024-d |bibcode=1992PatRe..25.1231K |issn=0031-3203}} Otsu's thresholding may, however, yield satisfactory results even when these assumptions are not met, in the same way that statistical tests (to which Otsu's method is heavily connected{{Cite journal |last1=Jing-Hao Xue |last2=Titterington |first2=D. M. |date=August 2011 |title=t-Tests, F-Tests and Otsu's Methods for Image Thresholding |url=http://dx.doi.org/10.1109/tip.2011.2114358 |journal=IEEE Transactions on Image Processing |volume=20 |issue=8 |pages=2392–2396 |doi=10.1109/tip.2011.2114358 |pmid=21324779 |s2cid=10561633 |issn=1057-7149}}) can perform correctly even when the working assumptions are not fully satisfied.

Several variations of Otsu's methods have been proposed to account for more severe deviations from these assumptions, such as the Kittler-Illingworth method.{{Cite journal |last1=Kittler |first1=J. |last2=Illingworth |first2=J. |date=1986-01-01 |title=Minimum error thresholding |url=https://dx.doi.org/10.1016/0031-3203%2886%2990030-0 |journal=Pattern Recognition |language=en |volume=19 |issue=1 |pages=41–47 |doi=10.1016/0031-3203(86)90030-0 |bibcode=1986PatRe..19...41K |issn=0031-3203}}

=== A variation for noisy images ===

A popular local adaptation is the two-dimensional Otsu's method, which performs better for the object segmentation task in noisy images. Here, the intensity value of a given pixel is compared with the average intensity of its immediate neighborhood to improve segmentation results.{{cite journal

| author = Jianzhuang, Liu and Wenqing, Li and Yupeng, Tian

| year = 1991

| title = Automatic thresholding of gray-level pictures using two-dimension Otsu method

| pages = 325–327

| journal = Circuits and Systems, 1991. Conference Proceedings, China., 1991 International Conference on

}}

At each pixel, the average gray-level value of its neighborhood is calculated. Let the gray level of a given pixel be divided into L discrete values, and let the average gray level of its neighborhood also be divided into the same L values. Then a pair (i, j) is formed: the pixel gray level i and the average of the neighborhood j. Each pair belongs to one of the L\times L possible 2-dimensional bins. The total number of occurrences (frequency), f_{ij}, of a pair (i, j), divided by the total number of pixels in the image N, defines the joint probability mass function in a 2-dimensional histogram:

:P_{ij} = \frac{f_{ij}} N, \qquad \sum_{i=0}^{L-1}\sum_{j=0}^{L-1} P_{ij}=1
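For illustration, such a joint histogram can be assembled with standard array operations; the following sketch assumes an 8-bit image, a 3\times 3 averaging neighborhood, and variable names chosen only for this example:

<syntaxhighlight lang="python">
import numpy as np
from scipy.ndimage import uniform_filter

L = 256
image = np.random.randint(0, L, size=(128, 128)).astype(np.uint8)  # stand-in image

# Average gray level of the 3x3 neighborhood of every pixel.
neighborhood_mean = uniform_filter(image.astype(float), size=3)

# 2-D histogram of (pixel gray level i, neighborhood average j), then the joint
# probability mass function P_ij obtained by dividing by the number of pixels N.
f, _, _ = np.histogram2d(
    image.ravel(), neighborhood_mean.ravel(), bins=L, range=[[0, L], [0, L]]
)
P = f / image.size  # sums to 1
</syntaxhighlight>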

The 2-dimensional Otsu's method is then developed on the basis of this 2-dimensional histogram as follows.

The probabilities of two classes can be denoted as:

:

\begin{align}

\omega_0 & = \sum_{i=0}^{s-1} \sum_{j=0}^{t-1} P_{ij} \\

\omega_1 & = \sum_{i=s}^{L-1} \sum_{j=t}^{L-1} P_{ij}

\end{align}

The intensity mean value vectors of two classes and total mean vector can be expressed as follows:

:

\begin{align}

\mu_0 & =[\mu_{0i}, \mu_{0j}]^T = \left[\sum_{i=0}^{s-1} \sum_{j=0}^{t-1} i \frac{P_{ij}}{\omega_0}, \sum_{i=0}^{s-1}\sum_{j=0}^{t-1} j \frac{P_{ij}}{ \omega_0} \right]^T \\

\mu_1 & =[\mu_{1i}, \mu_{1j}]^T = \left[\sum_{i=s}^{L-1}\sum_{j=t}^{L-1} i \frac{P_{ij}}{\omega_1}, \sum_{i=s}^{L-1}\sum_{j=t}^{L-1} j \frac{P_{ij}}{\omega_1} \right]^T \\

\mu_T & =[\mu_{Ti}, \mu_{Tj}]^T = \left[\sum_{i=0}^{L-1} \sum_{j=0}^{L-1} i P_{ij}, \sum_{i=0}^{L-1}\sum_{j=0}^{L-1} j P_{ij}\right]^T

\end{align}

In most cases the probability mass off the diagonal (that is, in the two quadrants not covered by \omega_0 and \omega_1) will be negligible, so it is easy to verify:

:\omega_0+\omega_1 \cong 1

:\omega_0\mu_0+\omega_1\mu_1 \cong \mu_T

The inter-class scatter matrix is defined as

:S_b = \sum_{k=0}^1\omega_k[(\mu_k-\mu_T)(\mu_k-\mu_T)^T]

The trace of the scatter matrix can be expressed as

:

\begin{align}

& \operatorname{tr}(S_b) \\[4pt]

= {} & \omega_0[(\mu_{0i}-\mu_{Ti})^2 + (\mu_{0j}-\mu_{Tj})^2] + \omega_1[(\mu_{1i}-\mu_{Ti})^2 + (\mu_{1j}-\mu_{Tj})^2] \\[4pt]

= {} & \frac{(\mu_{Ti}\omega_0-\mu_i)^2 + (\mu_{Tj}\omega_0-\mu_j)^2}{\omega_0(1-\omega_0)}

\end{align}

where

:\mu_i = \sum_{i=0}^{s-1}\sum_{j=0}^{t-1}iP_{ij}

:\mu_j = \sum_{i=0}^{s-1}\sum_{j=0}^{t-1}jP_{ij}

Similarly to the one-dimensional Otsu's method, the optimal threshold (s,t) is obtained by maximizing \operatorname{tr}(S_b); the second equality above follows from the approximate relations \omega_0+\omega_1 \cong 1 and \omega_0\mu_0+\omega_1\mu_1 \cong \mu_T together with the definitions of \mu_i and \mu_j.

==== Algorithm ====

The thresholds s and t are obtained iteratively, similarly to the one-dimensional Otsu's method. The values of s and t are varied until the maximum of \operatorname{tr}(S_b) is obtained, that is:

 max, s, t = 0;
 for ss: 0 to L-1 do
     for tt: 0 to L-1 do
         evaluate tr(S_b);
         if tr(S_b) > max
             max = tr(S_b);
             s = ss;
             t = tt;
         end if
     end for
 end for
 return s, t;

Notice that for evaluating \operatorname{tr}(S_b), we can use a fast recursive dynamic programming algorithm to improve time performance.{{cite book

|author1=Zhang, Jun |author2=Hu, Jinglu

|title=2008 International Conference on Computer Science and Software Engineering

|chapter=Image Segmentation Based on 2D Otsu Method with Histogram Analysis

|name-list-style=amp | year = 2008

| volume = 6

| pages = 105–108

|doi=10.1109/CSSE.2008.206

|isbn=978-0-7695-3336-0

|s2cid=14982308

}} However, even with the dynamic programming approach, the 2D Otsu's method still has a large time complexity. Therefore, much research has been done to reduce the computation cost.{{cite journal

| author = Zhu, Ningbo and Wang, Gang and Yang, Gaobo and Dai, Weiming

| year = 2009

| title = A fast 2d otsu thresholding algorithm based on improved histogram

| pages = 1–5

| journal = Pattern Recognition, 2009. CCPR 2009. Chinese Conference on

}}

If summed-area tables are used to build the three tables (the sum over P_{ij}, the sum over i \, P_{ij}, and the sum over j \, P_{ij}), then the runtime complexity is the maximum of O(N_{pixels}) and O(N_{bins}^2). Note that if only a coarse threshold resolution is needed, N_{bins} can be reduced.

{{See also|Summed-area table}}
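The following Python sketch illustrates this idea; the function name otsu_2d is hypothetical, and it assumes a normalized L\times L joint histogram P as defined above, returning the threshold pair that maximizes \operatorname{tr}(S_b):

<syntaxhighlight lang="python">
import numpy as np

def otsu_2d(P):
    """Illustrative sketch: exhaustive 2-D Otsu search over a normalized L-by-L histogram P."""
    L = P.shape[0]
    i = np.arange(L)[:, None]
    j = np.arange(L)[None, :]
    # Summed-area tables of P, i*P and j*P: entry (s-1, t-1) holds the sum over
    # the block i < s, j < t, i.e. omega_0(s, t), mu_i(s, t) and mu_j(s, t).
    w0 = P.cumsum(axis=0).cumsum(axis=1)
    m_i = (i * P).cumsum(axis=0).cumsum(axis=1)
    m_j = (j * P).cumsum(axis=0).cumsum(axis=1)
    mu_Ti, mu_Tj = (i * P).sum(), (j * P).sum()
    # tr(S_b) from the closed form above, guarding against empty classes.
    denom = w0 * (1.0 - w0)
    with np.errstate(divide="ignore", invalid="ignore"):
        tr = ((mu_Ti * w0 - m_i) ** 2 + (mu_Tj * w0 - m_j) ** 2) / denom
    tr[denom <= 0] = -np.inf
    s, t = np.unravel_index(np.argmax(tr), tr.shape)
    return s + 1, t + 1  # convert block indices back to thresholds
</syntaxhighlight>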

==== MATLAB implementation ====

Function inputs and output:

hists is a 256\times 256 2D histogram of grayscale value and neighborhood average grayscale value pairs.

total is the number of pairs in the given image; it equals the sum of all entries of the 2D histogram.

threshold is the threshold obtained.

<syntaxhighlight lang="matlab">
function threshold = otsu_2D(hists, total)
maximum = 0.0;
threshold = 0;
helperVec = 0:255;
mu_t0 = sum(sum(repmat(helperVec',1,256).*hists));
mu_t1 = sum(sum(repmat(helperVec,256,1).*hists));
p_0 = zeros(256);
mu_i = p_0;
mu_j = p_0;
for ii = 1:256
    for jj = 1:256
        if jj == 1
            if ii == 1
                p_0(1,1) = hists(1,1);
            else
                p_0(ii,1) = p_0(ii-1,1) + hists(ii,1);
                mu_i(ii,1) = mu_i(ii-1,1)+(ii-1)*hists(ii,1);
                mu_j(ii,1) = mu_j(ii-1,1);
            end
        elseif ii == 1
            % first row: there is no row ii-1, so only accumulate along jj
            p_0(1,jj) = p_0(1,jj-1) + hists(1,jj);
            mu_i(1,jj) = mu_i(1,jj-1);                      % (ii-1) = 0 adds nothing
            mu_j(1,jj) = mu_j(1,jj-1) + (jj-1)*hists(1,jj);
        else
            % summed-area table recurrences
            p_0(ii,jj) = p_0(ii,jj-1)+p_0(ii-1,jj)-p_0(ii-1,jj-1)+hists(ii,jj);
            mu_i(ii,jj) = mu_i(ii,jj-1)+mu_i(ii-1,jj)-mu_i(ii-1,jj-1)+(ii-1)*hists(ii,jj);
            mu_j(ii,jj) = mu_j(ii,jj-1)+mu_j(ii-1,jj)-mu_j(ii-1,jj-1)+(jj-1)*hists(ii,jj);
        end

        if (p_0(ii,jj) == 0)
            continue;
        end
        if (p_0(ii,jj) == total)
            break;
        end
        tr = ((mu_i(ii,jj)-p_0(ii,jj)*mu_t0)^2 + (mu_j(ii,jj)-p_0(ii,jj)*mu_t1)^2)/(p_0(ii,jj)*(1-p_0(ii,jj)));

        if ( tr >= maximum )
            threshold = ii;
            maximum = tr;
        end
    end
end
end
</syntaxhighlight>

=== A variation for unbalanced images ===

When the gray levels of the classes of the image can be considered as Normal distributions but with unequal sizes and/or unequal variances, the assumptions of the Otsu algorithm are not met. The Kittler-Illingworth algorithm (also known as minimum-error thresholding) is a variation of Otsu's method designed to handle such cases. There are several ways to describe this algorithm mathematically. One of them is to consider that, for each threshold being tested, the parameters of the two Normal distributions in the resulting binary image are estimated by maximum likelihood estimation given the data.
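One common way of writing the resulting minimum-error criterion, which is then minimized over the candidate threshold t (with \omega_{0,1}(t) and \sigma_{0,1}(t) the class probabilities and class standard deviations defined as in the sections above), is:

:J(t) = 1 + 2\left[\omega_0(t)\ln\sigma_0(t)+\omega_1(t)\ln\sigma_1(t)\right] - 2\left[\omega_0(t)\ln\omega_0(t)+\omega_1(t)\ln\omega_1(t)\right]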

While this algorithm could seem superior to Otsu's method, it introduces new parameters to be estimated, and this can result in the algorithm being over-parametrized and thus unstable. In many cases where the assumptions from Otsu's method seem at least partially valid, it may be preferable to favor Otsu's method over the Kittler-Illingworth algorithm, following Occam's razor.

[[File:Triclass separation of an image into three classes.png|thumb]]

=== Iterative triclass thresholding based on Otsu's method ===

One limitation of Otsu's method is that it cannot segment weak objects, as the method searches for a single threshold to separate an image into two classes, namely foreground and background, in one shot. Because Otsu's method seeks to segment an image with one threshold, it tends to bias toward the class with the larger variance.{{cite journal

| author = Xu, Xiangyang, Xu, Shengzhou, Jin, Lianghai, and Song, Enmin

| year = 2011

| title = Characteristic analysis of Otsu threshold and its applications

| volume = 32

| number = 7

| pages = 956–61

| journal = Pattern Recognition Letters

| doi =10.1016/j.patrec.2011.01.021

| bibcode = 2011PaReL..32..956X

}}

The iterative triclass thresholding algorithm is a variation of Otsu's method designed to circumvent this limitation.{{cite journal

| author = Cai, Hongmin, Yang, Zhong, Cao, Xinhua, Xia, Weiming, and Xu, Xiaoyin.

| year = 2014

| title = A new iterative triclass thresholding technique in image segmentation

| volume = 23

| number = 3

| pages = 1038–46

| journal = IEEE Transactions on Image Processing

| doi=10.1109/TIP.2014.2298981

| pmid = 24474373

| s2cid = 2242995

}}

Given an image, at the first iteration the triclass thresholding algorithm calculates a threshold \eta_1 using Otsu's method. Based on threshold \eta_1, the algorithm calculates the mean \mu_{upper}^{[1]} of pixels above \eta_1 and the mean \mu_{lower}^{[1]} of pixels below \eta_1. Then the algorithm tentatively separates the image into three classes (hence the name triclass), with the pixels above the upper mean \mu_{upper}^{[1]} designated as the temporary foreground F class and pixels below the lower mean \mu_{lower}^{[1]} designated as the temporary background B class. Pixels that fall between [\mu_{lower}^{[1]}, \mu_{upper}^{[1]}] are denoted as a to-be-determined (TBD) region. This completes the first iteration of the algorithm.

For the second iteration, Otsu's method is applied to the TBD region only, to obtain a new threshold \eta_2. The algorithm then calculates the mean \mu_{upper}^{[2]} of pixels in the TBD region that are above \eta_2 and the mean \mu_{lower}^{[2]} of pixels in the TBD region that are below \eta_2. Pixels in the TBD region that are greater than the upper mean \mu_{upper}^{[2]} are added to the temporary foreground F, and pixels in the TBD region that are less than the lower mean \mu_{lower}^{[2]} are added to the temporary background B. Similarly, a new TBD region is obtained, which contains all the pixels falling between [\mu_{lower}^{[2]}, \mu_{upper}^{[2]}]. This completes the second iteration.

The algorithm then proceeds to the next iteration to process the new TBD region until it meets the stopping criterion: when the difference between the Otsu thresholds computed in two consecutive iterations is less than a small number, the iterations stop. For the last iteration, pixels above \eta_n are assigned to the foreground class and pixels below the threshold are assigned to the background class. At the end, all the temporary foreground pixels are combined to constitute the final foreground, and all the temporary background pixels are combined to become the final background. In implementation, the algorithm involves no parameters other than the stopping criterion for terminating the iterations. By iteratively applying Otsu's method and gradually shrinking the TBD region, the algorithm can obtain a result that preserves weak objects better than the standard Otsu's method does.
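A compact sketch of this procedure is given below; it is not the authors' reference implementation, and the function name, the tolerance tol, and the reuse of scikit-image's threshold_otsu for the per-iteration Otsu step are illustrative choices:

<syntaxhighlight lang="python">
import numpy as np
from skimage.filters import threshold_otsu  # standard Otsu, applied at each iteration

def iterative_triclass(image, tol=1e-3, max_iter=100):
    """Illustrative sketch: boolean foreground mask via iterative triclass thresholding."""
    foreground = np.zeros(image.shape, dtype=bool)
    tbd = np.ones(image.shape, dtype=bool)   # to-be-determined (TBD) region
    prev_eta = None
    for _ in range(max_iter):
        values = image[tbd]
        eta = threshold_otsu(values)         # Otsu threshold of the current TBD region
        if prev_eta is not None and abs(eta - prev_eta) < tol:
            # Last iteration: split the remaining TBD pixels at eta.
            foreground |= tbd & (image > eta)
            break
        mu_upper = values[values > eta].mean()   # mean of TBD pixels above eta
        mu_lower = values[values <= eta].mean()  # mean of TBD pixels below eta
        foreground |= tbd & (image > mu_upper)   # definite foreground so far
        tbd &= (image >= mu_lower) & (image <= mu_upper)  # shrink the TBD region
        prev_eta = eta
    return foreground
</syntaxhighlight>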

== References ==

{{reflist}}