Image histogram equalization algorithm. Basic image processing operations. Geometric transformation and image registration

Hi all. My supervisor and I are currently preparing a monograph for publication, in which we try to explain the basics of digital image processing in simple terms. This article covers a very simple yet very effective technique for improving image quality: histogram equalization.

For simplicity, let's start with monochrome images (i.e., images containing information only about the brightness, but not about the color, of the pixels). An image histogram is a discrete function H defined on the set of values [0, 2^bpp − 1], where bpp is the number of bits allocated to encode the brightness of one pixel. Although it is not mandatory, histograms are often normalized to the range [0, 1] by dividing each value H[i] by the total number of pixels in the image. Table 1 shows examples of test images and the histograms built from them:
Table 1. Images and their histograms

Having carefully studied the corresponding histogram, we can draw some conclusions about the original image itself. For example, histograms of very dark images are characterized by non-zero values concentrated near the zero brightness levels, while for very light images, on the contrary, all non-zero values are concentrated on the right side of the histogram.
Intuitively, we can conclude that the image most convenient for human perception is one whose histogram is close to a uniform distribution. That is, to improve the visual quality of an image, we need to apply a transformation such that the histogram of the result contains all possible brightness values in approximately equal amounts. This transformation is called histogram equalization and can be performed with the code in Listing 1.
Listing 1. Implementing a histogram equalization procedure

  procedure TCGrayscaleImage.HistogramEqualization;
  const
    k = 255;
  var
    h: array [0 .. k] of double;
    i, j: word;
  begin
    { build the brightness histogram }
    for i := 0 to k do
      h[i] := 0;
    for i := 0 to self.Height - 1 do
      for j := 0 to self.Width - 1 do
        h[round(k * self.Pixels[i, j])] := h[round(k * self.Pixels[i, j])] + 1;
    { normalize it and accumulate it into a distribution function }
    for i := 0 to k do
      h[i] := h[i] / (self.Height * self.Width);
    for i := 1 to k do
      h[i] := h[i - 1] + h[i];
    { map every pixel through the cumulative distribution }
    for i := 0 to self.Height - 1 do
      for j := 0 to self.Width - 1 do
        self.Pixels[i, j] := h[round(k * self.Pixels[i, j])];
  end;

As a result of histogram equalization, the dynamic range of the image is in most cases significantly expanded, which makes it possible to reveal previously unnoticed details. This effect is especially pronounced on dark images, as shown in Table 2. In addition, one more important feature of the equalization procedure is worth noting: unlike most filters and gradation transformations, which require parameters to be set (apertures and gradation constants), histogram equalization can be performed in a fully automatic mode, without operator participation.
Table 2. Images and their histograms after equalization


You can easily see that the histograms after equalization have noticeable discontinuities. This is because the dynamic range of the output image is wider than that of the original. Obviously, in this case the mapping in Listing 1 cannot provide non-zero values in all histogram bins. If you still need a more natural-looking output histogram, you can randomly distribute the values of the i-th histogram bin within some neighborhood of it.
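The smoothing trick just described can be sketched in Python (an illustrative sketch, not the monograph's Pascal code; the function name and the `spread` parameter are my own, and the jitter width is an assumption):

```python
import random

def equalize_with_dither(pixels, levels=256, spread=1.0):
    """Equalize a flat list of integer gray levels in [0, levels-1].
    After mapping through the cumulative histogram, jitter each output level
    within +/- spread gray levels to fill the gaps between histogram bins.
    spread is an illustrative parameter; spread=0 gives plain equalization."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total / n)
    out = []
    for p in pixels:
        g = cdf[p] * (levels - 1)
        g += random.uniform(-spread, spread)  # spread values around the bin
        out.append(min(levels - 1, max(0, round(g))))
    return out
```

With `spread=0` this reduces to the same cumulative-distribution mapping as Listing 1; a small non-zero spread makes the output histogram look more continuous at the cost of a little noise.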
Obviously, histogram equalization makes it easy to improve the quality of monochrome images. Naturally, I would like to apply a similar mechanism to color images.
Most inexperienced developers represent the image as three RGB color channels and try to apply the histogram equalization procedure to each channel individually. In some rare cases this succeeds, but in most cases the result is mediocre (the colors come out unnatural and cold). This is because the RGB model does not accurately reflect human color perception.
Let's think about another color space - HSI. This color model (and others related to it) is very widely used by illustrators and designers as it allows them to operate with more familiar concepts of hue, saturation and intensity.
If we consider the projection of the RGB cube in the direction of the white-black diagonal, then we get a hexagon, the corners of which correspond to the primary and secondary colors, and all gray shades (lying on the diagonal of the cube) are projected to the central point of the hexagon (see Fig. 1):

Fig. 1. Color cube projection
In order to be able to encode all the colors available in the RGB model using this model, you need to add a vertical lightness (or intensity) axis (I). The result is a hexagonal cone (Fig. 2, Fig. 3):


Fig. 2. The HSI pyramid (top view)
In this model, hue (H) is given by the angle relative to the red axis, saturation (S) characterizes the purity of the color (1 means completely pure color, and 0 corresponds to a shade of gray). At a saturation value of zero, the hue has no meaning and is undefined.


Fig. 3. The HSI pyramid
Table 3 shows the decomposition of an image into HSI components (white pixels in the hue channel correspond to zero saturation):
Table 3. HSI color space


It is believed that to improve the quality of color images it is most effective to apply the equalization procedure to the intensity channel. This is exactly what is shown in Table 4.
Table 4. Equalization of various color channels
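The intensity-channel approach can be sketched in Python. This is an illustrative sketch, not the monograph's code: the standard library's colorsys module works with HSV rather than HSI, so HSV's value channel stands in for intensity here, and the function name is my own. The principle is the same: equalize only the brightness channel and leave hue and saturation untouched.

```python
import colorsys

def equalize_value_channel(rgb_pixels):
    """Equalize only the brightness of a flat list of (r, g, b) tuples
    with 0..255 integer components, keeping hue and saturation intact."""
    n = len(rgb_pixels)
    hsv = [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
           for r, g, b in rgb_pixels]
    # histogram of the brightness channel, quantized to 256 bins
    hist = [0] * 256
    for _, _, v in hsv:
        hist[round(v * 255)] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total / n)
    out = []
    for h, s, v in hsv:
        v_eq = cdf[round(v * 255)]          # equalized brightness in [0, 1]
        r, g, b = colorsys.hsv_to_rgb(h, s, v_eq)
        out.append((round(r * 255), round(g * 255), round(b * 255)))
    return out
```

Note that gray pixels stay gray: with zero saturation, only their brightness moves, which is exactly the behavior the per-channel RGB approach fails to guarantee.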


I hope you found this material at least interesting, at most useful. Thank you.

COMPARISON OF HISTOGRAM EQUALIZATION ALGORITHMS FOR GRAYSCALE IMAGES

Alexandrovskaya A.A., Mavrin E.M.

Alexandrovskaya Anna Andreevna - Master's student; Mavrin Evgeny Mikhailovich - Master's student, Department of Information Systems and Telecommunications, Faculty of Informatics and Control Systems, Bauman Moscow State Technical University, Moscow

Abstract: this article compares digital image processing algorithms, namely histogram equalization algorithms. Three algorithms are considered: global histogram equalization (HE), adaptive histogram equalization (AHE), and contrast-limited adaptive histogram equalization (CLAHE). The result of the work described in the article is a visual comparison of the algorithms on identical images.

Keywords: image histogram, histogram equalization, computer vision, HE, AHE, CLAHE.

To improve image quality, it is necessary to increase the brightness range, contrast, sharpness, and clarity. Together, these parameters can be improved by equalizing the histogram of an image. When determining the contours of objects, in most cases the data contained in a grayscale image is sufficient. A grayscale image is an image that contains information only about the brightness, but not about the color, of the pixels. Accordingly, it is advisable to build the histogram for a grayscale image.

Let the image under consideration consist of n pixels with intensity (brightness) r in the range from 0 to 2^bpp − 1, where bpp is the number of bits allocated for coding the brightness of one pixel. In most color models, coding the brightness of one color of one pixel requires 1 byte, so the pixel intensity is defined on the set from 0 to 255. A graph of the number of pixels with intensity r against the intensity itself is called the histogram of the image. Fig. 1 shows examples of test images and the histograms built from them:

Fig. 1. Test images and their histograms
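Building such a histogram is straightforward; a minimal Python sketch (the function name is my own):

```python
def histogram(pixels, bpp=8):
    """Count how many pixels have each brightness r in [0, 2**bpp - 1].
    pixels is a flat list of integer gray levels."""
    hist = [0] * (2 ** bpp)
    for r in pixels:
        hist[r] += 1
    return hist
```

Plotting `hist` against the bin index gives exactly the graphs shown in Fig. 1.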

Obviously, having studied the corresponding histogram, one can draw conclusions about the original image. For example, the histograms of very dark images are characterized by a concentration of non-zero values around the zero brightness levels, while for light images, on the contrary, all non-zero values are gathered on the right side of the histogram.

Histogram equalization algorithms are popular algorithms for improving a processed grayscale image. In general, HE (Histogram Equalization) algorithms have a relatively low computational cost and at the same time show high efficiency. The essence of this type of algorithm is to adjust the levels of a grayscale image in accordance with the probability distribution function of the given image (1); as a result, the dynamic range of the brightness distribution increases, which improves visual characteristics such as brightness contrast, sharpness, and clarity.

p(i) = n_i / n, i = 0..255, (1)

where p(i) is the probability of the appearance of a pixel with brightness i (the normalized histogram function of the original image), n_i is the number of pixels with brightness i, n is the total number of pixels, k denotes the pixel coordinates of the processed image, and g(k) is the equalized image, obtained by mapping each pixel through the cumulative distribution built from p(i).
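The global HE mapping just described can be sketched in Python (a sketch under the definitions above; the function name is my own):

```python
def global_he(pixels):
    """Global histogram equalization of a flat list of 0..255 gray levels:
    build p(i) = n_i / n, accumulate it into a distribution function,
    and map every level through the scaled cumulative sum."""
    n = len(pixels)
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    lut, total = [], 0
    for h in hist:
        total += h                      # running sum of n_i
        lut.append(round(255 * total / n))
    return [lut[p] for p in pixels]
```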

Histogram equalization algorithms are divided into two types: local (adaptive) histogram equalization and global histogram equalization. In the global method, a single histogram is built and the histogram of the entire image is equalized (Fig. 4a). In the local method (Fig. 4b), many histograms are constructed, each corresponding to only a part of the processed image. This method improves local contrast, resulting in overall better processing results.

Local processing algorithms can be divided into the following types: overlapping local processing blocks, non-overlapping local processing blocks, and partially overlapping local processing blocks (Fig. 2).

Fig. 2. Illustration of the operation of various types of local image processing algorithms: a) overlapping local processing blocks, b) non-overlapping local processing blocks, c) partially overlapping local processing blocks

The overlapping-blocks algorithm gives the best processing result but is the slowest of those listed. The non-overlapping-blocks algorithm, on the contrary, requires less processing time, all other things being equal, but since the processed blocks do not overlap, sharp changes in brightness are possible in the final image. The partially-overlapping-blocks algorithm is a compromise solution. The disadvantages of adaptive histogram equalization algorithms include over-amplification of image parameters and the resulting possible increase of noise in the final image.
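The fastest variant, non-overlapping blocks, can be sketched in Python (an illustrative sketch; the function name and the tile size parameter are my own):

```python
def block_he(img, block=8):
    """Adaptive HE with non-overlapping blocks: equalize each tile against
    its own histogram. img is a list of rows of 0..255 gray levels.
    The sharp seams between tiles are the drawback noted above."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            ys = range(by, min(by + block, h))
            xs = range(bx, min(bx + block, w))
            tile = [img[y][x] for y in ys for x in xs]
            hist = [0] * 256
            for p in tile:
                hist[p] += 1
            lut, total = [], 0
            for c in hist:
                total += c
                lut.append(round(255 * total / len(tile)))
            for y in ys:
                for x in xs:
                    out[y][x] = lut[img[y][x]]
    return out
```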

An improved version of the above algorithm is the contrast-limited adaptive histogram equalization (CLAHE) algorithm (Fig. 4c). The main feature of this algorithm is that it limits the range of the histogram based on an analysis of the brightness values of the pixels in the processed block (2); as a result, the final image looks more natural and less noisy.

In (2), add is the increment factor of the value of the histogram function, and ps is the number of pixels that exceed the threshold value. An illustration of the change in the histogram is shown in Fig. 3.

Fig. 3. Histogram range limitation in the CLAHE algorithm
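The limiting step of Fig. 3 can be sketched in Python. Since equation (2) itself did not survive in this copy, the sketch assumes one common form of the rule: the excess ps above the clip limit is redistributed evenly, so add = ps / (number of bins). The function name is my own.

```python
def clip_histogram(hist, clip_limit):
    """Clip every bin of a block's histogram at clip_limit and spread the
    excess evenly over all bins: add = ps / len(hist). This is one common
    form of the CLAHE limiting step, assumed here in place of equation (2)."""
    ps = sum(max(0, h - clip_limit) for h in hist)   # pixels above the threshold
    clipped = [min(h, clip_limit) for h in hist]
    add = ps / len(hist)                             # per-bin increment factor
    return [h + add for h in clipped]
```

Because the excess is redistributed rather than discarded, the total pixel count (and hence the normalization of the block's distribution function) is preserved.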

It should be noted that the classical CLAHE algorithm uses bilinear interpolation to eliminate the boundaries between processed blocks.

Fig. 4. Results of histogram equalization algorithms: a) global histogram equalization (HE), b) adaptive histogram equalization (AHE), c) contrast-limited adaptive histogram equalization (CLAHE)

When visually comparing the processing results, the best method is CLAHE (Fig. 4c). The image processed by this method has less noise than the image processed by the AHE method, and its brightness contrast is more natural. Compared to the image processed by the global equalization method, the CLAHE method improves the clarity of small and blurry details of the processed image and also increases the contrast, but not as exaggeratedly as the AHE method. A table estimating the execution time of the considered methods in MATLAB 2016 is given below.

Table 1. Execution time

Method | Program execution time with the method, s | Execution time of the method, s
CLAHE | 0.609 | 0.519


Image preprocessing is the process of improving image quality, the aim of which is to obtain, on the basis of the original, an image that is as accurate as possible and adapted for automatic analysis.

Among the defects of a digital image, the following types can be distinguished:

  • digital noise;
  • color defects (insufficient or excessive brightness and contrast, wrong color tone);
  • blur (out of focus).

Image pre-processing methods depend on research tasks and may include the following types of work:

Filtering Noisy Images

Digital image noise is an image defect introduced by the photosensors and electronics of the devices that use them. The following methods are used to suppress it:

Linear point averaging over neighbors is the simplest kind of noise-removal algorithm. Its main idea is to take the arithmetic mean of the points in some neighborhood as the new value of a point.

Physically, such filtering is implemented by traversing the pixels of the image with a convolution matrix, where div is the normalization factor that keeps the average intensity unchanged. It is equal to the sum of the coefficients of the matrix; in the example, div = 6.
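The traversal with a convolution matrix and the div normalization can be sketched in Python. The article's own matrix is not reproduced here, so a plain 3×3 averaging kernel (div = 9) is used for illustration; the function name is my own.

```python
def convolve(img, kernel):
    """Traverse the image with a convolution matrix. div is the sum of the
    kernel coefficients, so the average intensity stays unchanged.
    Missing neighbors at the edges are taken from the nearest border pixel."""
    h, w = len(img), len(img[0])
    gap = len(kernel) // 2
    div = sum(sum(row) for row in kernel) or 1
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0
            for dy in range(-gap, gap + 1):
                for dx in range(-gap, gap + 1):
                    yy = min(h - 1, max(0, y + dy))  # replicate edge pixels
                    xx = min(w - 1, max(0, x + dx))
                    acc += kernel[dy + gap][dx + gap] * img[yy][xx]
            out[y][x] = round(acc / div)
    return out

# a simple 3x3 averaging matrix (illustrative stand-in for the article's matrix)
box = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
```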

Gaussian blur (a kind of linear convolution) is implemented by traversing the pixels of the image with a 5×5 convolution matrix filled in according to the normal (Gaussian) law, with the coefficients then normalized so that div for the matrix equals one.

The strength of the blur depends on the size of the matrix.
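Such a matrix can be generated directly from the 2-D Gaussian; a Python sketch (the function name and the sigma value are my own illustrative choices):

```python
import math

def gaussian_kernel(size=5, sigma=1.0):
    """Fill a size x size matrix according to the Gaussian law and normalize
    the coefficients so that they sum to one (i.e. div = 1)."""
    gap = size // 2
    k = [[math.exp(-(x * x + y * y) / (2 * sigma * sigma))
          for x in range(-gap, gap + 1)] for y in range(-gap, gap + 1)]
    s = sum(sum(row) for row in k)
    return [[v / s for v in row] for row in k]
```

Increasing `size` (together with `sigma`) strengthens the blur, which matches the remark above.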

The upper left pixel has no "neighbors" on the left or above, so there is nothing to multiply the corresponding matrix coefficients by!

This problem is solved by creating an intermediate image. The idea is to create a temporary image with dimensions

width + 2·(gap div 2), height + 2·(gap div 2), where

width and height are the width and height of the filtered image, and gap is the side length of the convolution matrix.

The input image is copied to the center of the image, and the edges are filled with the extreme pixels of the image. Blurring is applied to the intermediate buffer, and then the result is extracted from it.
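The intermediate-buffer construction can be sketched in Python (the function name is my own; edges are filled with the nearest border pixels, as described):

```python
def pad_replicate(img, gap):
    """Build the intermediate image of size
    (width + 2*(gap//2)) x (height + 2*(gap//2)): the source image goes in
    the middle, and the margins are filled with the extreme (edge) pixels."""
    h, w = len(img), len(img[0])
    g = gap // 2
    out = []
    for y in range(-g, h + g):
        yy = min(h - 1, max(0, y))                    # clamp to the image
        out.append([img[yy][min(w - 1, max(0, x))] for x in range(-g, w + g)])
    return out
```

After blurring this buffer, the central `height` × `width` region is extracted as the result.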

A median filter is a window filter that sequentially scans the image and at each step returns one of the elements that fell into the filter window: the pixels that "fall" into the window are sorted in ascending order, and the value in the middle of the sorted list is selected.

The median filter is typically used to reduce noise or "smooth" the image.
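The sort-and-take-the-middle rule translates directly into a Python sketch (the function name is my own; edges are handled by replicating border pixels):

```python
def median_filter(img, size=3):
    """Slide a size x size window over the image; at each position sort the
    covered pixels in ascending order and take the middle of the sorted list."""
    h, w = len(img), len(img[0])
    g = size // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = sorted(
                img[min(h - 1, max(0, y + dy))][min(w - 1, max(0, x + dx))]
                for dy in range(-g, g + 1) for dx in range(-g, g + 1))
            out[y][x] = window[len(window) // 2]
    return out
```

Unlike linear averaging, a single outlier pixel (impulse noise) is removed completely rather than smeared over its neighbors.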

To improve clarity, the image is processed with a sharpening convolution filter (div = 1):
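The article's own matrix is not reproduced here; a commonly used 3×3 sharpening kernel whose coefficients sum to 1 (so div = 1, and overall brightness is preserved) is shown below as an assumption:

```python
# A standard 3x3 sharpening kernel: a negative ring around a strong center.
# Its coefficients sum to 1, so div = 1 and the average intensity is kept.
# This particular matrix is an illustrative assumption, not the article's.
sharpen = [[ 0, -1,  0],
           [-1,  5, -1],
           [ 0, -1,  0]]

div = sum(sum(row) for row in sharpen)  # = 1
```

Applied with the same convolution traversal as the averaging filters above, this kernel amplifies differences between a pixel and its neighbors, which is what raises perceived clarity.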

Morphological transformations

Morphological filtering is used to expand (dilate) or narrow (erode) elements of a binary image.

Dilation (morphological expansion) is the convolution of an image, or a selected area of an image, with some template. The template can be of any shape and size, but it has a single leading position (anchor), which is aligned with the current pixel when the convolution is calculated.

A binary image is an ordered set of black and white dots (pixels). The maximum intensity of the image pixels is equal to one, and the minimum is equal to zero.

Applying dilation amounts to passing the template over the entire image and applying a local-maximum search operator to the image pixels covered by the template. If the maximum is 1, the point where the template's anchor is located becomes white. This operation causes light areas in the image to grow. In the figure, the pixels that will become white as a result of dilation are marked in gray.

Erosion (morphological narrowing) is the operation inverse to dilation. Erosion acts like dilation, the only difference being that a local-minimum search operator is used: if the minimum is 0, the point where the template's anchor is located becomes black. In the image on the right, the pixels that will become black as a result of erosion are marked in gray.

The dilation operation is an analogue of the logical OR; the erosion operation is an analogue of the logical AND.

The result of morphological operations is largely determined by the applied template (structural element). By choosing a different structural element, you can solve different image processing tasks:

  • Noise suppression.
  • Selection of the boundaries of the object.
  • Selection of the object skeleton.
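Both operations can be sketched in Python for binary images stored as lists of rows of 0/1 values (a sketch; function names are my own, the anchor is assumed to be the template's center, and borders are handled by pixel replication):

```python
def dilate(img, template):
    """Binary dilation: align the template's center (anchor) with each pixel
    and take the local maximum under the template -- light areas grow."""
    h, w = len(img), len(img[0])
    g = len(template) // 2
    return [[max(img[min(h - 1, max(0, y + dy))][min(w - 1, max(0, x + dx))]
                 for dy in range(-g, g + 1) for dx in range(-g, g + 1)
                 if template[dy + g][dx + g])
             for x in range(w)] for y in range(h)]

def erode(img, template):
    """Binary erosion: the same pass with a local-minimum operator --
    light areas shrink."""
    h, w = len(img), len(img[0])
    g = len(template) // 2
    return [[min(img[min(h - 1, max(0, y + dy))][min(w - 1, max(0, x + dx))]
                 for dy in range(-g, g + 1) for dx in range(-g, g + 1)
                 if template[dy + g][dx + g])
             for x in range(w)] for y in range(h)]
```

With a cross-shaped structural element, dilating a single white pixel grows it into a cross, and eroding that cross with the same template shrinks it back to the single pixel.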

Image Brightness and Contrast Correction

Brightness is a characteristic that determines how strongly the colors of the pixels differ from black. For example, if a digitized photograph was taken in sunny weather, then its brightness will be significant. On the other hand, if the photo was taken in the evening or at night, then its brightness will be low.

Contrast is a measure of how much the colors of the pixels in an image are spread. The more spread the pixel color values ​​have, the more contrast the image has.

All element-by-element transformations change the probability distribution law that describes the image. Linear contrasting preserves the form of the probability density, although in the general case, i.e., for arbitrary values of the linear transformation parameters, the parameters of the probability density of the transformed image do change.

Determining the probabilistic characteristics of images that have undergone nonlinear processing is the direct problem of analysis. When solving practical problems of image processing, the inverse problem can be posed: given the known probability density p_f(f) and the desired density p_g(g), find the transformation g = φ(f) to which the original image should be subjected. In the practice of digital image processing, transforming an image to an equiprobable (uniform) distribution often leads to a useful result. In this case

p_g(g) = 1 / (g_max − g_min), g_min ≤ g ≤ g_max, (6.1)

where g_min and g_max are the minimum and maximum brightness values of the converted image. Let us determine the characteristic of the converter that solves this problem. Let f and g be bound by the function g(n, m) = φ(f(n, m)), and let P_f(f) and P_g(g) be the integral distribution laws of the input and output brightness. Taking (6.1) into account, we find:

P_g(g) = (g − g_min) / (g_max − g_min).

Substituting this expression into the probabilistic equivalence condition

P_g(g) = P_f(f),

after simple transformations we obtain the relation

g = g_min + (g_max − g_min) · P_f(f), (6.2)

which is the sought characteristic g(n, m) = φ(f(n, m)). According to (6.2), the original image undergoes a nonlinear transformation whose characteristic is determined by the integral distribution law P_f(f) of the original image. After that, the result is brought to the specified dynamic range using the linear contrasting operation.

Thus, the probability density transformation requires knowledge of the integral distribution of the original image. As a rule, no reliable information about it is available, and approximating it with analytical functions can, due to approximation errors, lead to results that differ significantly from the required ones. Therefore, in the practice of image processing, the transformation of distributions is performed in two stages.



At the first stage, the histogram of the original image is measured. For a digital image whose gray scale belongs to the integer range [0, 255], for example, the histogram is a table of 256 numbers. Each of them shows the number of pixels in the image (frame) that have a given brightness. Dividing all the numbers in this table by the total sample size, equal to the number of samples in the image, yields an estimate of the probability distribution of the image brightness. Denote this estimate p_f(f_q), 0 ≤ f_q ≤ 255. Then the estimate of the integral distribution is obtained by the formula

P_f(f_q) = Σ_{i=0}^{q} p_f(f_i).

At the second stage, the nonlinear transformation itself (6.2) is performed, which provides the necessary properties of the output image. In this case, instead of the unknown true integral distribution, its estimate based on the histogram is used. With this in mind, all methods of element-by-element transformation of images, the purpose of which is to modify the laws of distribution, are called histogram methods. In particular, a transformation where the output image has a uniform distribution is called equalization (alignment) of the histogram.

Note that histogram transformation procedures can be applied both to the image as a whole and to its individual fragments. The latter can be useful when processing non-stationary images whose characteristics differ significantly from area to area. In this case, the best effect can be achieved by applying histogram processing to individual areas of interest; note, however, that this also changes the sample values in all other areas. Figure 6.1 shows an example of equalization performed in accordance with the described methodology.

A characteristic feature of many images obtained in real imaging systems is a significant proportion of dark areas and a relatively small number of areas with high brightness.

Figure 6.1 – An example of image histogram equalization: a) the original image and its histogram c); b) the transformed image and its histogram d)

Equalization of the histogram equalizes the integral areas of the uniformly distributed brightness ranges. Comparison of the original (Figure 6.1 a) and processed (Figure 6.1 b) images shows that the redistribution of brightness during processing improves visual perception.
