Authors:
Yu-Ju Lin1 and Herng-Hua Chang2*
Affiliation(s):
1Computational Biomedical Engineering Laboratory (CBEL)
2Department of Engineering Science and Ocean Engineering, National Taiwan University, Daan, Taipei 10617, Taiwan
Dates:
Received: 10 April, 2015; Accepted: 29 April, 2015; Published: 02 May, 2015
*Corresponding author:
Professor Herng-Hua Chang, Ph.D., Department of Engineering Science and Ocean Engineering, National Taiwan University, 1 Sec. 4 Roosevelt Road, Daan, Taipei 10617, Taiwan, Tel: +886-2-3366-5745; Fax: +886-2-2392-9885; Email: @
Citation:
Lin YJ, Chang HH (2015) Investigation of Significant Features Based on Image Texture Analysis for Automated Denoising in MR Images. Peertechz J Biomed Eng 1(1): 001-005.
Copyright:
© 2015 Lin YJ, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Keywords:
Denoising; Image feature; Image texture; Automation; MRI

Introduction: In magnetic resonance (MR) image analysis, noise is one of the main sources of quality deterioration not only for visual inspection but also in computerized processing such as tissue classification, segmentation and registration. Consequently, noise removal in MR images is important and essential for a wide variety of subsequent processing applications. In the literature, abundant denoising algorithms have been proposed, most of which require laborious tuning of parameters that are often sensitive to specific image features and textures. Automating these parameters through artificial intelligence techniques would be highly beneficial. However, this induces another problem: seeking appropriate, meaningful attributes among a huge number of image characteristics for the automation process. This paper systematically investigates significant attributes from image texture features to facilitate subsequent automation processes.

Methods: In our approach, a total of 60 image texture attributes are considered, drawn from three categories: 1) image statistics, 2) the gray-level co-occurrence matrix (GLCM), and 3) the 2-D discrete wavelet transform (DWT). To obtain the most significant attributes, a paired-samples t-test is applied to each individual image feature computed in every image. The evaluation is based on the ability to distinguish between noise levels, intensity distributions, and anatomical geometries.

Results: A wide variety of images, including the BrainWeb image data with various levels of noise and intensity non-uniformity, were adopted to evaluate the proposed methods. Experimental results indicated that an optimal set of seven image features performed best in distinguishing MR images with various combinations of noise levels and slice positions: the contrast and dissimilarity features from the GLCM category and five norm energy and standard deviation features from the 2-D DWT category.

Conclusions: We have introduced a new framework to systematically investigate significant attributes from various image features and textures for automating the denoising of MR images. Sixty image texture features were computed in every image, followed by a paired-samples t-test for the discrimination evaluation. Seven texture features, two from the GLCM category and five from the 2-D DWT category, performed best; these can be incorporated into denoising procedures for automation purposes in the future.

Introduction

Magnetic resonance imaging (MRI) has been one of the most frequently used medical imaging modalities due to its high contrast among different soft tissues, high spatial resolution across the entire field of view, and multi-spectral characteristics [1,2]. In MR image analysis, noise is one of the main sources of quality deterioration not only for visual inspection but also in computerized processing such as tissue classification, segmentation and registration [3,4]. Consequently, noise removal in MR images is important and essential for a wide variety of subsequent processing applications.

Over the decades, Gaussian filters have been widely used in many MR image processing applications due to their simplicity [5]. Although the Gaussian filter smooths noise quite efficiently, it blurs edges significantly. To preserve sharpness, a nonlinear method called the anisotropic diffusion filter [6] has been proposed. In that approach, pixel values were averaged from neighborhoods whose size and shape depended on local image variation measured at every point. One promising technique that attempted to improve on the Gaussian filter is the bilateral filter [7]. The essence of this approach is to combine both geometric closeness in the spatial domain and gray-value similarity in the range as a nonlinear filter for image denoising. It has been shown that the bilateral filter performs effectively in MR image noise suppression, and it has been the object of many further studies [8,9].

Indeed, most denoising algorithms require laborious tuning of parameters that are often sensitive to specific image features and textures. Automating these parameters through artificial intelligence techniques would be highly beneficial. However, this induces another problem: seeking appropriate, meaningful attributes among a huge number of image characteristics for the automation process. This paper attempts to systematically investigate significant attributes from image texture features to facilitate subsequent automation processes.

Related Works

Gray level co-occurrence matrix (GLCM)

Statistical features of gray levels were one of the earliest methods used to classify textures. The gray-level co-occurrence matrix (GLCM) [10] extracts second-order statistics based on the repeated occurrence of some gray-level configuration in an image. This configuration varies rapidly with respect to distance in fine-texture regions and slowly in coarse-texture images. More specifically, the GLCM is defined as a matrix of frequencies at which two pixels, separated by a certain vector, occur in the image. For an image of L gray levels, the distribution in the L × L matrix depends on the angular and distance relationship between pixels based on gray-tone spatial dependencies:
$$M(i,j) = \sum_{x=1}^{W_x - \Delta x} \sum_{y=1}^{W_y - \Delta y} \begin{cases} 1, & \text{if } W(x,y) = i \text{ and } W(x+\Delta x, y+\Delta y) = j \\ 0, & \text{otherwise} \end{cases} \qquad (1)$$

where M(i,j) is the count for the quantized gray-tone pair (i,j) with i, j = 0, 1, …, L − 1, Wx and Wy are the dimensions of the resolution cells of the image ordered by their row-column designations, W(x,y) is the gray-level value in the cell, and Δx and Δy define the spatial relation between two pixels given by the angle θ and distance d from the cell origin. For a given distance d, there are eight neighboring pixel pairs in four independent directions corresponding to θ = 0°, 45°, 90°, and 135°, respectively. This texture-content information is then normalized to obtain the matrix of relative frequencies P(i,j):
$$P(i,j) = \frac{M(i,j)}{\sum_{i=0}^{L-1} \sum_{j=0}^{L-1} M(i,j)} \qquad (2)$$

Table 1 summarizes the most frequently used textural features and their formulas based on Eq. (2).

Table 1: Image texture features and equations in the GLCM category.

2-D discrete wavelet transform (2-D DWT)

Wavelets are mathematical functions that decompose an image into a hierarchy of scales ranging from the coarsest resolution to the finest resolution [11]. With the representation of an image at various scales, wavelet transforms provide a good mechanism for feature extraction. The 2-D discrete wavelet transform (DWT) maps an image from its spatial domain into the frequency domain. By wavelet transform, we mean the decomposition of an image with a family of real orthonormal bases obtained through translation and dilation of a kernel function. Four subbands, namely LL1 (low-low), LH1 (low-high), HL1 (high-low), and HH1 (high-high), are obtained by the 1st-order horizontal and vertical transformations applied sequentially. To obtain more detailed information, the LL1 subband is further decomposed into four 2nd-order subbands as illustrated in Figure 1.

Figure 1: Illustration of the 2-D DWT procedure. (a) Original image; (b) One-stage 2-D DWT; (c) Two-stage 2-D DWT.
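The one- and two-stage decompositions of Figure 1 can be sketched with the Haar wavelet (the wavelet used later in this paper). This version uses the simple averaging/differencing form of the Haar transform; the exact normalization is an implementation choice, not taken from the paper.

```python
import numpy as np

def haar_dwt2(x):
    """One stage of the 2-D Haar DWT: returns the LL, LH, HL, HH subbands.
    Averaging/differencing form; rows are transformed first, then columns."""
    lo = (x[:, 0::2] + x[:, 1::2]) / 2.0   # horizontal low-pass
    hi = (x[:, 0::2] - x[:, 1::2]) / 2.0   # horizontal high-pass
    LL = (lo[0::2, :] + lo[1::2, :]) / 2.0
    LH = (lo[0::2, :] - lo[1::2, :]) / 2.0
    HL = (hi[0::2, :] + hi[1::2, :]) / 2.0
    HH = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return LL, LH, HL, HH

img = np.arange(16, dtype=float).reshape(4, 4)
LL1, LH1, HL1, HH1 = haar_dwt2(img)   # 1st-order subbands
LL2, LH2, HL2, HH2 = haar_dwt2(LL1)   # 2nd-order subbands from LL1
```

Each stage halves both dimensions, so the two stages together yield the eight detail subbands of Figure 1c plus the coarsest approximation LL2.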

After decomposing the images, the local wavelet coefficients in each subband can be computed based on the following energy equations [12]:
Norm-1 energy: $$e_1 = \frac{1}{MN} \sum_{m=1}^{M} \sum_{n=1}^{N} |x(m,n)| \qquad (3)$$

Norm-2 energy: $$e_2 = \frac{1}{MN} \sum_{m=1}^{M} \sum_{n=1}^{N} |x(m,n)|^2 \qquad (4)$$

Standard deviation: $$e_3 = \left( \frac{1}{MN} \sum_{m=1}^{M} \sum_{n=1}^{N} |x(m,n) - \bar{x}|^2 \right)^{1/2} \qquad (5)$$

where x(m,n) represents the subband under consideration, M and N are the dimensions of the subband with 1 ≤ m ≤ M and 1 ≤ n ≤ N, and $\bar{x}$ is the arithmetic mean of x(m,n):
$$\bar{x} = \frac{1}{MN} \sum_{m=1}^{M} \sum_{n=1}^{N} x(m,n) \qquad (6)$$
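Under these definitions, the three subband measures can be computed directly. This is a minimal sketch of Eqs. (3)-(5), assuming e3 takes the square root implied by the name "standard deviation":

```python
import numpy as np

def subband_features(x):
    """Eqs. (3)-(5): norm-1 energy e1, norm-2 energy e2, and standard
    deviation e3 of a subband x(m, n)."""
    e1 = np.mean(np.abs(x))                           # Eq. (3)
    e2 = np.mean(np.abs(x) ** 2)                      # Eq. (4)
    e3 = np.sqrt(np.mean(np.abs(x - x.mean()) ** 2))  # Eq. (5), with Eq. (6) as the mean
    return e1, e2, e3
```

Applied to every subband produced by the two-stage decomposition, these three numbers per subband form the wavelet portion of the feature set.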

Methods

The proposed method, depicted in Figure 2, consists of two phases: feature extraction and feature selection.

Figure 2: Flowchart of the proposed texture feature evaluation methods.


Feature extraction

In our approach, three different categories are used to extract image features as illustrated in Figure 3:
− Image statistics: We compute the mean intensity (Mean), standard deviation (SD), variance (VAR), and entropy (ENT) of the input gray-level MR image.
− GLCM: We first compute the difference image In, defined as the difference between the input image I and its Gaussian-filtered image ID:
$$I_n = I - I_D \qquad (7)$$

Subsequently, we compute the GLCM texture features using In as the input image in Eq. (1) with d = 1.

− 2-D DWT: In this category, we first compute the normalized image I' using
$$I'(i,j) = \frac{I(i,j)}{\left( \frac{1}{MN} \sum_{k=1}^{M} \sum_{l=1}^{N} I(k,l)^2 \right)^{1/2}} \qquad (8)$$

Figure 3: Categories of the evaluated image texture features: (a) image statistics, (b) GLCM, and (c) 2-D DWT.

As illustrated in Figure 3c, we perform one- and two-stage decompositions of I′ for the wavelet features using the Haar wavelet transform [13], chosen for its simplicity and effectiveness. Finally, we compute the wavelet energy coefficients based on Eqs. (3)-(5).
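The two preprocessing steps of this extraction phase, Eq. (7) and Eq. (8), might be sketched as follows; the Gaussian filter width sigma = 1.0 is an assumption, since the paper does not state the filter parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_image(I, sigma=1.0):
    """Eq. (7): In = I - ID, where ID is the Gaussian-filtered image.
    sigma = 1.0 is an assumed filter width, not stated in the paper."""
    return I - gaussian_filter(I, sigma)

def normalize_image(I):
    """Eq. (8): divide each pixel by the root-mean-square intensity."""
    I = I.astype(float)
    return I / np.sqrt(np.mean(I ** 2))

I = np.arange(1, 17, dtype=float).reshape(4, 4)
In = difference_image(I)   # input to the GLCM features
Ip = normalize_image(I)    # input to the 2-D DWT features
```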

Feature selection

In summary, 60 different image texture features are obtained from the three feature extraction methods in every image. To obtain the most significant attributes, a paired-samples t-test [14,15] is then applied to each individual image feature to evaluate its ability to discriminate in two categories: noise level and slice position. The evaluation is based on the ability to distinguish between noise levels, intensity distributions, and anatomical geometries according to the average p-value.
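The paired-samples t-test step for a single feature might look as follows; the feature values are synthetic stand-ins, purely to illustrate how one feature's p-value is obtained for one pair of noise levels:

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)

# Hypothetical data: one texture feature (say, GLCM contrast) measured on
# the same 30 slices at two noise levels. The numbers are synthetic.
feat_at_1pct = rng.normal(1.00, 0.05, size=30)
feat_at_9pct = feat_at_1pct + rng.normal(0.20, 0.05, size=30)

# Paired test: the same slices are compared under the two conditions.
t_stat, p_value = ttest_rel(feat_at_1pct, feat_at_9pct)
# A small p-value means the feature separates the two noise levels well;
# averaging such p-values over all pairings yields the feature ranking.
```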

Experimental Results and Discussion

A wide variety of images were adopted for the evaluation of the proposed methods, including the well-known BrainWeb [16] T1-weighted MR image volumes with various levels of noise and intensity non-uniformity. In this database, the images are classified into five noise levels (1%, 3%, 5%, 7%, 9%) and three intensity non-uniformity levels (0%, 20%, 40%). Sixty texture features in the three categories described above were computed in each individual T1-weighted 1-mm MR image for the evaluation.

Tables 2 and 3 present the order of significance based on the average p-value of each individual feature using the t-test in noise level and slice position, respectively. While the majority of the features with p < 0.05 are from the GLCM category in Table 2, the majority of significant features in Table 3 are contributed by the wavelet category. More specifically, the features of contrast (CON), dissimilarity (DIS), standard deviation (SD), angular second moment (ASM), and homogeneity (HOM) in the GLCM class were more sensitive to the changes in noise level. On the other hand, the mean intensity (Mean) and the norm energy (e1 and e2) and standard deviation (e3) features in the 2-D DWT category performed best in distinguishing slice positions.

Table 2: T-test results based on the p-value: noise level.

Table 3: T-test results based on the p-value: slice position.

To further examine the ability of these features to classify noise level and slice position in new datasets, we used T1-weighted but 5-mm MR image volumes from the BrainWeb. The images were divided into 90 different combinations resulting from 18 structural similarities multiplied by five noise levels. To obtain the optimal number of features for the best discrimination, the image features were then fed into the classification tree of the classification and regression tree (CART) algorithm [17] for further evaluation. Five nodes corresponding to the five noise levels were adopted in the CART process. Table 4 presents the testing results with respect to different numbers of image features from Tables 2 and 3, obtained by randomly selecting 100 images. Herein, only the cases with 2, 7, and 21 features are presented for simplicity; these correspond to the features with p < 0.02, p < 0.03, and p < 0.035, respectively. It is evident that seven features achieved the best accuracy, with 91 correct classifications in total. They were CON (90°) and DIS (90°) from the GLCM category and e3(LH1), e1(LL2), e3(HL2), e1(LL1), and e2(HL1) from the 2-D wavelet category.

Table 4: Discrimination results using different numbers of texture features.
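The CART step above can be approximated with a standard decision-tree classifier. The data below are synthetic stand-ins (seven features, five noise-level classes) rather than the BrainWeb features, so only the mechanics of the classification are illustrated:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)

# Synthetic stand-in: 40 images per class, 7 texture features, 5 noise-level
# classes. Real inputs would be the seven selected features of Table 4.
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(40, 7)) for c in range(5)])
y = np.repeat(np.arange(5), 40)

tree = DecisionTreeClassifier(random_state=0).fit(X, y)
accuracy = tree.score(X, y)   # training accuracy of the fitted tree
```

In the paper's setting, accuracy would instead be measured on held-out images to count correct noise-level assignments per feature subset.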

Conclusion

We have introduced a new framework to systematically investigate significant attributes from various image features and textures for automating the denoising of MR images. A total of 60 image attributes were considered, drawn from three categories: image statistics, GLCM, and 2-D DWT. A paired-samples t-test was applied to each individual image feature computed in every image to evaluate its discrimination ability. A wide variety of T1-weighted MR images from the BrainWeb dataset were used to evaluate and test the image features. Experimental results indicated that an optimal set of seven image features performed best in distinguishing MR images with various combinations of noise levels and slice positions. These features, CON (90°), DIS (90°), e3(LH1), e1(LL2), e3(HL2), e1(LL1), and e2(HL1), can be incorporated into denoising procedures for parameter-free automation in the future.

Acknowledgement

This work was supported in part by the National Science Council under Research Grant No. NSC100-2320-B-002-073-MY3 and National Taiwan University under Grant No. NTU-CDP-103R7889.

  1. Westbrook C, Roth CK (2011) MRI in Practice. Wiley-Blackwell.
  2. Hashemi RH, Bradley WG, Lisanti CJ (2012) MRI: The Basics. LWW.
  3. Gudbjartsson H, Patz S (1995) The Rician distribution of noisy MRI data. Magnetic Resonance in Medicine 34: 910-914.
  4. Macovski A (1996) Noise in MRI. Magnetic Resonance in Medicine 36: 494-497.
  5. He L, Greenshields IR (2009) A nonlocal maximum likelihood estimation method for Rician noise reduction in MR images. IEEE Transactions on Medical Imaging 28: 165-172.
  6. Perona P, Malik J (1990) Scale-space and edge detection using anisotropic diffusion. IEEE Transactions on Pattern Analysis and Machine Intelligence 12: 629-639.
  7. Tomasi C, Manduchi R (1998) Bilateral filtering for gray and color images. Sixth International Conference on Computer Vision, IEEE 839-846.
  8. Walker SA, Miller D, Tanabe J (2006) Bilateral spatial filtering: Refining methods for localizing brain activation in the presence of parenchymal abnormalities. NeuroImage 33: 564-569.
  9. Anand CS, Sahambi J (2008) MRI denoising using bilateral filter in redundant wavelet domain. TENCON 2008 - IEEE Region 10 Conference 1-6.
  10. Haralick RM, Shanmugam K, Dinstein IH (1973) Textural features for image classification. IEEE Transactions on Systems, Man, and Cybernetics 3: 610-621.
  11. Mallat SG (1989) A theory for multiresolution signal decomposition: the wavelet representation. IEEE Transactions on Pattern Analysis and Machine Intelligence 11: 674-693.
  12. Chang T, Kuo C-C (1993) Texture analysis and classification with tree-structured wavelet transform. IEEE Transactions on Image Processing 2: 429-441.
  13. Haar A (1910) Zur Theorie der orthogonalen Funktionensysteme. Mathematische Annalen 69: 331-371.
  14. Student (1908) The probable error of a mean. Biometrika 6: 1-25.
  15. Fisher RA (1970) Statistical Methods for Research Workers. Oliver and Boyd, Edinburgh.
  16. BrainWeb: Simulated Brain Database. http://www.bic.mni.mcgill.ca/brainweb/.
  17. Breiman L, Friedman J, Olshen R, Stone C (1984) Classification and Regression Trees. Wadsworth, Monterey, CA.
