 
 
Search Results (69)

Search Parameters:
Keywords = medical image denoising

21 pages, 38700 KiB  
Article
Transformative Noise Reduction: Leveraging a Transformer-Based Deep Network for Medical Image Denoising
by Rizwan Ali Naqvi, Amir Haider, Hak Seob Kim, Daesik Jeong and Seung-Won Lee
Mathematics 2024, 12(15), 2313; https://doi.org/10.3390/math12152313 - 24 Jul 2024
Abstract
Medical image denoising has numerous real-world applications. Despite their widespread use, existing medical image denoising methods fail to address complex noise patterns and often generate artifacts. This paper proposes a novel medical image denoising method that learns denoising with an end-to-end learning strategy. The proposed model introduces a novel deep–wider residual block to capture long-distance pixel dependencies, and leverages multi-head attention-guided image reconstruction to denoise medical images effectively. Experimental results illustrate that the proposed method outperforms state-of-the-art models in both qualitative and quantitative evaluation across numerous medical image modalities, showing a significant performance gain over its counterparts with a cumulative PSNR score of 8.79 dB. The proposed method can also denoise noisy real-world medical images and improve the performance of clinical applications such as abnormality detection.
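The multi-head attention-guided reconstruction described above builds on scaled dot-product attention. A minimal single-head NumPy sketch, with token count and dimensions chosen for illustration only (this is not the authors' architecture):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention: each query's output is a weighted
    # average of the values, weighted by query-key similarity.
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)        # (n_q, n_k) similarity scores
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ v                   # (n_q, d_v)

rng = np.random.default_rng(0)
tokens = rng.standard_normal((16, 8))    # 16 "pixel tokens", dim 8
out = attention(tokens, tokens, tokens)  # self-attention over the tokens
```

Multi-head attention runs several such heads in parallel on learned projections and concatenates the results, which is what lets distant pixels influence each other's reconstruction.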

19 pages, 13192 KiB  
Article
Optimization of Fast Non-Local Means Noise Reduction Algorithm Parameter in Computed Tomographic Phantom Images Using 3D Printing Technology
by Hajin Kim, Sewon Lim, Minji Park, Kyuseok Kim, Seong-Hyeon Kang and Youngjin Lee
Diagnostics 2024, 14(15), 1589; https://doi.org/10.3390/diagnostics14151589 - 23 Jul 2024
Abstract
Noise is inevitably generated in computed tomography (CT), which lowers the accuracy of disease diagnosis. The non-local means approach, a software technique for reducing noise, is widely used in medical imaging. In this study, we propose a noise reduction algorithm based on fast non-local means (FNLMs) and apply it to CT images of a phantom created using 3D printing technology. The self-produced phantom was manufactured using filaments with a density similar to that of human brain tissue. To quantitatively evaluate image quality, the contrast-to-noise ratio (CNR), coefficient of variation (COV), and normalized noise power spectrum (NNPS) were calculated. The results demonstrate that the optimized smoothing factors of FNLMs are 0.08, 0.16, 0.22, 0.25, and 0.32 at noise intensities of 0.001, 0.005, 0.01, 0.05, and 0.1, respectively. In addition, we compared the optimized FNLMs with the noisy images and with local filter and total variation algorithms; FNLMs showed superior performance over these denoising techniques. In particular, compared to the noisy images, the CNR improved by 6.53 to 16.34 times, the COV by 6.55 to 18.28 times, and the NNPS by 10⁻² mm² on average. In conclusion, our approach shows significant potential for enhancing CT image quality with anthropomorphic phantoms, addressing the noise issue and improving diagnostic accuracy.
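The CNR and COV figures of merit quoted above have standard definitions: CNR compares the signal difference between a region of interest and the background to the background noise, and COV is the relative dispersion within a region. A small sketch on synthetic regions (not the phantom data):

```python
import numpy as np

def cnr(roi, background):
    # Contrast-to-noise ratio: mean signal difference over background noise.
    return abs(roi.mean() - background.mean()) / background.std()

def cov(roi):
    # Coefficient of variation: relative dispersion within a region.
    return roi.std() / roi.mean()

rng = np.random.default_rng(1)
roi = rng.normal(100.0, 2.0, 10_000)   # bright tissue-like region
bg = rng.normal(50.0, 2.0, 10_000)     # background region
```

With these synthetic values, CNR comes out near (100 − 50) / 2 = 25, and the ROI's COV near 2/100 = 0.02; a denoiser that lowers the standard deviations raises CNR and lowers COV.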

21 pages, 4162 KiB  
Article
Enhancing Medical Image Denoising: A Hybrid Approach Incorporating Adaptive Kalman Filter and Non-Local Means with Latin Square Optimization
by Mehdi Taassori and Béla Vizvári
Electronics 2024, 13(13), 2640; https://doi.org/10.3390/electronics13132640 - 5 Jul 2024
Abstract
Medical image denoising plays a critical role in enhancing the quality of diagnostic imaging, where noise reduction without compromising image details is paramount. In this paper, we propose a novel hybrid approach aimed at improving the denoising efficacy for medical images. Initially, we employ an adaptive Kalman filter to attenuate noise, leveraging its proficiency in state estimation from noisy measurements. Unlike conventional Kalman filters with fixed parameters, our adaptive Kalman filter dynamically adjusts its parameters based on the noise characteristics of the input image, thus offering enhanced accuracy in estimating the underlying true state of the system represented by the medical image. Subsequently, both a non-local means (NLM) method and a median filter are introduced as post-processing steps to further refine the denoised image. The NLM method leverages the similarities between image patches to effectively reduce noise, while the median filter further enhances the denoised image by suppressing residual noise and preserving image details. However, the effectiveness of NLM and the median filter is highly dependent on carefully chosen parameters, which traditionally necessitates extensive computational resources for optimization. To address this challenge, we introduce the innovative use of Latin square optimization, a structured experimental design technique, to efficiently determine optimal parameters for NLM. By systematically exploring parameter combinations using Latin square optimization, we mitigate the complexity of experiments while enhancing denoising performance. The experimental results on medical images demonstrate the effectiveness of our proposed approach, showcasing significant improvements in noise reduction and the preservation of image features compared to conventional methods. Our hybrid approach not only advances the state of the art in medical image denoising but also presents a practical solution for optimizing parameter selection in NLM, thereby facilitating its broader adoption in medical imaging applications.
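The NLM step above replaces each pixel with a weighted average of pixels whose surrounding patches look similar; the smoothing parameter h (one of the parameters the Latin square design tunes) controls how sharply dissimilar patches are down-weighted. A deliberately naive 1D sketch of the idea, with illustrative parameter values:

```python
import numpy as np

def nlm_1d(signal, patch=3, h=0.5):
    # Naive non-local means on a 1D signal: each sample becomes a
    # weighted average of all samples whose surrounding patches are
    # similar; h controls how fast weights decay with patch distance.
    n = len(signal)
    pad = np.pad(signal, patch, mode="reflect")
    patches = np.stack([pad[i:i + 2 * patch + 1] for i in range(n)])
    out = np.empty(n)
    for i in range(n):
        d2 = ((patches - patches[i]) ** 2).mean(axis=1)  # patch distances
        w = np.exp(-d2 / h ** 2)                         # similarity weights
        out[i] = (w * signal).sum() / w.sum()
    return out

rng = np.random.default_rng(2)
clean = np.repeat([0.0, 1.0, 0.0], 50)          # piecewise-constant signal
noisy = clean + rng.normal(0, 0.1, clean.size)
denoised = nlm_1d(noisy)
```

Because samples on the same plateau have similar patches, they average together and the noise drops, while the edge is largely preserved; a real implementation restricts the search window and works on 2D patches.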

18 pages, 15772 KiB  
Article
Physics-Based Practical Speckle Noise Modeling for Optical Coherence Tomography Image Denoising
by Lei Yang, Di Wu, Wenteng Gao, Ronald X. Xu and Mingzhai Sun
Photonics 2024, 11(6), 569; https://doi.org/10.3390/photonics11060569 - 17 Jun 2024
Abstract
Optical coherence tomography (OCT) has been extensively utilized in the field of biomedical imaging due to its non-invasive nature and its ability to provide high-resolution, in-depth imaging of biological tissues. However, the use of low-coherence light can lead to unintended interference phenomena within the sample, which inevitably introduces speckle noise into the imaging results. This type of noise often obscures key features in the image, thereby reducing the accuracy of medical diagnoses. Existing denoising algorithms, while removing noise, tend to also damage the structural details of the image, affecting the quality of diagnosis. To overcome this challenge, we propose a physics-based speckle noise (PSN) framework. The core of this framework is an innovative dual-module noise generator that decomposes the noise in OCT images into speckle noise and equipment noise, addressing each type independently. By integrating the physical properties of noise into the design of the noise generator and training it with unpaired data, we are able to synthesize realistic noisy images that match clean images. These synthesized paired images are then used to train a denoiser to effectively denoise real OCT images. Our method has demonstrated its superiority on both private and public datasets, particularly in maintaining the integrity of the image structure. This study emphasizes the importance of considering the physical information of noise in denoising tasks, providing a new perspective and solution for enhancing OCT image denoising technology.
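The speckle/equipment decomposition above corresponds to a common forward model in which speckle acts multiplicatively on the reflectivity while equipment (read-out) noise is additive. A toy sketch of such a forward model; the gamma-distributed speckle field and all parameter values are illustrative assumptions, not the paper's generator:

```python
import numpy as np

rng = np.random.default_rng(3)
clean = np.full((64, 64), 0.5)   # idealized constant OCT reflectivity

# Multiplicative speckle: often modeled as a unit-mean gamma-distributed
# field (fully developed speckle); shape controls speckle contrast.
speckle = rng.gamma(shape=4.0, scale=1.0 / 4.0, size=clean.shape)

# Additive equipment noise: zero-mean Gaussian read-out noise.
equipment = rng.normal(0.0, 0.02, size=clean.shape)

noisy = clean * speckle + equipment
```

Modeling the two components separately is what lets a generator synthesize realistic noisy/clean pairs from unpaired data: the multiplicative part scales with signal, the additive part does not.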
(This article belongs to the Section Biophotonics and Biomedical Optics)

16 pages, 6240 KiB  
Article
Enabling Low-Dose In Vivo Benchtop X-ray Fluorescence Computed Tomography through Deep-Learning-Based Denoising
by Naghmeh Mahmoodian, Mohammad Rezapourian, Asim Abdulsamad Inamdar, Kunal Kumar, Melanie Fachet and Christoph Hoeschen
J. Imaging 2024, 10(6), 127; https://doi.org/10.3390/jimaging10060127 - 22 May 2024
Abstract
X-ray Fluorescence Computed Tomography (XFCT) is an emerging non-invasive imaging technique providing high-resolution molecular-level data. However, increased sensitivity with current benchtop X-ray sources comes at the cost of high radiation exposure. Artificial Intelligence (AI), particularly deep learning (DL), has revolutionized medical imaging by delivering high-quality images in the presence of noise. In XFCT, traditional methods rely on complex algorithms for background noise reduction, but AI holds promise in addressing high-dose concerns. We present an optimized Swin-Conv-UNet (SCUNet) model for background noise reduction in X-ray fluorescence (XRF) images at low tracer concentrations. Our method’s effectiveness is evaluated against higher-dose images; while various denoising techniques exist for X-ray and computed tomography (CT) imaging, only a few address XFCT. The DL model is trained and assessed using augmented data, focusing on background noise reduction. Image quality is measured using the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM), comparing outcomes with 100% X-ray-dose images. Results demonstrate that the proposed algorithm yields high-quality images from low-dose inputs, with a maximum PSNR of 39.05 and SSIM of 0.86. The model outperforms block-matching and 3D filtering (BM3D), block-matching and 4D filtering (BM4D), non-local means (NLM), the denoising convolutional neural network (DnCNN), and the baseline SCUNet in both visual inspection and quantitative analysis, particularly in high-noise scenarios. This indicates the potential of AI, specifically the SCUNet model, to significantly improve XFCT imaging by mitigating the trade-off between sensitivity and radiation exposure.
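PSNR, the headline metric above, is defined directly from the mean squared error against a reference image. A minimal sketch with synthetic images (the noise levels are illustrative, not the paper's data):

```python
import numpy as np

def psnr(reference, test, data_range=1.0):
    # Peak signal-to-noise ratio in dB relative to the reference image:
    # 10 * log10(peak^2 / MSE). Higher is better.
    mse = np.mean((reference - test) ** 2)
    return 10 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(4)
ref = rng.random((64, 64))
low_noise = np.clip(ref + rng.normal(0, 0.05, ref.shape), 0, 1)
high_noise = np.clip(ref + rng.normal(0, 0.20, ref.shape), 0, 1)
```

With σ = 0.05 Gaussian noise on a unit-range image, PSNR lands around 26 dB; quadrupling the noise drops it well below that, which is the behavior a score like 39.05 dB is measuring against.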
(This article belongs to the Special Issue Recent Advances in X-ray Imaging)

14 pages, 1060 KiB  
Article
Practical Medical Image Generation with Provable Privacy Protection Based on Denoising Diffusion Probabilistic Models for High-Resolution Volumetric Images
by Hisaichi Shibata, Shouhei Hanaoka, Takahiro Nakao, Tomohiro Kikuchi, Yuta Nakamura, Yukihiro Nomura, Takeharu Yoshikawa and Osamu Abe
Appl. Sci. 2024, 14(8), 3489; https://doi.org/10.3390/app14083489 - 20 Apr 2024
Abstract
Local differential privacy algorithms combined with deep generative models can enhance secure medical image sharing among researchers in the public domain without central administrators; however, such approaches have so far been limited to generating low-resolution images, which are insufficient for diagnosis by medical doctors. To enhance the performance of deep generative models so that they can generate high-resolution medical images, we propose a large-scale diffusion model that can, for the first time, unconditionally generate high-resolution (256×256×256) volumetric medical images (head magnetic resonance images). This diffusion model has 19 billion parameters; to make it tractable to train, we temporally divided the model into 200 submodels, each of which has 95 million parameters. Moreover, on the basis of this new diffusion model, we propose another formulation of image anonymization with which the processed images can satisfy provable Gaussian local differential privacy and with which we can generate images semantically different from the original image but belonging to the same class. We believe that the formulation of this new diffusion model and the implementation of local differential privacy algorithms combined with diffusion models can contribute to the secure sharing of practical images upstream of data processing.
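The Gaussian differential privacy guarantee mentioned above is conventionally obtained with the Gaussian mechanism, which calibrates the noise scale to the query's sensitivity and the (ε, δ) budget. A minimal sketch of the classic calibration σ = √(2 ln(1.25/δ)) · Δ / ε (valid for ε < 1); the pixel block and budget values are illustrative, not the paper's formulation:

```python
import numpy as np

def gaussian_mechanism(value, sensitivity, eps, delta, rng):
    # Classic Gaussian mechanism: adds N(0, sigma^2) noise calibrated so
    # the released value satisfies (eps, delta)-differential privacy
    # (calibration valid for eps < 1).
    sigma = np.sqrt(2 * np.log(1.25 / delta)) * sensitivity / eps
    return value + rng.normal(0.0, sigma, np.shape(value)), sigma

rng = np.random.default_rng(5)
pixel_block = np.full((8, 8), 0.3)            # hypothetical image block
released, sigma = gaussian_mechanism(pixel_block, sensitivity=1.0,
                                     eps=0.5, delta=1e-5, rng=rng)
```

Smaller ε (stronger privacy) forces a larger σ, which is exactly why pairing the mechanism with a powerful generative model helps: the model can produce useful images despite heavy noise in the protected representation.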
(This article belongs to the Special Issue Artificial Intelligence in Biomedical Image Processing)

16 pages, 2832 KiB  
Article
Impact of Deep Learning Denoising Algorithm on Diffusion Tensor Imaging of the Growth Plate on Different Spatial Resolutions
by Laura Santos, Hao-Yun Hsu, Ronald R. Nelson, Brendan Sullivan, Jaemin Shin, Maggie Fung, Marc R. Lebel, Sachin Jambawalikar and Diego Jaramillo
Tomography 2024, 10(4), 504-519; https://doi.org/10.3390/tomography10040039 - 2 Apr 2024
Abstract
To assess the impact of a deep learning (DL) denoising reconstruction algorithm applied to identical patient scans acquired with two different voxel dimensions, representing distinct spatial resolutions, this IRB-approved prospective study was conducted at a tertiary pediatric center in compliance with the Health Insurance Portability and Accountability Act. A General Electric Signa Premier unit (GE Medical Systems, Milwaukee, WI) was employed to acquire two DTI (diffusion tensor imaging) sequences of the left knee on each child at 3T: an in-plane 2.0 × 2.0 mm2 acquisition with a section thickness of 3.0 mm, and a 2 mm3 isovolumetric voxel acquisition; neither had an intersection gap. For image acquisition, a multi-band DTI with a fat-suppressed single-shot spin-echo echo-planar sequence (20 non-collinear directions; b-values of 0 and 600 s/mm2) was utilized. The MR vendor provided a commercially available DL model, which was applied with 75% noise reduction settings to the same-subject DTI sequences at the two spatial resolutions. We compared DTI tract metrics from both DL-reconstructed scans and non-denoised scans for the femur and tibia at each spatial resolution. Differences were evaluated using the Wilcoxon signed-rank test and Bland–Altman plots. When comparing DL versus non-denoised diffusion metrics in the femur and tibia using the 2 mm × 2 mm × 3 mm voxel dimension, there were no significant differences in tract count (p = 0.1, p = 0.14), tract volume (p = 0.1, p = 0.29), or tibial tract length (p = 0.16); femur tract length exhibited a significant difference (p < 0.01). All diffusion metrics (tract count, volume, length, and fractional anisotropy (FA)) derived from the DL-reconstructed scans were significantly different from the non-denoised scan DTI metrics in both the femoral and tibial physes using the 2 mm3 voxel size (p < 0.001). DL reconstruction resulted in a significant decrease in femorotibial FA for both voxel dimensions (p < 0.01). Leveraging denoising algorithms could address the drawbacks of the lower signal-to-noise ratios (SNRs) associated with smaller voxel volumes and capitalize on their better spatial resolution, allowing for more accurate quantification of diffusion metrics.
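The FA metric reported above has a standard closed form from the three diffusion-tensor eigenvalues: FA = √(3/2) · √(Σ(λᵢ − λ̄)²) / √(Σλᵢ²), ranging from 0 (isotropic diffusion) toward 1 (diffusion along a single axis). A small sketch with illustrative eigenvalues, not values from this study:

```python
import numpy as np

def fractional_anisotropy(evals):
    # FA from the three diffusion-tensor eigenvalues: 0 for isotropic
    # diffusion, approaching 1 for diffusion along a single direction.
    lam = np.asarray(evals, dtype=float)
    md = lam.mean()                      # mean diffusivity
    return np.sqrt(1.5 * ((lam - md) ** 2).sum() / (lam ** 2).sum())

iso = fractional_anisotropy([1.0, 1.0, 1.0])            # isotropic voxel
fiber = fractional_anisotropy([1.7e-3, 0.2e-3, 0.2e-3]) # fiber-like voxel
```

Because FA is a ratio of eigenvalue dispersion to eigenvalue magnitude, denoising that smooths the tensor field tends to pull eigenvalues together and can lower FA, consistent with the decrease the study observed.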
(This article belongs to the Topic AI in Medical Imaging and Image Processing)

15 pages, 13016 KiB  
Article
Image Processing Techniques for Improving Quality of 3D Profile in Digital Holographic Microscopy Using Deep Learning Algorithm
by Hyun-Woo Kim, Myungjin Cho and Min-Chul Lee
Sensors 2024, 24(6), 1950; https://doi.org/10.3390/s24061950 - 19 Mar 2024
Abstract
Digital Holographic Microscopy (DHM) is a 3D imaging technology widely applied in biology, microelectronics, and medical research. However, the noise generated during the 3D imaging process can affect the accuracy of medical diagnoses. To solve this problem, we previously proposed several frequency-domain filtering algorithms. Those filtering algorithms share a limitation: they can only be applied when the distance between the direct-current (DC) spectrum and the sidebands is sufficiently large. To address this limitation, the HiVA algorithm and a deep learning algorithm, which filter effectively by distinguishing between noise and the detailed information of the object, are used to enable filtering regardless of the distance between the DC spectrum and the sidebands. In this paper, a combination of deep learning technology and traditional image processing methods is proposed, aiming to reduce noise in 3D profile imaging using the Improved Denoising Diffusion Probabilistic Models (IDDPM) algorithm.
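The DC-spectrum/sideband separation underlying this line of work can be seen in a toy off-axis hologram: in frequency space, the zero-order (DC) term sits at the center while the carrier fringe produces sidebands away from it, so masking the center leaves only the object-bearing sidebands. The carrier frequency and mask size below are illustrative:

```python
import numpy as np

# Toy hologram: a constant DC background plus an off-axis carrier fringe.
n = 128
y, x = np.mgrid[0:n, 0:n]
dc = 2.0 * np.ones((n, n))                 # zero-order (DC) term
fringe = np.cos(2 * np.pi * 20 * x / n)    # carrier at 20 cycles/frame
holo = dc + fringe

spec = np.fft.fftshift(np.fft.fft2(holo))  # DC moved to the center
mag = np.abs(spec)

# Suppress the DC spectrum: zero a small square around the center,
# keeping the sidebands that carry the fringe (object) information.
c = n // 2
filtered = spec.copy()
filtered[c - 5:c + 6, c - 5:c + 6] = 0
recovered = np.fft.ifft2(np.fft.ifftshift(filtered)).real
```

When the sidebands sit close to the DC term, no mask size cleanly separates them, which is the failure mode that motivates the HiVA and learning-based approaches above.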
(This article belongs to the Special Issue Digital Holography Imaging Techniques and Applications Using Sensors)

21 pages, 12763 KiB  
Article
Research and Implementation of Denoising Algorithm for Brain MRIs via Morphological Component Analysis and Adaptive Threshold Estimation
by Buhailiqiemu Awudong, Paerhati Yakupu, Jingwen Yan and Qi Li
Mathematics 2024, 12(5), 748; https://doi.org/10.3390/math12050748 - 1 Mar 2024
Abstract
The noise inevitably generated during the acquisition and transmission of MRIs seriously affects the reliability and accuracy of medical research and diagnosis. Existing methods do not denoise Rician noise, whose distribution is related to the MR image signal, well enough. Furthermore, the brain has a complex texture structure and small density differences between its parts, which imposes higher quality requirements on brain MR images. To improve the reliability and accuracy of brain MRI application and analysis, we designed a new, dedicated denoising algorithm (named VST–MCAATE) based on the inherent characteristics of brain MRIs. Comparative experiments were performed on the same simulated and real brain MR datasets. The peak signal-to-noise ratio (PSNR) and mean structural similarity index measure (MSSIM) were used as objective image quality metrics, and one-way ANOVA was used to compare the denoising effects of the different approaches, with p < 0.01 considered statistically significant. The experimental results show that the PSNR and MSSIM values of VST–MCAATE are significantly higher than those of state-of-the-art methods (p < 0.01), and that its residual images contain no anatomical structure. The proposed denoising method improves the quality of brain MRIs, effectively removing noise over a wide range of unknown noise levels without damaging texture details, and has potential clinical promise.

24 pages, 69336 KiB  
Article
A Non-Convex Fractional-Order Differential Equation for Medical Image Restoration
by Chenwei Li and Donghong Zhao
Symmetry 2024, 16(3), 258; https://doi.org/10.3390/sym16030258 - 20 Feb 2024
Abstract
We propose a new non-convex fractional-order Weber multiplicative denoising variational functional, which leads to a new fractional-order differential equation, and prove the existence of a unique solution to this equation. Furthermore, the model is solved using the partial differential equation (PDE) method and the alternating direction method of multipliers (ADMM) to verify the theoretical results. The proposed model is tested on symmetric and asymmetric medical computed tomography (CT) images, and the experimental results show that the combination of the fractional-order differential equation and the Weber function performs better in medical image restoration than the traditional model.
(This article belongs to the Special Issue Image Processing and Symmetry: Topics and Applications)

19 pages, 4254 KiB  
Article
Enhancing Knee MR Image Clarity through Image Domain Super-Resolution Reconstruction
by Vishal Patel, Alan Wang, Andrew Paul Monk and Marco Tien-Yueh Schneider
Bioengineering 2024, 11(2), 186; https://doi.org/10.3390/bioengineering11020186 - 15 Feb 2024
Abstract
This study introduces a hybrid analytical super-resolution (SR) pipeline aimed at enhancing the resolution of medical magnetic resonance imaging (MRI) scans. The primary objective is to overcome the limitations of clinical MRI resolution without the need for additional expensive hardware. The proposed pipeline involves three key steps: pre-processing to re-slice and register the image stacks; SR reconstruction to combine information from three orthogonal image stacks to generate a high-resolution image stack; and post-processing using an artefact reduction convolutional neural network (ARCNN) to reduce the block artefacts introduced during SR reconstruction. The workflow was validated on a dataset of six knee MRIs obtained at high resolution using various sequences. Quantitative analysis of the method revealed promising results, showing an average mean error of 1.40 ± 2.22% in voxel intensities between the SR denoised images and the original high-resolution images. Qualitatively, the method improved out-of-plane resolution while preserving in-plane image quality. The hybrid SR pipeline also displayed robustness across different MRI sequences, demonstrating potential for clinical application in orthopaedics and beyond. Although computationally intensive, this method offers a viable alternative to costly hardware upgrades and holds promise for improving diagnostic accuracy and generating more anatomically accurate models of the human body.
(This article belongs to the Special Issue Recent Progress in Biomedical Image Processing)

26 pages, 1424 KiB  
Review
Biomedical Image Segmentation Using Denoising Diffusion Probabilistic Models: A Comprehensive Review and Analysis
by Zengxin Liu, Caiwen Ma, Wenji She and Meilin Xie
Appl. Sci. 2024, 14(2), 632; https://doi.org/10.3390/app14020632 - 11 Jan 2024
Abstract
Biomedical image segmentation plays a pivotal role in medical imaging, facilitating the precise identification and delineation of anatomical structures and abnormalities. This review explores the application of the Denoising Diffusion Probabilistic Model (DDPM) to biomedical image segmentation. DDPM, a probabilistic generative model, has demonstrated promise in capturing complex data distributions and reducing noise in various domains. In this context, the review provides an in-depth examination of the present status, obstacles, and future prospects of biomedical image segmentation techniques. It addresses challenges associated with uncertainty and variability in imaging data, analyzing commonalities among probabilistic methods. The paper concludes with insights into the potential impact of DDPM on advancing medical imaging techniques and fostering reliable segmentation results in clinical applications. This comprehensive review aims to provide researchers, practitioners, and healthcare professionals with a nuanced understanding of the current state, challenges, and future prospects of utilizing DDPM for biomedical image segmentation.
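The DDPM at the center of this review rests on a fixed forward process that gradually noises data: q(x_t | x_0) = N(√ᾱ_t · x_0, (1 − ᾱ_t)I), where ᾱ_t is the cumulative product of (1 − β_t); a network is then trained to reverse it. A minimal sketch of the forward process on a toy image, using the standard linear β schedule (the toy "mask" is illustrative):

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    # DDPM closed-form forward process: sample x_t directly from x_0 via
    # x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps.
    alpha_bar = np.prod(1.0 - betas[:t + 1])
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1 - alpha_bar) * eps, alpha_bar

rng = np.random.default_rng(6)
betas = np.linspace(1e-4, 0.02, 1000)   # standard linear noise schedule
x0 = np.ones((16, 16))                  # toy "segmentation mask"
x_mid, ab_mid = forward_diffuse(x0, 500, betas, rng)
x_end, ab_end = forward_diffuse(x0, 999, betas, rng)
```

By the final step ᾱ_t is nearly zero, so x_t is essentially pure Gaussian noise; segmentation-oriented DDPMs condition the learned reverse process on the image so that denoising from that noise yields a mask.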
(This article belongs to the Special Issue Advance in Deep Learning-Based Medical Image Analysis)

16 pages, 37035 KiB  
Article
A Method for Visualization of Images by Photon-Counting Imaging Only Object Locations under Photon-Starved Conditions
by Jin-Ung Ha, Hyun-Woo Kim, Myungjin Cho and Min-Chul Lee
Electronics 2024, 13(1), 38; https://doi.org/10.3390/electronics13010038 - 20 Dec 2023
Abstract
Recently, many researchers have been studying the visualization of images and the recognition of objects by estimating photons under photon-starved conditions. Conventional photon-counting imaging techniques estimate photons with a statistical method, applying a Poisson distribution over the entire image area. However, the Poisson distribution is temporally and spatially independent, so the reconstructed image has random noise in the background. Random noise in the background may degrade the quality of the image and make it difficult to accurately recognize objects. Therefore, in this paper, we apply photon-counting imaging only to the area where the object is located, eliminating the background noise. As a result, the image quality of the proposed method is better than that of the conventional method, and the object recognition rate is also higher. Optical experiments were conducted to prove the denoising performance of the proposed method. In addition, we used the structural similarity index measure (SSIM) as a performance metric, and applied the YOLOv5 model to check the object recognition rate. Finally, the proposed method is expected to accelerate the development of astrophotography and medical imaging technologies.
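The Poisson photon-counting model above treats the expected photon count at each pixel as proportional to the normalized scene intensity, with the observed counts drawn from a Poisson law. A toy sketch of that model (the 3×3 scene and photon budget are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
scene = np.array([[0.02, 0.02, 0.02],
                  [0.02, 0.90, 0.02],
                  [0.02, 0.02, 0.02]])   # bright "object" pixel at center

# Photon-counting model: expected photons per pixel are proportional to
# normalized intensity; observed counts follow a Poisson distribution.
n_photons = 200
lam = n_photons * scene / scene.sum()
counts = rng.poisson(lam)

# Averaging repeated exposures converges to the expected photon map.
estimate = np.mean([rng.poisson(lam) for _ in range(2000)], axis=0)
```

Because Poisson draws are independent per pixel, even near-zero background pixels occasionally register counts, which is the background noise the paper removes by restricting the estimation to the object region.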

18 pages, 2924 KiB  
Article
Enhancing Medical Image Denoising with Innovative Teacher–Student Model-Based Approaches for Precision Diagnostics
by Shakhnoza Muksimova, Sabina Umirzakova, Sevara Mardieva and Young-Im Cho
Sensors 2023, 23(23), 9502; https://doi.org/10.3390/s23239502 - 29 Nov 2023
Abstract
The realm of medical imaging is a critical frontier in precision diagnostics, where the clarity of the image is paramount. Despite advancements in imaging technology, noise remains a pervasive challenge that can obscure crucial details and impede accurate diagnoses. Addressing this, we introduce a novel teacher–student network model that leverages the potency of our bespoke NoiseContextNet Block to discern and mitigate noise with unprecedented precision. This innovation is coupled with an iterative pruning technique aimed at refining the model for heightened computational efficiency without compromising the fidelity of denoising. We substantiate the superiority and effectiveness of our approach through a comprehensive suite of experiments, showcasing significant qualitative enhancements across a multitude of medical imaging modalities. The visual results from a vast array of tests firmly establish our method’s dominance in producing clearer, more reliable images for diagnostic purposes, thereby setting a new benchmark in medical image denoising.

25 pages, 3633 KiB  
Article
BlobCUT: A Contrastive Learning Method to Support Small Blob Detection in Medical Imaging
by Teng Li, Yanzhe Xu, Teresa Wu, Jennifer R. Charlton, Kevin M. Bennett and Firas Al-Hindawi
Bioengineering 2023, 10(12), 1372; https://doi.org/10.3390/bioengineering10121372 - 29 Nov 2023
Abstract
Medical imaging-based biomarkers derived from small objects (e.g., cell nuclei) play a crucial role in medical applications. However, detecting and segmenting small objects (a.k.a. blobs) remains a challenging task. In this research, we propose a novel 3D small blob detector called BlobCUT. BlobCUT is an unpaired image-to-image (I2I) translation model that falls under the Contrastive Unpaired Translation paradigm. It employs a blob synthesis module to generate synthetic 3D blobs with corresponding masks. This is incorporated into the iterative model training as the ground truth. The I2I translation process is designed with two constraints: (1) a convexity consistency constraint that relies on Hessian analysis to preserve the geometric properties and (2) an intensity distribution consistency constraint based on Kullback-Leibler divergence to preserve the intensity distribution of blobs. BlobCUT learns the inherent noise distribution from the target noisy blob images and performs image translation from the noisy domain to the clean domain, effectively functioning as a denoising process to support blob identification. To validate the performance of BlobCUT, we evaluate it on a 3D simulated dataset of blobs and a 3D MRI dataset of mouse kidneys. We conduct a comparative analysis involving six state-of-the-art methods. Our findings reveal that BlobCUT exhibits superior performance and training efficiency, utilizing only 56.6% of the training time required by the state-of-the-art BlobDetGAN. This underscores the effectiveness of BlobCUT in accurately segmenting small blobs while achieving notable gains in training efficiency.
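The intensity distribution consistency constraint above penalizes the Kullback-Leibler divergence between intensity histograms before and after translation. A minimal sketch of discrete KL divergence on synthetic blob intensities; the Gaussian intensity populations and bin count are illustrative, not BlobCUT's loss:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    # Discrete Kullback-Leibler divergence D(p || q) between two
    # normalized histograms; eps guards against log(0) on empty bins.
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float((p * np.log(p / q)).sum())

rng = np.random.default_rng(8)
blob = rng.normal(0.7, 0.05, 5000)             # original blob intensities
translated_good = rng.normal(0.7, 0.05, 5000)  # distribution preserved
translated_bad = rng.normal(0.4, 0.05, 5000)   # distribution shifted

bins = np.linspace(0, 1, 33)
hist = lambda x: np.histogram(x, bins=bins)[0]
good = kl_divergence(hist(blob), hist(translated_good))
bad = kl_divergence(hist(blob), hist(translated_bad))
```

A translation that preserves the blob intensity distribution yields a near-zero divergence, while a shifted distribution is penalized heavily, steering the translator to denoise without altering blob brightness.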
(This article belongs to the Section Biosignal Processing)
