Search Results (26,291)

Search Parameters:
Keywords = image processing

13 pages, 1913 KiB  
Article
Soft Contact Lens Engraving Characterization by Wavefront Holoscopy
by Rosa Vila-Andrés, José J. Esteve-Taboada and Vicente Micó
Sensors 2024, 24(11), 3492; https://doi.org/10.3390/s24113492 - 28 May 2024
Abstract
Permanent engravings on contact lenses provide information about the manufacturing process and lens positioning when they are placed on the eye. The inspection of their morphological characteristics is important, since they can affect the user’s comfort and deposit adhesion. Therefore, an inverted wavefront holoscope (a lensless microscope based on Gabor’s principle of in-line digital holography) is explored for the characterization of the permanent marks of soft contact lenses. The device, based on an in-line transmission configuration, uses a partially coherent laser source to illuminate the soft contact lens placed in a cuvette filled with a saline solution for lens preservation. Holograms were recorded on a digital sensor and reconstructed by back propagation to the image plane based on the angular spectrum method. In addition, a phase-retrieval algorithm was used to enhance the quality of the recovered images. The instrument was experimentally validated through a calibration process in terms of spatial resolution and thickness estimation, showing values that perfectly agree with those that were theoretically expected. Finally, phase maps of different engravings for three commercial soft contact lenses were successfully reconstructed, validating the inverted wavefront holoscope as a potential instrument for the characterization of the permanent marks of soft contact lenses. To improve the final image quality of reconstructions, the geometry of lenses should be considered to avoid induced aberration effects. Full article
(This article belongs to the Special Issue Digital Holography Imaging Techniques and Applications Using Sensors)
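The back-propagation to the image plane via the angular spectrum method, as described in this abstract, can be sketched in a few lines of NumPy. This is a generic textbook implementation, not the authors' code, and the wavelength, pixel pitch, and distance values in the example are illustrative only.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex optical field a distance z using the
    angular spectrum method: FFT, multiply by the free-space transfer
    function, inverse FFT. Evanescent components are suppressed."""
    n, m = field.shape
    fx = np.fft.fftfreq(m, d=dx)            # spatial frequencies along x
    fy = np.fft.fftfreq(n, d=dx)            # spatial frequencies along y
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    # transfer function for propagating waves only (arg > 0)
    H = np.where(arg > 0,
                 np.exp(2j * np.pi * z / wavelength * np.sqrt(np.maximum(arg, 0.0))),
                 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Reconstruction from an in-line hologram corresponds to propagating the recorded field back by the recording distance, i.e., calling the function with a negative z.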
Show Figures

Figure 1

37 pages, 33980 KiB  
Article
“Codex 4D” Project: Interdisciplinary Investigations on Materials and Colors of De Balneis Puteolanis (Angelica Library, Rome, Ms. 1474)
by Eva Pietroni, Alessandra Botteon, David Buti, Alessandra Chirivì, Chiara Colombo, Claudia Conti, Anna Letizia Di Carlo, Donata Magrini, Fulvio Mercuri, Noemi Orazi and Marco Realini
Heritage 2024, 7(6), 2755-2791; https://doi.org/10.3390/heritage7060131 - 28 May 2024
Abstract
This paper sheds light on the manufacturing processes, techniques, and materials used in the splendid illuminations of the oldest surviving copy of De Balneis Puteolanis, preserved at the Angelica Library in Rome (Ms. 1474). The codex is one of the masterpieces of mid-13th-century southern Italian illumination, traditionally attributed to the commission of Manfredi, son of Frederick II. The findings reported in the article result from the interdisciplinary study conducted in 2021–2023 in the framework of “Codex 4D: journey in four dimensions into the manuscript”, a multidisciplinary project involving many areas of expertise and dealing with art-historical studies on manuscripts, diagnostic and conservation analyses, scientific dissemination, storytelling, and public engagement. The considerations we present aim to increase the knowledge of book artefacts while respecting their extraordinary complexity; data from non-invasive diagnostic investigations (X-ray fluorescence, Vis-NIR reflectance and Raman spectroscopies, hyperspectral imaging, and multi-band imaging techniques such as ultraviolet imaging, reflectography, and thermography), carried out in situ with portable instruments on the book, have been integrated with observations from the historical-artistic study and the reading of ancient treatises on the production and use of the pigments and dyes employed in illumination. Full article

50 pages, 6023 KiB  
Article
Carbon Dioxide Capture and Storage (CCS) in Saline Aquifers versus Depleted Gas Fields
by Richard H. Worden
Geosciences 2024, 14(6), 146; https://doi.org/10.3390/geosciences14060146 - 28 May 2024
Abstract
Saline aquifers have been used for CO2 storage as a dedicated greenhouse gas mitigation strategy since 1996. Depleted gas fields are now being planned for large-scale CCS projects. Although basalt host reservoirs are also going to be used, saline aquifers and depleted gas fields will make up most of the global geological repositories for CO2. At present, depleted gas fields and saline aquifers seem to be treated as if they are a single entity, but they have distinct differences that are examined here. Depleted gas fields have far more pre-existing information about the reservoir, top-seal caprock, internal architecture of the site, and about fluid flow properties than saline aquifers due to the long history of hydrocarbon project development and fluid production. The fluid pressure evolution paths for saline aquifers and depleted gas fields are distinctly different because, unlike saline aquifers, depleted gas fields are likely to be below hydrostatic pressure before CO2 injection commences. Depressurised depleted gas fields may require an initial injection of gas-phase CO2 instead of dense-phase CO2 typical of saline aquifers, but the greater pressure difference may allow higher initial injection rates in depleted gas fields than saline aquifers. Depressurised depleted gas fields may lead to CO2-injection-related stress paths that are distinct from saline aquifers depending on the geomechanical properties of the reservoir. CO2 trapping in saline aquifers will be dominated by buoyancy processes with residual CO2 and dissolved CO2 developing over time whereas depleted gas fields will be dominated by a sinking body of CO2 that forms a cushion below the remaining methane. Saline aquifers tend to have a relatively limited ability to fill pores with CO2 (i.e., low storage efficiency factors between 2 and 20%) as the injected CO2 is controlled by buoyancy and viscosity differences with the saline brine. 
In contrast, depleted gas fields may have storage efficiency factors up to 80% as the reservoir will contain sub-hydrostatic pressure methane that is easy to displace. Saline aquifers have a greater risk of halite-scale and minor dissolution of reservoir minerals than depleted gas fields as the former contain vastly more of the aqueous medium needed for such processes compared to the latter. Depleted gas fields have some different leakage risks than saline aquifers mostly related to the different fluid pressure histories, depressurisation-related alteration of geomechanical properties, and the greater number of wells typical of depleted gas fields than saline aquifers. Depleted gas fields and saline aquifers also have some different monitoring opportunities. The high-density, electrically conductive brine replaced by CO2 in saline aquifers permits seismic and resistivity imaging, but these forms of imaging are less feasible in depleted gas fields. Monitoring boreholes are less likely to be used in saline aquifers than depleted gas fields as the latter typically have numerous pre-existing exploration and production well penetrations. The significance of this analysis is that saline aquifers and depleted gas fields must be treated differently although the ultimate objective is the same: to permanently store CO2 to mitigate greenhouse gas emissions and minimise global heating. Full article

19 pages, 2674 KiB  
Article
Smartphone-Based Rapid Quantitative Detection Platform with Imprinted Polymer for Pb (II) Detection in Real Samples
by Flor de Liss Meza López, Christian Jacinto Hernández, Jaime Vega-Chacón, Juan C. Tuesta, Gino Picasso, Sabir Khan, María D. P. T. Sotomayor and Rosario López
Polymers 2024, 16(11), 1523; https://doi.org/10.3390/polym16111523 - 28 May 2024
Abstract
This paper reports the successful development and application of an efficient method for quantifying Pb2+ in aqueous samples using a smartphone-based colorimetric device with an imprinted polymer (IIP). The IIP was synthesized by modifying a previously reported procedure, using rhodizonate, 2-acrylamido-2-methylpropane sulfonic acid (AMPS), N,N′-methylenebisacrylamide (MBA), and potassium persulfate (KPS). The polymers were then characterized. An absorption study was performed to determine the optimal conditions for processing with the smartphone-based colorimetric device. The device consists of a black box (10 × 10 × 10 cm) designed to ensure repeatability of the image acquisition. The methodology involved using a smartphone camera to capture images of the IIP previously exposed to Pb2+ solutions of various concentrations, and color channel values were calculated (RGB, YMK, HSVI). PLS multivariate regression was performed, and the optimum working range (0–10 mg L−1) was determined using seven principal components, with a detection limit (LOD) of 0.215 mg L−1 and R2 = 0.998. The applicability of the colorimetric sensor to real samples showed a coefficient of variation (% RSD) of less than 9%, with inductively coupled plasma mass spectrometry (ICP-MS) applied as the reference method. These results confirmed that the quantitative smartphone-based colorimetric sensor is a suitable analytical tool for reliable on-site Pb2+ monitoring. Full article
(This article belongs to the Special Issue Molecularly Imprinted Polymers: Latest Advances and Applications)
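The calibration step here (color-channel values regressed against concentration) can be illustrated with a deliberately simplified sketch: an ordinary least-squares fit standing in for the article's PLS multivariate regression. The function names and synthetic data are hypothetical.

```python
import numpy as np

def fit_calibration(features, concentrations):
    """Least-squares linear calibration: concentration ~ features @ coef.
    A simplified stand-in for the article's PLS regression; a bias
    column is appended so an intercept is fitted as well."""
    X = np.hstack([features, np.ones((features.shape[0], 1))])
    coef, *_ = np.linalg.lstsq(X, concentrations, rcond=None)
    return coef

def predict_concentration(features, coef):
    """Apply a fitted calibration to new color-channel feature rows."""
    X = np.hstack([features, np.ones((features.shape[0], 1))])
    return X @ coef
```

In practice one would use a proper PLS implementation with cross-validated component selection, as the abstract describes (seven components), rather than plain least squares.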
19 pages, 39124 KiB  
Article
Towards Generating Authentic Human-Removed Pictures in Crowded Places Using a Few-Second Video
by Juhwan Lee, Euihyeok Lee and Seungwoo Kang
Sensors 2024, 24(11), 3486; https://doi.org/10.3390/s24113486 - 28 May 2024
Abstract
If we visit famous and iconic landmarks, we may want to take a photo of them. However, such sites are usually crowded, and taking photos of the landmarks alone, without people, can be challenging. This paper aims to automatically remove the people in a picture and produce a natural image of the landmark alone. To this end, it presents Thanos, a system to generate authentic human-removed images in crowded places. It is designed to produce high-quality images at reasonable computation cost using short video clips of a few seconds. For this purpose, a multi-frame-based recovery region minimization method is proposed. The key idea is to aggregate information partially available from multiple image frames to minimize the area to be restored. The evaluation shows that the proposed method outperforms the alternatives; it achieves lower Fréchet Inception Distance (FID) scores with comparable processing latency. The images produced by Thanos also achieve a lower FID score than those of existing applications; Thanos’s score is 242.8, while those of Retouch-photos and Samsung object eraser are 249.4 and 271.2, respectively. Full article
(This article belongs to the Special Issue AI-Driven Sensing for Image Processing and Recognition)
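A common baseline for the multi-frame idea described here (the paper's recovery-region minimization method is more sophisticated) is a per-pixel temporal median over the frame stack, which recovers the static background wherever moving people occlude a pixel in only a minority of frames:

```python
import numpy as np

def remove_transients(frames):
    """Per-pixel temporal median over a short stack of aligned frames.
    A pixel occluded by a moving person in fewer than half the frames
    falls back to the static background value."""
    stack = np.stack(frames, axis=0)          # shape (T, H, W[, C])
    return np.median(stack, axis=0).astype(stack.dtype)
```

This assumes the frames are already aligned (e.g., shot from a tripod or stabilized); misalignment is one reason more elaborate methods like the one in the paper are needed.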

28 pages, 12383 KiB  
Article
Greedy Ensemble Hyperspectral Anomaly Detection
by Mazharul Hossain, Mohammed Younis, Aaron Robinson, Lan Wang and Chrysanthe Preza
J. Imaging 2024, 10(6), 131; https://doi.org/10.3390/jimaging10060131 - 28 May 2024
Abstract
Hyperspectral images include information from a wide range of spectral bands deemed valuable for computer vision applications in various domains such as agriculture, surveillance, and reconnaissance. Anomaly detection in hyperspectral images has proven to be a crucial component of change and abnormality identification, enabling improved decision-making across various applications. These abnormalities/anomalies can be detected using background estimation techniques that do not require the prior knowledge of outliers. However, each hyperspectral anomaly detection (HS-AD) algorithm models the background differently. These different assumptions may fail to consider all the background constraints in various scenarios. We have developed a new approach called Greedy Ensemble Anomaly Detection (GE-AD) to address this shortcoming. It includes a greedy search algorithm to systematically determine the suitable base models from HS-AD algorithms and hyperspectral unmixing for the first stage of a stacking ensemble and employs a supervised classifier in the second stage of a stacking ensemble. It helps researchers with limited knowledge of the suitability of the HS-AD algorithms for the application scenarios to select the best methods automatically. Our evaluation shows that the proposed method achieves a higher average F1-macro score with statistical significance compared to the other individual methods used in the ensemble. This is validated on multiple datasets, including the Airport–Beach–Urban (ABU) dataset, the San Diego dataset, the Salinas dataset, the Hydice Urban dataset, and the Arizona dataset. The evaluation using the airport scenes from the ABU dataset shows that GE-AD achieves a 14.97% higher average F1-macro score than our previous method (HUE-AD), at least 17.19% higher than the individual methods used in the ensemble, and at least 28.53% higher than the other state-of-the-art ensemble anomaly detection algorithms. 
As the combination of a greedy algorithm and a stacking ensemble to automatically select suitable base models and associated weights has not been widely explored in hyperspectral anomaly detection, we believe that our work will expand knowledge in this research area and contribute to the wider application of this approach. Full article
(This article belongs to the Section Computer Vision and Pattern Recognition)
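The first-stage greedy search described above, which selects base models for the stacking ensemble, can be sketched generically as forward selection driven by a validation score. Here `score_fn` is a hypothetical callback standing in for training and scoring a candidate ensemble (e.g., returning its validation F1-macro score):

```python
def greedy_select(candidates, score_fn):
    """Greedy forward selection: repeatedly add the candidate whose
    inclusion most improves the ensemble score; stop as soon as no
    remaining candidate improves on the best score so far."""
    selected, best = [], float("-inf")
    remaining = list(candidates)
    while remaining:
        scored = [(score_fn(selected + [c]), c) for c in remaining]
        top_score, top = max(scored, key=lambda t: t[0])
        if top_score <= best:
            break                      # no candidate helps; stop early
        selected.append(top)
        remaining.remove(top)
        best = top_score
    return selected, best
```

Greedy selection is not guaranteed to find the globally optimal subset, but it keeps the number of ensembles trained linear-times-quadratic in the candidate count rather than exponential.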

44 pages, 4162 KiB  
Review
Object Tracking Using Computer Vision: A Review
by Pushkar Kadam, Gu Fang and Ju Jia Zou
Computers 2024, 13(6), 136; https://doi.org/10.3390/computers13060136 - 28 May 2024
Abstract
Object tracking is one of the most important problems in computer vision applications such as robotics, autonomous driving, and pedestrian movement. There have been significant developments in camera hardware, and researchers are experimenting with the fusion of different sensors and developing image processing algorithms to track objects. Image processing and deep learning methods have progressed significantly in the last few decades. Different data association methods accompanied by image processing and deep learning are becoming crucial in object tracking tasks. The data requirements of deep learning methods have led to different public datasets that allow researchers to benchmark their methods. While there has been improvement in object tracking methods, technology, and the availability of annotated object tracking datasets, there is still scope for improvement. This review contributes by systematically identifying different sensor equipment, datasets, methods, and applications, providing a taxonomy of the literature and of the strengths and limitations of different approaches, thereby providing guidelines for selecting equipment, methods, and applications. Research questions and future scope to address the unresolved issues in the object tracking field are also presented, along with research direction guidelines. Full article
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision)

16 pages, 1657 KiB  
Article
Modeling of Ethiopian Beef Meat Marbling Score Using Image Processing for Rapid Meat Grading
by Tariku Erena, Abera Belay, Demelash Hailu, Bezuayehu Gutema Asefa, Mulatu Geleta and Tesfaye Deme
J. Imaging 2024, 10(6), 130; https://doi.org/10.3390/jimaging10060130 - 28 May 2024
Abstract
Meat characterized by a high marbling value is typically anticipated to display enhanced sensory attributes. This study aimed to predict the marbling scores of rib-eye steaks sourced from the Longissimus dorsi muscle of different cattle types, namely Boran, Senga, and Sheko, by employing digital image processing and machine-learning algorithms. Marbling was analyzed using digital image processing coupled with an extreme gradient boosting (GBoost) machine learning algorithm. Meat texture was assessed using a universal texture analyzer. Sensory characteristics of the beef were evaluated through quantitative descriptive analysis with a trained panel of twenty. Using selected image features from digital image processing, the marbling score was predicted with R2 (prediction) = 0.83. Boran cattle had the highest fat content in sirloin and chuck cuts (12.68% and 12.40%, respectively), followed by Senga (11.59% and 11.56%) and Sheko (11.40% and 11.17%). Tenderness scores for sirloin and chuck cuts differed among the three breeds: Boran (7.06 ± 2.75 and 3.81 ± 2.24 N·mm, respectively), Senga (5.54 ± 1.90 and 5.25 ± 2.47 N·mm), and Sheko (5.43 ± 2.76 and 6.33 ± 2.28 N·mm). Sheko and Senga had similar sensory attributes. Marbling scores were higher in Boran (4.28 ± 1.43 and 3.68 ± 1.21) and Senga (2.88 ± 0.69 and 2.83 ± 0.98) than in Sheko (2.73 ± 1.28 and 2.90 ± 1.52). The study achieved a remarkable milestone in developing a digital tool for predicting the marbling scores of Ethiopian beef breeds. Furthermore, the relationship between quality attributes and beef marbling score was verified. After further validation, the output of this research can be utilized in the meat industry and by quality control authorities. Full article
(This article belongs to the Section Image and Video Processing)
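As a minimal illustration of the kind of image feature such a marbling pipeline might extract (the article's selected features and its GBoost model are not reproduced here), one can threshold a grayscale rib-eye image and measure the bright-pixel fraction, a crude proxy for intramuscular fat:

```python
import numpy as np

def fat_fraction(gray, threshold=200):
    """Fraction of pixels brighter than `threshold` in a grayscale
    rib-eye image, taken as candidate marbling fat. The threshold is
    illustrative; a real pipeline would calibrate it per imaging setup."""
    mask = np.asarray(gray) > threshold
    return mask.mean()
```

Features like this, together with texture statistics, would then be fed to the regression model to predict the marbling score.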

14 pages, 9006 KiB  
Article
Taoism-Net: A Fruit Tree Segmentation Model Based on Minimalism Design for UAV Camera
by Yanheng Mai, Jiaqi Zheng, Zefeng Luo, Chaoran Yu, Jianqiang Lu, Caili Yu, Zuanhui Lin and Zhongliang Liao
Agronomy 2024, 14(6), 1155; https://doi.org/10.3390/agronomy14061155 - 28 May 2024
Abstract
The development of precision agriculture requires unmanned aerial vehicles (UAVs) to collect diverse data, such as RGB images, 3D point clouds, and hyperspectral images. Recently, convolutional networks have made remarkable progress in downstream visual tasks, while often disregarding the trade-off between accuracy and speed in UAV-based segmentation tasks. This study aims to provide further valuable insights using an efficient model named Taoism-Net. The findings include the following: (1) Prescription maps for agricultural UAVs require precise pixel-level segmentation, yet many existing models focus solely on accuracy at the expense of real-time processing capability and cannot satisfy the demands of practical tasks. (2) Taoism-Net is a refreshingly minimalist segmentation model that overcomes the complexity common in deep learning designs and is used to generate prescription maps through pixel-level classification mapped to geodetic coordinates (a lychee tree aerial dataset from Guangdong is used for the experiments). (3) Compared with mainstream lightweight models and mature segmentation algorithms, Taoism-Net achieves significant improvements, including an improvement of at least 4.8% in mIoU, and shows superior performance on the accuracy–latency curve. (4) “The greatest truths are concise” is a saying widely spread by ancient Taoism, indicating that the most fundamental approach is reflected through the utmost minimalism; Taoism-Net aims to build a bridge between academic research and industrial deployment, for example, for UAVs in precision agriculture. Full article
(This article belongs to the Special Issue New Trends in Agricultural UAV Application)
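The mIoU metric used in the comparison above can be computed from predicted and ground-truth label maps as follows; this is the standard definition, not code from the paper:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union across classes for integer label
    maps. Classes absent from both prediction and target are skipped
    so they do not distort the average."""
    ious = []
    for c in range(num_classes):
        p, t = (pred == c), (target == c)
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))
```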

14 pages, 12366 KiB  
Article
Enhancing Accuracy in Breast Density Assessment Using Deep Learning: A Multicentric, Multi-Reader Study
by Marek Biroš, Daniel Kvak, Jakub Dandár, Robert Hrubý, Eva Janů, Anora Atakhanova and Mugahed A. Al-antari
Diagnostics 2024, 14(11), 1117; https://doi.org/10.3390/diagnostics14111117 - 28 May 2024
Abstract
The evaluation of mammographic breast density, a critical indicator of breast cancer risk, is traditionally performed by radiologists via visual inspection of mammography images, utilizing the Breast Imaging-Reporting and Data System (BI-RADS) breast density categories. However, this method is subject to substantial interobserver variability, leading to inconsistencies and potential inaccuracies in density assessment and subsequent risk estimations. To address this, we present a deep learning-based automatic detection algorithm (DLAD) designed for the automated evaluation of breast density. Our multicentric, multi-reader study leverages a diverse dataset of 122 full-field digital mammography studies (488 images in CC and MLO projections) sourced from three institutions. We invited two experienced radiologists to conduct a retrospective analysis, establishing a ground truth for 72 mammography studies (BI-RADS class A: 18, BI-RADS class B: 43, BI-RADS class C: 7, BI-RADS class D: 4). The efficacy of the DLAD was then compared to the performance of five independent radiologists with varying levels of experience. The DLAD showed robust performance, achieving an accuracy of 0.819 (95% CI: 0.736–0.903), along with an F1 score of 0.798 (0.594–0.905), precision of 0.806 (0.596–0.896), recall of 0.830 (0.650–0.946), and a Cohen’s Kappa (κ) of 0.708 (0.562–0.841). The algorithm achieved robust performance that matches and in four cases exceeds that of individual radiologists. The statistical analysis did not reveal a significant difference in accuracy between DLAD and the radiologists, underscoring the model’s competitive diagnostic alignment with professional radiologist assessments. These results demonstrate that the deep learning-based automatic detection algorithm can enhance the accuracy and consistency of breast density assessments, offering a reliable tool for improving breast cancer screening outcomes. Full article
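Cohen's kappa, the agreement statistic reported for the DLAD above, can be computed from two raters' label sequences with the standard definition (illustrative code, not the study's implementation):

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: observed agreement between two raters corrected
    for the agreement expected by chance from their marginal label
    frequencies."""
    n = len(labels_a)
    categories = set(labels_a) | set(labels_b)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    expected = sum(
        (labels_a.count(c) / n) * (labels_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)
```

For the BI-RADS density task the labels would be the four categories A–D, compared between the algorithm and the ground-truth reading.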

27 pages, 18300 KiB  
Article
Statistical Analysis of Bubble Parameters from a Model Bubble Column with and without Counter-Current Flow
by P. Kováts and K. Zähringer
Fluids 2024, 9(6), 126; https://doi.org/10.3390/fluids9060126 - 28 May 2024
Abstract
Bubble columns are widely used in numerous industrial processes because of their advantages in operation, design, and maintenance compared to other multiphase reactor types. In contrast to their simple design, the flow conditions generated inside a bubble column reactor are quite complex, especially in continuous mode with counter-current liquid flow. For the design and optimization of such reactors, precise numerical simulations and modelling are needed. These simulations and models have to be validated with experimental data. For this reason, experiments were carried out in a laboratory-scale bubble column using shadow imaging and particle image velocimetry (PIV) techniques with and without counter-current liquid flow. In the experiments, two types of gases, relatively poorly soluble air and well-soluble CO2, were used, and the bubbles were generated with three different capillary diameters. By varying the gas and liquid flow rates, 108 different flow conditions were investigated overall. In addition to the liquid flow fields captured by PIV, the shadow imaging data were statistically evaluated in the measurement volume, yielding bubble parameters such as bubble diameter, velocity, aspect ratio, motion direction, and inclination. The bubble slip velocity was calculated from the measured liquid and bubble velocities. The analysis of these parameters shows that the counter-current liquid flow has a noticeable influence on the bubble parameters, especially on bubble velocity and motion direction. In the case of CO2 bubbles, remarkable bubble shrinkage was observed with counter-current liquid flow due to the enhanced mass transfer. The results obtained for the bubble aspect ratio are compared to known correlations from the literature. The comprehensive and extensive bubble data obtained in this study will now be used as a source for the development of the correlations needed in the validation of numerical simulations and models. 
The data are available from the authors on request. Full article
(This article belongs to the Special Issue Mass Transfer in Multiphase Reactors)
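The slip velocity mentioned above is simply the bubble velocity relative to the local liquid velocity; a trivial sketch with illustrative 2D velocity vectors (function names are ours, not the authors'):

```python
import numpy as np

def slip_velocity(bubble_vel, liquid_vel):
    """Slip velocity: bubble velocity minus the local liquid velocity,
    the quantity derived above from shadow-imaging and PIV data."""
    return np.asarray(bubble_vel) - np.asarray(liquid_vel)

def aspect_ratio(major_axis, minor_axis):
    """Bubble aspect ratio from fitted-ellipse axis lengths."""
    return minor_axis / major_axis
```

With a downward (counter-current) liquid flow, the slip velocity exceeds the bubble's lab-frame rise velocity, which is one reason the counter-current case alters the measured bubble parameters.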

17 pages, 17399 KiB  
Article
Evaluating Cellularity Estimation Methods: Comparing AI Counting with Pathologists’ Visual Estimates
by Tomoharu Kiyuna, Eric Cosatto, Kanako C. Hatanaka, Tomoyuki Yokose, Koji Tsuta, Noriko Motoi, Keishi Makita, Ai Shimizu, Toshiya Shinohara, Akira Suzuki, Emi Takakuwa, Yasunari Takakuwa, Takahiro Tsuji, Mitsuhiro Tsujiwaki, Mitsuru Yanai, Sayaka Yuzawa, Maki Ogura and Yutaka Hatanaka
Diagnostics 2024, 14(11), 1115; https://doi.org/10.3390/diagnostics14111115 - 28 May 2024
Abstract
The development of next-generation sequencing (NGS) has enabled the discovery of cancer-specific driver gene alterations, making precision medicine possible. However, accurate genetic testing requires a sufficient amount of tumor cells in the specimen. The evaluation of the tumor content ratio (TCR) from hematoxylin and eosin (H&E)-stained images has been found to vary between pathologists, making it an important challenge to obtain an accurate TCR. In this study, three pathologists exhaustively labeled all cells in 41 regions from 41 lung cancer cases as either tumor, non-tumor, or indistinguishable, thus establishing a “gold standard” TCR. We then compared the accuracy of the TCR estimated by 13 pathologists based on visual assessment with the TCR calculated by an AI model that we have developed. It is a compact and fast model that follows a fully convolutional neural network architecture and produces cell detection maps, which can be efficiently post-processed to obtain tumor and non-tumor cell counts from which the TCR is calculated. Its raw cell detection accuracy is 92%, while its classification accuracy is 84%. The results show that the error between the gold standard TCR and the AI calculation was significantly smaller than that between the gold standard TCR and the pathologists’ visual assessments (p < 0.05). Additionally, the robustness of AI models across institutions is a key issue, and we demonstrate that the variation in the AI's estimates was smaller than that in the pathologists' average when evaluated by institution. These findings suggest that the accuracy of tumor cellularity assessments in clinical workflows can be significantly improved by the introduction of robust AI models, leading to more efficient genetic testing and ultimately to better patient outcomes. Full article
(This article belongs to the Topic AI in Medical Imaging and Image Processing)
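Once per-cell labels are available, the TCR described in this abstract reduces to a simple count ratio with indistinguishable cells excluded. The sketch below is an illustrative reconstruction of that final step only, not the authors' pipeline; the label encoding (0 = non-tumor, 1 = tumor, 2 = indistinguishable) is an assumption.

```python
import numpy as np

def tumor_content_ratio(cell_labels: np.ndarray) -> float:
    """TCR from per-cell class labels: 0 = non-tumor, 1 = tumor, 2 = indistinguishable.

    Indistinguishable cells are excluded from both counts, mirroring the
    gold-standard labeling protocol described in the abstract.
    """
    tumor = int(np.sum(cell_labels == 1))
    non_tumor = int(np.sum(cell_labels == 0))
    total = tumor + non_tumor
    if total == 0:
        raise ValueError("no classified cells in region")
    return tumor / total

# Toy region: 30 tumor, 60 non-tumor, 10 indistinguishable cells.
labels = np.array([1] * 30 + [0] * 60 + [2] * 10)
print(tumor_content_ratio(labels))  # 30 / 90 ≈ 0.333
```

In practice the cell counts would come from post-processing the model's detection maps; only the ratio step is shown here.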

28 pages, 11761 KiB  
Article
Radiometric Infrared Thermography of Solar Photovoltaic Systems: An Explainable Predictive Maintenance Approach for Remote Aerial Diagnostic Monitoring
by Usamah Rashid Qureshi, Aiman Rashid, Nicola Altini, Vitoantonio Bevilacqua and Massimo La Scala
Smart Cities 2024, 7(3), 1261-1288; https://doi.org/10.3390/smartcities7030053 - 28 May 2024
Viewed by 158
Abstract
Solar photovoltaic (SPV) arrays are crucial components of clean and sustainable energy infrastructure. However, SPV panels are susceptible to thermal degradation defects that can impact their performance, necessitating timely and accurate fault detection to maintain optimal energy generation. The considered case study focuses on an intelligent fault detection and diagnosis (IFDD) system for the analysis of radiometric infrared thermography (IRT) of SPV arrays in a predictive maintenance setting, enabling remote inspection and diagnostic monitoring of SPV power plant sites. The proposed IFDD system employs a custom-developed deep learning approach which relies on convolutional neural networks for effective multiclass classification of defect types. Diagnosing SPV panels is challenging due to issues such as IRT data scarcity, the complexity of defect patterns, and low thermal image acquisition quality caused by noise and calibration problems. Hence, this research carefully prepares a customized, high-quality but severely imbalanced six-class thermographic radiometric dataset of SPV panels. In contrast to previous approaches, numerical floating-point temperature values are used to train and validate the predictive models. The trained models display high accuracy for efficient thermal anomaly diagnosis. Finally, to build trust in the IFDD system, the process underlying the classification model is investigated with perceptive explainability, to portray the most discriminant image features, and with mathematical-structure-based interpretability, to achieve multiclass feature clustering. Full article
(This article belongs to the Special Issue Smart Electronics, Energy, and IoT Infrastructures for Smart Cities)
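Severe class imbalance like that of the six-class dataset above is commonly mitigated by weighting the loss inversely to class frequency. The helper below sketches that standard technique generically; it is not taken from the paper, and whether the authors used loss weighting at all is not stated in the abstract.

```python
import numpy as np

def inverse_frequency_weights(labels: np.ndarray, n_classes: int) -> np.ndarray:
    """Per-class loss weights proportional to 1/frequency, normalized to average 1."""
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    counts[counts == 0] = 1.0  # guard: avoid division by zero for empty classes
    weights = 1.0 / counts
    return weights * n_classes / weights.sum()

# Example: a 9:1 imbalanced two-class label vector.
labels = np.array([0] * 90 + [1] * 10)
print(inverse_frequency_weights(labels, 2))  # rare class receives the larger weight
```

Such a weight vector would typically be passed to the training loss (e.g. a weighted cross-entropy) so that rare defect classes contribute proportionally more to the gradient.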

12 pages, 8913 KiB  
Communication
Study on the Robustness of an Atmospheric Scattering Model under Single Transmittance
by Xiaotian Shi, Yue Ming, Lin Ju and Shouqian Chen
Photonics 2024, 11(6), 515; https://doi.org/10.3390/photonics11060515 - 28 May 2024
Viewed by 126
Abstract
When light propagates through a scattering medium such as haze, it is partially scattered and absorbed, reducing the intensity of the light from the imaging target and increasing the intensity of the scattered light. This phenomenon significantly degrades the quality of images taken in hazy environments. The atmospheric scattering model was proposed to describe this physical process of image degradation in haze. However, the accuracy of the model when applied to typical foggy-image restoration is affected by many factors: in general, atmospheric light and haze transmittance vary spatially across a foggy image, which makes it difficult to quantify how the accuracy of the model's parameters affects the recovery accuracy. In this paper, the atmospheric scattering model was applied to the restoration of hazy images with a single (spatially uniform) transmittance. We acquired hazy images with single transmittances from 0.05 to 1 in indoor experiments. The dehazing stability of the atmospheric scattering model was investigated by adjusting the atmospheric-light and transmittance parameters. For each transmittance, the relative recovery accuracy was calculated when the atmospheric light and the transmittance each deviated from their optimal values by 0.1, and we determined the maximum parameter-estimation deviations that still achieve recovery accuracies of 90%, 80%, and 70%. Full article
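The atmospheric scattering model referenced above writes an observed hazy image as I = J·t + A·(1 − t), where J is the scene radiance, t the transmittance, and A the atmospheric light. With a single, spatially uniform t, as in this paper's indoor setup, the model inverts in closed form. The sketch below illustrates that inversion; the clipping floor `t_min` is a common safeguard and an assumption here, not a parameter from the paper.

```python
import numpy as np

def dehaze_single_t(I: np.ndarray, A: float, t: float, t_min: float = 0.05) -> np.ndarray:
    """Invert I = J*t + A*(1 - t) for uniform transmittance t and atmospheric light A."""
    t = max(t, t_min)  # avoid amplifying noise as t approaches 0
    return (I - A * (1.0 - t)) / t

# Round trip: synthesize a hazy image from a known radiance, then recover it.
J = np.full((4, 4), 0.6)          # true scene radiance
I = J * 0.4 + 1.0 * (1 - 0.4)     # hazy observation with t = 0.4, A = 1.0
assert np.allclose(dehaze_single_t(I, A=1.0, t=0.4), J)
```

Mis-estimating A or t in this inversion is exactly the perturbation the paper studies: the recovered J shifts away from the true radiance as either parameter deviates from its optimal value.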

17 pages, 1298 KiB  
Article
ICC-BiFormer: A Deep-Learning Model for Near-Earth Asteroid Detection via Image Compression and Local Feature Extraction
by Yiyang Guo, Yuan Liu and Ru Yang
Electronics 2024, 13(11), 2092; https://doi.org/10.3390/electronics13112092 - 28 May 2024
Viewed by 144
Abstract
Detecting near-Earth asteroids (NEAs) is crucial for research in solar system and planetary science. In recent years, deep-learning methods have come to dominate this task. Since NEAs account for only about one-thousandth of the pixels in an image, we propose an ICC-BiFormer model that combines an image compression and contrast enhancement block with a BiFormer model to capture local features in input images, in contrast to previous models based on Convolutional Neural Networks (CNNs). Furthermore, we use a larger model input size, corresponding to the side length of the input image matrix, and design a cropping algorithm that prevents NEAs from being truncated and better separates NEAs from satellites. We apply the ICC-BiFormer model to a dataset of approximately 20,000 streak and 40,000 non-streak images to train a binary classification model. ICC-BiFormer achieves 99.88% accuracy, surpassing existing models. Focusing on local features has thus proven effective for detecting NEAs. Full article
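The abstract does not specify how the image compression and contrast enhancement block works internally. As an illustrative stand-in only, not the authors' actual block, a percentile-based contrast stretch is one common enhancement for faint streak-like signals:

```python
import numpy as np

def contrast_stretch(img: np.ndarray, low_pct: float = 1.0, high_pct: float = 99.0) -> np.ndarray:
    """Rescale intensities so the [low_pct, high_pct] percentile range maps onto [0, 1].

    Clipping the extreme tails suppresses hot pixels and background outliers
    that would otherwise compress the dynamic range of faint streaks.
    """
    lo, hi = np.percentile(img, [low_pct, high_pct])
    return np.clip((img - lo) / max(hi - lo, 1e-8), 0.0, 1.0)
```

Any such enhancement would run before the BiFormer backbone, so the network sees inputs with a normalized, outlier-robust dynamic range.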
