Search Results (947)

Search Parameters:
Keywords = metadata

19 pages, 2962 KiB  
Article
Standardization of Metadata of Analog Cadastral Documents Resulting from Systematic Cadaster Establishment
by Miodrag Roić and Doris Pivac
Land 2024, 13(9), 1343; https://doi.org/10.3390/land13091343 - 24 Aug 2024
Abstract
The systematic approach to the establishment of a cadaster in most European countries has resulted in a variety of cadastral documents. Most official cadastral data are from the 19th and 20th centuries and are stored as hard copies or as electronic data in a data warehouse, while the original documents are stored in analog form in separate locations, making the cadastral data difficult to access. Increasing interest in the use of archival cadastral documents has stimulated their digitalization in most countries, allowing users to access cadastral documents through metadata catalogs. Most catalogs use archival metadata standards to describe cadastral documents; geoinformation metadata standards, which cover fundamental spatial datasets, are rarely applied. Archival metadata standards do not provide enough information about the origin and quality of cadastral data. The aim of this study was to examine the applicability of the ISO 19115-1 standard for describing cadastral documents. The methodology comprises a comparison and analysis of documents stored in different locations. The metadata of archived cadastral documents are recorded in archive inventories, and archives use different terminology for documents with the same content. The scientific contribution of this study is a classification of the key documents and the associated properties that uniquely describe each document. Four types of documents were classified by comparison, and the content of the documents was analyzed. Property identification resulted in a semantic mapping to the metadata elements of ISO 19115-1 and showed considerable congruence of elements. The ISO 19115-1 standard could be applied to describe documents of systematic cadaster establishment, with additional extensions for some elements. The proposed extensions include replacing free text with domains of appropriate values, adding stricter obligations, and restricting the use of domain values. The standardization of metadata for analog cadastral documents in archives creates a prerequisite for the development of a metadata catalog, which would increase the availability and accessibility of cadastral data for different user groups.
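The semantic mapping the authors describe can be illustrated with a small sketch. Note that the property names and ISO 19115-1 element paths below are illustrative assumptions, not the paper's actual mapping:

```python
# Sketch: map archival document properties to ISO 19115-1 metadata elements.
# Property names and element paths here are illustrative, not the paper's mapping.
ARCHIVE_TO_ISO = {
    "document_title": "MD_Metadata.identificationInfo.citation.title",
    "survey_date": "MD_Metadata.identificationInfo.citation.date",
    "surveyor": "MD_Metadata.identificationInfo.pointOfContact",
    "scale": "MD_Metadata.identificationInfo.spatialResolution.equivalentScale",
}

def to_iso_record(archive_entry: dict) -> dict:
    """Translate an archive-inventory entry into ISO 19115-1 element paths,
    keeping unmapped properties in a separate bucket for manual review
    (candidates for the extensions the paper proposes)."""
    mapped, unmapped = {}, {}
    for prop, value in archive_entry.items():
        if prop in ARCHIVE_TO_ISO:
            mapped[ARCHIVE_TO_ISO[prop]] = value
        else:
            unmapped[prop] = value
    return {"mapped": mapped, "unmapped": unmapped}

record = to_iso_record({
    "document_title": "Cadastral map, sheet 12",
    "survey_date": "1878",
    "ink_type": "iron gall",  # no ISO counterpart -> flagged for extension
})
```

The `unmapped` bucket models the cases where the standard needs extending, as the abstract notes for some elements.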
Show Figures

Figure 1

25 pages, 19272 KiB  
Article
6DoF Object Pose and Focal Length Estimation from Single RGB Images in Uncontrolled Environments
by Mayura Manawadu and Soon-Yong Park
Sensors 2024, 24(17), 5474; https://doi.org/10.3390/s24175474 - 23 Aug 2024
Abstract
Accurate 6DoF (degrees of freedom) pose and focal length estimation are important in extended reality (XR) applications, enabling precise object alignment and projection scaling, thereby enhancing user experiences. This study focuses on improving 6DoF pose estimation from single RGB images with unknown camera metadata. Estimating the 6DoF pose and focal length from an uncontrolled RGB image, such as one obtained from the internet, is challenging because such images often lack crucial metadata. Existing methods such as FocalPose and FocalPose++ have made progress in this domain but still face challenges due to the projection scale ambiguity between the translation of an object along the z-axis (tz) and the camera’s focal length. To overcome this, we propose a two-stage strategy that decouples the projection scaling ambiguity in the estimation of z-axis translation and focal length. In the first stage, tz is set arbitrarily, and we predict all the other pose parameters and the focal length relative to the fixed tz. In the second stage, we predict the true value of tz while scaling the focal length based on the tz update. The proposed two-stage method reduces projection scale ambiguity in RGB images and improves pose estimation accuracy. Iterative update rules constrained to the first stage and tailored loss functions, including Huber loss in the second stage, enhance the accuracy of both 6DoF pose and focal length estimation. Experimental results on benchmark datasets show significant improvements in median rotation and translation errors, as well as better projection accuracy, compared to existing state-of-the-art methods. In an evaluation across the Pix3D datasets (chair, sofa, table, and bed), the proposed two-stage method improves projection accuracy by approximately 7.19%. Additionally, the incorporation of Huber loss reduced translation and focal length errors by 20.27% and 6.65%, respectively, compared to the FocalPose++ method.
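The ambiguity the abstract describes follows directly from the pinhole camera model: scaling the focal length and the z-translation by the same factor leaves the projection unchanged, so image evidence alone cannot separate them. A minimal sketch (all numbers illustrative, point taken at the translated depth so the invariance is exact):

```python
# Sketch of the projection-scale ambiguity between focal length f and
# z-translation tz under a pinhole model: scaling both by the same factor
# leaves the image projection unchanged, which motivates the paper's
# two-stage decoupling. Numbers are illustrative.
def project(x, y, z, f, tz):
    """Pinhole projection of a point translated by tz along the optical axis."""
    return (f * x / (z + tz), f * y / (z + tz))

f, tz = 800.0, 5.0            # arbitrary focal length (px) and z-translation (m)
u, v = project(0.2, -0.1, 0.0, f, tz)

# Scaling f and tz together reproduces the identical projection:
# stage 1 therefore fixes tz arbitrarily and estimates f relative to it;
# stage 2 recovers the true tz and rescales f by the same factor.
k = 2.5
u2, v2 = project(0.2, -0.1, 0.0, f * k, tz * k)
assert abs(u - u2) < 1e-9 and abs(v - v2) < 1e-9
```

The invariance is exact for points at the object's reference depth; for points with nonzero depth offset the ambiguity is only approximate, which is what the second-stage refinement exploits.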
(This article belongs to the Special Issue Computer Vision and Virtual Reality: Technologies and Applications)

18 pages, 3360 KiB  
Article
Automated Quality Control Solution for Radiographic Imaging of Lung Diseases
by Christoph Kleefeld, Jorge Patricio Castillo Lopez, Paulo R. Costa, Isabelle Fitton, Ahmed Mohamed, Csilla Pesznyak, Ricardo Ruggeri, Ioannis Tsalafoutas, Ioannis Tsougos, Jeannie Hsiu Ding Wong, Urban Zdesar, Olivera Ciraj-Bjelac and Virginia Tsapaki
J. Clin. Med. 2024, 13(16), 4967; https://doi.org/10.3390/jcm13164967 - 22 Aug 2024
Abstract
Background/Objectives: Radiography is an essential and low-cost diagnostic method in pulmonary medicine that is used for the early detection and monitoring of lung diseases. Adequate and consistent image quality (IQ) is crucial to ensure accurate diagnosis and effective patient management. This pilot study evaluates the feasibility and effectiveness of the International Atomic Energy Agency (IAEA)’s remote and automated quality control (QC) methodology, which has been tested in multiple imaging centers. Methods: The data, collected between April and December 2022, included 47 longitudinal data sets from 22 digital radiographic units. Participants submitted metadata on the radiography setup, exposure parameters, and imaging modes. The database comprised 968 exposures, each representing multiple image quality parameters and metadata of image acquisition parameters. Python scripts were developed to collate, analyze, and visualize the image quality data. Results: The pilot survey identified several critical issues affecting future implementation of the IAEA method: (1) difficulty in accessing raw images due to manufacturer restrictions, (2) variability in IQ parameters even among identical X-ray systems and image acquisitions, (3) inconsistencies in phantom construction affecting IQ values, (4) vendor-dependent DICOM tag reporting, and (5) large variability in SNR values compared to other IQ metrics, making SNR less reliable for image quality assessment. Conclusions: Cross-comparisons among radiography systems must be treated with caution because of their dependence on phantom construction and acquisition mode variations. Accounting for these factors will enable reliable and standardized quality control programs, which are crucial for accurate and fair evaluations, especially in high-frequency chest imaging.
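The kind of collation the pilot's Python scripts perform can be sketched as follows. The metric names, values, and tolerance are illustrative assumptions, not the study's data; the point is flagging metrics (like SNR) whose spread across exposures is disproportionately large:

```python
# Sketch: group image-quality values across exposures and flag metrics whose
# relative spread exceeds a tolerance. Names and thresholds are illustrative.
from statistics import mean, stdev

def flag_unstable_metrics(exposures, tolerance=0.15):
    """Return metrics whose coefficient of variation exceeds `tolerance`."""
    by_metric = {}
    for exp in exposures:
        for metric, value in exp.items():
            by_metric.setdefault(metric, []).append(value)
    flagged = {}
    for metric, values in by_metric.items():
        cv = stdev(values) / mean(values)  # coefficient of variation
        if cv > tolerance:
            flagged[metric] = round(cv, 3)
    return flagged

# Toy exposures from one unit: SNR varies widely, MTF50 is stable.
exposures = [
    {"snr": 42.0, "mtf50": 1.31},
    {"snr": 55.0, "mtf50": 1.29},
    {"snr": 31.0, "mtf50": 1.33},
]
unstable = flag_unstable_metrics(exposures)
```

On this toy data only SNR is flagged, mirroring the study's finding that SNR varies far more than other IQ metrics.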
(This article belongs to the Section Pulmonology)

62 pages, 1897 KiB  
Review
Construction of Knowledge Graphs: Current State and Challenges
by Marvin Hofer, Daniel Obraczka, Alieh Saeedi, Hanna Köpcke and Erhard Rahm
Information 2024, 15(8), 509; https://doi.org/10.3390/info15080509 - 22 Aug 2024
Abstract
With Knowledge Graphs (KGs) at the center of numerous applications such as recommender systems and question-answering, the need for generalized pipelines to construct and continuously update such KGs is increasing. While the individual steps that are necessary to create KGs from unstructured sources (e.g., text) and structured data sources (e.g., databases) are mostly well researched for their one-shot execution, their adoption for incremental KG updates and the interplay of the individual steps have hardly been investigated in a systematic manner so far. In this work, we first discuss the main graph models for KGs and introduce the major requirements for future KG construction pipelines. Next, we provide an overview of the necessary steps to build high-quality KGs, including cross-cutting topics such as metadata management, ontology development, and quality assurance. We then evaluate the state of the art of KG construction with respect to the introduced requirements for specific popular KGs, as well as some recent tools and strategies for KG construction. Finally, we identify areas in need of further research and improvement.
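Incremental updates with metadata management, two of the cross-cutting concerns the survey discusses, can be sketched in a few lines. The triple format and deduplication policy here are illustrative, not taken from any specific tool in the paper:

```python
# Sketch of an incremental KG update step with provenance metadata:
# re-asserting an existing triple accumulates its sources instead of
# duplicating it. Format and policy are illustrative.
def upsert_triple(kg, subj, pred, obj, source, timestamp):
    """Insert a (subject, predicate, object) triple with provenance;
    an existing triple just gains the new source and a fresh timestamp."""
    key = (subj, pred, obj)
    if key in kg:
        kg[key]["sources"].add(source)
        kg[key]["last_seen"] = timestamp
    else:
        kg[key] = {"sources": {source}, "first_seen": timestamp,
                   "last_seen": timestamp}
    return kg

kg = {}
upsert_triple(kg, "Berlin", "capitalOf", "Germany", "wiki-dump", "2024-01-01")
upsert_triple(kg, "Berlin", "capitalOf", "Germany", "news-feed", "2024-06-01")
```

The provenance record is what lets a pipeline later resolve conflicts or retract facts from an unreliable source, which is exactly where one-shot construction and incremental maintenance diverge.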
(This article belongs to the Special Issue Knowledge Graph Technology and its Applications II)

22 pages, 5745 KiB  
Article
GenAI-Assisted Database Deployment for Heterogeneous Indigenous–Native Ethnographic Research Data
by Reen-Cheng Wang, David Yang, Ming-Che Hsieh, Yi-Cheng Chen and Weihsuan Lin
Appl. Sci. 2024, 14(16), 7414; https://doi.org/10.3390/app14167414 - 22 Aug 2024
Abstract
In ethnographic research, data collected through surveys, interviews, or questionnaires in the fields of sociology and anthropology often appear in diverse forms and languages. Building a powerful database system to store and process such data, and to query it well and efficiently, is very challenging. This paper investigates modern database technologies to determine which are best suited to storing these varied and heterogeneous datasets. The study examines several database categories: traditional relational databases; the NoSQL family of key-value, graph, and document databases; object-oriented databases; and vector databases, which are crucial for the latest artificial intelligence solutions. The research shows that for field data, the NoSQL family is the most appropriate, especially document and graph databases. The simplicity and flexibility of document databases, together with the ability of graph databases to handle complex queries and rich data relationships, make these two NoSQL types the ideal choice when large amounts of data must be processed. Advancements in vector databases that embed custom metadata offer new possibilities for detailed analysis and retrieval. However, converting content into vector data remains challenging, especially in regions with unique oral traditions and languages. Constructing such databases is labor-intensive and requires domain experts to define metadata and relationships, posing a significant burden for research teams with extensive data collections. To this end, this paper proposes using Generative AI (GenAI) to assist in the data-transformation process, a recommendation supported by testing in which GenAI proved a strong supplement to document and graph databases. The paper also discusses two currently viable methods of vector database support, each with its own drawbacks and benefits.
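The document-plus-graph pattern the study favors can be sketched with plain data structures: heterogeneous records stored as schema-free documents, with relationships kept as edges for relationship queries. The record shapes and field names below are illustrative assumptions:

```python
# Sketch of the document + graph pattern: schema-free records (different
# shapes per record) plus explicit relationship edges. Fields are illustrative.
documents = {
    "rec-001": {"type": "interview", "language": "Amis",
                "topics": ["fishing", "ritual"], "audio": "rec-001.wav"},
    "rec-002": {"type": "survey", "language": "Paiwan",
                "respondents": 14},  # different shape -- no fixed schema
}
edges = [("rec-001", "same_village_as", "rec-002")]

def related(doc_id, relation):
    """Follow relationship edges from a document, in either direction."""
    out = [t for s, r, t in edges if s == doc_id and r == relation]
    out += [s for s, r, t in edges if t == doc_id and r == relation]
    return out
```

A document store handles the irregular record shapes; the edge list stands in for the graph database's role, answering "what is connected to this record?" without forcing all records into one schema.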
(This article belongs to the Topic Innovation, Communication and Engineering)

20 pages, 8371 KiB  
Article
Decoding Urban Dynamics: Contextual Insights from Human Meta-Mobility Patterns
by Seokjoon Oh, Seungyoung Joo, Soohwan Kim and Minkyoung Kim
Systems 2024, 12(8), 313; https://doi.org/10.3390/systems12080313 - 21 Aug 2024
Abstract
Research on capturing human mobility patterns for efficient and sustainable urban planning has been widely conducted. However, studies that unveil the spatial context behind macro-level mobility patterns are relatively scarce. This study analyzes spatiotemporal human meta-mobility patterns with rich context using POI data in Seoul from comprehensive perspectives. The floating population of Seoul exhibits regular and irregular cyclical mobility patterns on weekdays and weekends, respectively, stemming from the periodicity of the dominant POIs. Additionally, graph construction based on mobility similarity and subsequent regional clustering show that clusters vary by POI but are generally divided into the peripheral and central regions of Seoul, indicating that socioeconomic factors cannot be ignored when interpreting human mobility patterns. These findings provide scientific evidence to support policy recommendations for greenways and sustainable urban mobility systems, addressing issues such as the quantitative disparity of greenways, qualitative problems of greenways in central areas, and inequality in cultural consumption. Addressing these considerations through targeted policies could significantly improve the overall quality of life of urban residents. We expect this study to lay the groundwork for future research aiming to understand realistic human mobility patterns with rich context.
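The graph-construction step can be sketched as connecting districts whose visit-count profiles are similar. The toy profiles, the cosine-similarity measure, and the threshold below are illustrative assumptions, not the paper's Seoul data or method details:

```python
# Sketch: build a similarity graph over districts from their (toy) hourly
# visit profiles; an edge means the mobility rhythms are highly similar.
# Profiles, metric, and threshold are illustrative.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

profiles = {                        # toy 4-slot visit-count profiles
    "district_A": [10, 80, 60, 5],  # weekday commuter peak
    "district_B": [12, 75, 65, 8],  # similar rhythm -> same cluster
    "district_C": [40, 20, 25, 70], # evening-leisure rhythm
}

edges = [(u, v) for u in profiles for v in profiles
         if u < v and cosine(profiles[u], profiles[v]) > 0.95]
```

Clustering such a graph then groups districts by mobility rhythm, which is where the peripheral/central division the abstract reports would emerge.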
(This article belongs to the Special Issue Greenways and Sustainable Urban Mobility Systems)

29 pages, 521 KiB  
Review
A Survey on the Use of Large Language Models (LLMs) in Fake News
by Eleftheria Papageorgiou, Christos Chronis, Iraklis Varlamis and Yassine Himeur
Future Internet 2024, 16(8), 298; https://doi.org/10.3390/fi16080298 - 19 Aug 2024
Abstract
The proliferation of fake news and fake profiles on social media platforms poses significant threats to information integrity and societal trust. Traditional detection methods, including rule-based approaches, metadata analysis, and human fact-checking, have been employed to combat disinformation, but these methods often fall short in the face of increasingly sophisticated fake content. This review article explores the emerging role of Large Language Models (LLMs) in enhancing the detection of fake news and fake profiles. We provide a comprehensive overview of the nature and spread of disinformation, followed by an examination of existing detection methodologies. The article delves into the capabilities of LLMs in generating both fake news and fake profiles, highlighting their dual role as both a tool for disinformation and a powerful means of detection. We discuss the various applications of LLMs in text classification, fact-checking, verification, and contextual analysis, demonstrating how these models surpass traditional methods in accuracy and efficiency. Additionally, the article covers LLM-based detection of fake profiles through profile attribute analysis, network analysis, and behavior pattern recognition. Through comparative analysis, we showcase the advantages of LLMs over conventional techniques and present case studies that illustrate practical applications. Despite their potential, LLMs face challenges such as computational demands and ethical concerns, which we discuss in more detail. The review concludes with future directions for research and development in LLM-based fake news and fake profile detection, underscoring the importance of continued innovation to safeguard the authenticity of online information.

20 pages, 19393 KiB  
Article
Integrating Multimodal Generative AI and Blockchain for Enhancing Generative Design in the Early Phase of Architectural Design Process
by Adam Fitriawijaya and Taysheng Jeng
Buildings 2024, 14(8), 2533; https://doi.org/10.3390/buildings14082533 - 16 Aug 2024
Abstract
Multimodal generative AI and generative design empower architects to create better-performing, sustainable, and efficient design solutions and to explore diverse design possibilities, while blockchain technology ensures secure data management and traceability. This study aims to design and evaluate a framework that integrates blockchain into generative AI-driven design drawing processes in architectural design to enhance authenticity and traceability. We employed an example scenario in which a multimodal generative AI tool combines textual and visual inputs to enhance design creativity. The generated images were stored on blockchain systems, with metadata attached to each image before conversion into NFT format, ensuring secure data ownership and management. This research exemplifies the pragmatic fusion of generative AI and blockchain technology in architectural design, yielding more transparent, secure, and effective results in the early stages of the architectural design process.
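The provenance step, attaching metadata to a generated image and deriving a content hash that an on-chain NFT record could reference, can be sketched as follows. The field names and the choice of SHA-256 over canonical JSON are illustrative assumptions, not the paper's implementation:

```python
# Sketch: bind design metadata to a generated image via content hashing,
# producing a record an NFT could reference on-chain. Fields and hashing
# choices are illustrative assumptions.
import hashlib
import json

def make_provenance_record(image_bytes: bytes, metadata: dict) -> dict:
    payload = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "metadata": metadata,
    }
    # Hash the canonical JSON so any later change to the image or its
    # metadata is detectable against the on-chain record.
    canonical = json.dumps(payload, sort_keys=True).encode()
    payload["record_hash"] = hashlib.sha256(canonical).hexdigest()
    return payload

record = make_provenance_record(
    b"...png bytes...",
    {"prompt": "courtyard massing study", "designer": "studio-a",
     "stage": "concept"},
)
```

Only the small record (or its hash) needs to live on-chain; the image itself can stay in ordinary storage while remaining verifiable, which is the usual design trade-off for NFT metadata.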
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)

16 pages, 4743 KiB  
Review
Renewable Power Systems: A Comprehensive Meta-Analysis
by Aleksy Kwilinski, Oleksii Lyulyov and Tetyana Pimonenko
Energies 2024, 17(16), 3989; https://doi.org/10.3390/en17163989 - 12 Aug 2024
Abstract
The ongoing amplification of climate change necessitates the exploration and implementation of effective strategies to mitigate ecological issues while simultaneously preserving economic and social well-being. Renewable power systems offer a way to reduce adverse anthropogenic effects without hindering economic growth. This study aims to conduct a comprehensive bibliometric analysis of renewable power systems to explore their historical context, identify influential studies, and uncover research gaps, hypothesizing that global contributions and policy support significantly influence the field’s dynamics. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, this study utilized Scopus analysis tools and VOSviewer 1.6.20 software to examine metadata sourced from the Scopus database. The outcomes of this investigation facilitate the identification of the most prolific countries and authors, as well as collaborative efforts that enrich the theoretical landscape of renewable power systems. The study also traces the evolution of research on renewable power systems. Furthermore, the results reveal key scientific clusters: the first concentrates on renewable energy and sustainable development, the second on the relationship between government policies and renewable power systems, and the third on the role of incentives that catalyse the advancement of renewable power systems. The findings of this meta-analysis not only contribute valuable insights to existing research but also enable the identification of emerging research areas related to renewable power system development.
(This article belongs to the Section C: Energy Economics and Policy)

12 pages, 7552 KiB  
Article
BacSPaD: A Robust Bacterial Strains’ Pathogenicity Resource Based on Integrated and Curated Genomic Metadata
by Sara Ribeiro, Guillaume Chaumet, Karine Alves, Julien Nourikyan, Lei Shi, Jean-Pierre Lavergne, Ivan Mijakovic, Simon de Bernard and Laurent Buffat
Pathogens 2024, 13(8), 672; https://doi.org/10.3390/pathogens13080672 - 9 Aug 2024
Abstract
The vast array of omics data in microbiology presents significant opportunities for studying bacterial pathogenesis and creating computational tools for predicting pathogenic potential. However, the field lacks a comprehensive, curated resource that catalogs bacterial strains and their ability to cause human infections. Current methods for identifying pathogenicity determinants often introduce biases and miss critical aspects of bacterial pathogenesis. In response to this gap, we introduce BacSPaD (Bacterial Strains’ Pathogenicity Database), a thoroughly curated database focusing on pathogenicity annotations for a wide range of high-quality, complete bacterial genomes. Our rule-based annotation workflow combines metadata from trusted sources with automated keyword matching, extensive manual curation, and detailed literature review. Our analysis classified 5502 genomes as pathogenic to humans (HP) and 490 as non-pathogenic to humans (NHP), encompassing 532 species, 193 genera, and 96 families. Statistical analysis demonstrated a significant but moderate correlation between virulence factors and HP classification, highlighting the complexity of bacterial pathogenicity and the need for ongoing research. This resource is poised to enhance our understanding of bacterial pathogenicity mechanisms and aid in the development of predictive models. To improve accessibility and provide key visualization statistics, we developed a user-friendly web interface.
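The automated keyword-matching stage of such a rule-based workflow can be sketched as below. The keyword lists and the tie-breaking rule are illustrative assumptions; in BacSPaD this stage is followed by extensive manual curation and literature review:

```python
# Sketch of a rule-based keyword-matching pass for HP/NHP annotation.
# Keyword lists and tie-breaking are illustrative; ambiguous cases are
# routed to manual curation rather than force-labeled.
HP_KEYWORDS = {"pathogen", "clinical isolate", "infection", "virulent"}
NHP_KEYWORDS = {"environmental isolate", "probiotic", "commensal"}

def classify_strain(description: str) -> str:
    text = description.lower()
    hp = sum(kw in text for kw in HP_KEYWORDS)
    nhp = sum(kw in text for kw in NHP_KEYWORDS)
    if hp > nhp:
        return "HP"       # pathogenic to humans
    if nhp > hp:
        return "NHP"      # non-pathogenic to humans
    return "REVIEW"       # ambiguous -> manual curation / literature review

label = classify_strain("Clinical isolate from a bloodstream infection")
```

Routing ties to a `REVIEW` bucket rather than guessing is the design choice that keeps an automated pass compatible with downstream manual curation.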
(This article belongs to the Collection New Insights into Bacterial Pathogenesis)

23 pages, 5009 KiB  
Review
A Review on Reinforcement Learning in Production Scheduling: An Inferential Perspective
by Vladimir Modrak, Ranjitharamasamy Sudhakarapandian, Arunmozhi Balamurugan and Zuzana Soltysova
Algorithms 2024, 17(8), 343; https://doi.org/10.3390/a17080343 - 7 Aug 2024
Abstract
In this study, a systematic review of production scheduling based on reinforcement learning (RL) techniques, relying primarily on bibliometric analysis, was carried out. The aim of this work is, among other things, to point out the growing interest in this domain and to outline the influence of RL, as a type of machine learning, on production scheduling. To achieve this, the paper explores production scheduling using RL by investigating the descriptive metadata of pertinent publications contained in the Scopus, ScienceDirect, and Google Scholar databases. The study covers a wide spectrum of publications spanning the years 1996 to 2024. The findings can serve as new insights for future research endeavors in the realm of production scheduling using RL techniques.
(This article belongs to the Special Issue Scheduling Theory and Algorithms for Sustainable Manufacturing)

17 pages, 9468 KiB  
Article
The Marine Macroalgae Collection from Herbarium João de Carvalho e Vasconcellos (LISI)—140 Years of History
by João Canilho Santos, Paula Paes, Pedro Arsénio, Rui Figueira, José Carlos Costa, Margarida Dionísio Lopes, Helena Cotrim and Dalila Espírito-Santo
Diversity 2024, 16(8), 478; https://doi.org/10.3390/d16080478 - 7 Aug 2024
Abstract
Approximately 1.7 million phycological specimens are preserved in European herbaria, a significantly lower number than for vascular plants, due to factors such as greater sampling difficulty and fewer specialists. Several studies report that coastal systems have undergone dramatic ecological changes in the last 150 years, with macroalgae being a particularly affected group. Macroalgal herbaria are thus essential sources for the study and conservation of this biodiversity, as well as a pillar for answering many ecological questions. Despite Portugal’s long coastline, its phycological collections are scarce, poorly developed, and practically inaccessible digitally. In 2021/2022, all the phycological specimens held at LISI were the focus of this exploratory project, whose objective was to catalog them, taxonomically revise the specimens, and place them at the service of the scientific community through the incorporation of digitized vouchers into online databases. Three marine collections were constituted and studied, totaling 852 vouchers and more than 1800 specimens, with the Portuguese Marine Macroalgae Collection being the oldest digitized phycological collection available in Portugal. This project also provides an opportunity for other educational institutions to embrace their long-neglected collections.
(This article belongs to the Special Issue Herbaria: A Key Resource for Plant Diversity Exploration)

19 pages, 5156 KiB  
Article
A Cyborg Walk for Urban Analysis? From Existing Walking Methodologies to the Integration of Machine Learning
by Nicolás Valenzuela-Levi, Nicolás Gálvez Ramírez, Cristóbal Nilo, Javiera Ponce-Méndez, Werner Kristjanpoller, Marcos Zúñiga and Nicolás Torres
Land 2024, 13(8), 1211; https://doi.org/10.3390/land13081211 - 6 Aug 2024
Abstract
Although walking methodologies (WMs) and machine learning (ML) have both attracted the interest of urban scholars, research that integrates the two is difficult to find. We propose a ‘cyborg walk’ method and apply it to studying litter in public spaces. Walking routes are created based on an unsupervised learning algorithm (k-means) used to classify public spaces. Then, a deep learning model (YOLOv5) is used to collect data from geotagged photos taken by an automatic Insta360 X3 camera worn by human walkers. Image recognition achieved an accuracy between 83.7% and 95%, in line with values reported in the literature. The data collected by the machine are automatically georeferenced thanks to the metadata generated by a GPS attached to the camera. WMs could benefit from the introduction of ML for informative route optimisation and georeferenced visual data quantification. The links between these findings and the existing WM literature are discussed, reflecting on the parallels between this ‘cyborg walk’ experiment and the seminal cyborg metaphor proposed by Donna Haraway.
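The georeferencing step, pairing each image detection with the GPS fix closest in time so that litter counts become mappable, can be sketched as follows. The data structures and coordinates are illustrative assumptions, not the study's pipeline:

```python
# Sketch: attach each timestamped detection to the nearest-in-time GPS fix
# from the camera's track. Structures and coordinates are illustrative.
def georeference(detections, gps_track):
    """detections: [(time_s, label)]; gps_track: [(time_s, (lat, lon))]."""
    tagged = []
    for t, label in detections:
        lat, lon = min(gps_track, key=lambda fix: abs(fix[0] - t))[1]
        tagged.append({"time": t, "label": label, "lat": lat, "lon": lon})
    return tagged

gps_track = [(0.0, (-33.045, -71.620)), (5.0, (-33.046, -71.619))]
points = georeference([(4.2, "litter"), (0.3, "litter")], gps_track)
```

Nearest-in-time matching is the simplest policy; interpolating between consecutive fixes would be a natural refinement when the walker moves quickly between samples.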
(This article belongs to the Special Issue GeoAI for Urban Sustainability Monitoring and Analysis)

15 pages, 7277 KiB  
Article
Leak Event Diagnosis for Power Plants: Generative Anomaly Detection Using Prototypical Networks
by Jaehyeok Jeong, Doyeob Yeo, Seungseo Roh, Yujin Jo and Minsuk Kim
Sensors 2024, 24(15), 4991; https://doi.org/10.3390/s24154991 - 1 Aug 2024
Abstract
Anomaly detection systems based on artificial intelligence (AI) have demonstrated high performance and efficiency in a wide range of applications such as power plants and smart factories. However, because AI systems inherently rely on the quality of their training data, they still perform poorly in certain environments, and deploying them in hazardous facilities with constrained data collection remains a challenge. In this paper, we propose Generative Anomaly Detection using Prototypical Networks (GAD-PN), designed to detect anomalies using only a limited number of normal samples. GAD-PN integrates CycleGAN with Prototypical Networks (PNs), learning from metadata similar to the target environment. This approach enables the collection of data that are difficult to gather in real-world environments by using simulation or demonstration models, providing opportunities to learn a variety of environmental parameters under ideal and normal conditions. During the inference phase, PNs can classify normal and leak samples using only a small number of normal data from the target environment, via prototypes that represent normal and abnormal features. We also address the difficulty of collecting anomaly data by generating anomaly samples from normal data using a CycleGAN trained on anomaly features. The method can be adapted to various environments with similar anomalous scenarios, regardless of differences in environmental parameters. To validate the proposed structure, data were collected specifically targeting pipe leakage scenarios, which are significant problems in environments such as power plants, and acoustic ultrasound signals were collected from pipe nozzles in three different environments. The proposed model achieved a leak detection accuracy of over 90% in all environments, even with only a small number of normal data, an average improvement of approximately 30% over traditional unsupervised learning models trained with a limited dataset.
(This article belongs to the Special Issue Engineering Applications of Artificial Intelligence for Sensors)

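The nearest-prototype classification that Prototypical Networks perform at inference time can be sketched in a few lines: each class prototype is the mean embedding of that class's support samples, and a query is assigned to the class whose prototype is nearest. The snippet below is a minimal illustration with toy 2-D embeddings and hypothetical function names, not the authors' GAD-PN implementation:

```python
import numpy as np

def compute_prototypes(support_embeddings, support_labels):
    """Mean embedding per class: the 'prototype' for that class."""
    classes = np.unique(support_labels)
    return {c: support_embeddings[support_labels == c].mean(axis=0) for c in classes}

def classify(query_embedding, prototypes):
    """Assign the query to the class with the nearest prototype (Euclidean distance)."""
    distances = {c: np.linalg.norm(query_embedding - p) for c, p in prototypes.items()}
    return min(distances, key=distances.get)

# Toy 2-D embeddings: class 0 = "normal", class 1 = "leak"
support = np.array([[0.1, 0.0], [0.0, 0.2], [1.0, 1.1], [0.9, 1.0]])
labels = np.array([0, 0, 1, 1])
protos = compute_prototypes(support, labels)
print(classify(np.array([0.05, 0.1]), protos))  # → 0 (nearest the "normal" prototype)
```

In the few-shot setting described in the abstract, only a handful of normal samples from the target environment are needed to form the "normal" prototype, which is what keeps the data requirement low.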
19 pages, 3700 KiB  
Article
The Identification of Leidenfrost Phenomenon Formation on TiO2-Coated Surfaces and the Modelling of Heat Transfer Processes
by Monika Maziukienė, Nerijus Striūgas, Lina Vorotinskienė, Raminta Skvorčinskienė and Marius Urbonavičius
Materials 2024, 17(15), 3687; https://doi.org/10.3390/ma17153687 - 25 Jul 2024
Viewed by 439
Abstract
Experiments on specimen cooling dynamics and on possible film boiling around a body are important in various industrial applications, such as nucleate boiling, where the aim is to achieve drag reduction or better surface properties in coating technologies. The objective of this study was to investigate the interaction between heat transfer processes and the cooling dynamics of a sample under different boundary conditions. This article presents new experimental data on specimens coated with Al–TiO2 film and on Leidenfrost phenomenon (LP) formation on the film's surface, together with numerical results for the heat and mass transfer parameters. A comparative analysis of the new Al–TiO2 experiments against other coatings, namely polished aluminium, Al–MgO, Al–MgH2 and Al–TiH2, provides further detail on oxide and hydride materials. In the cooling dynamics experiments, specimens were heated up to 450 °C, while the sub-cooling water temperatures were 14–20 °C (room temperature), 40 °C and 60 °C. The specimens' cooling dynamics were calculated by applying Newton's cooling law, and heat transfer was estimated from the heat flux q transferred from the specimens' surface and the Biot number (Bi). The metadata from the performed experiments were used to numerically model the cooling dynamics curves for the different specimen materials, and approximated polynomial equations are proposed for the polished aluminium, Al–TiO2, Al–MgO, Al–MgH2 and Al–TiH2 materials. The provided comparative analysis makes it possible to see the differences between oxides and hydrides and to choose materials for practical application in the industrial sector. The presented results could also be used in software packages to model heat transfer processes. Full article
(This article belongs to the Section Manufacturing Processes and Systems)

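The lumped-capacitance form of Newton's cooling law and the Biot number mentioned in the abstract can be illustrated briefly. All numbers below are purely illustrative placeholders, not values taken from the study:

```python
import math

def newton_cooling(T0, T_inf, tau, t):
    """Lumped-capacitance Newton cooling: T(t) = T_inf + (T0 - T_inf) * exp(-t / tau)."""
    return T_inf + (T0 - T_inf) * math.exp(-t / tau)

def biot_number(h, L_c, k):
    """Bi = h * L_c / k; the lumped model is valid when Bi << 1 (typically Bi < 0.1)."""
    return h * L_c / k

# Illustrative values only: specimen at 450 degC quenched in 20 degC water,
# with an assumed time constant tau = 5 s.
T = newton_cooling(T0=450.0, T_inf=20.0, tau=5.0, t=10.0)
print(round(T, 1))  # after two time constants, ~86% of the temperature gap is closed
```

Checking Bi before applying this model is what justifies treating the specimen as a single lumped temperature rather than solving the full conduction problem.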