
Search Results (544)

Search Parameters:
Keywords = algorithmic fairness

19 pages, 3565 KiB  
Article
A Multi-Objective Framework for Balancing Fairness and Accuracy in Debiasing Machine Learning Models
by Rashmi Nagpal, Ariba Khan, Mihir Borkar and Amar Gupta
Mach. Learn. Knowl. Extr. 2024, 6(3), 2130-2148; https://doi.org/10.3390/make6030105 - 20 Sep 2024
Viewed by 365
Abstract
Machine learning algorithms significantly impact decision-making in high-stakes domains, necessitating a balance between fairness and accuracy. This study introduces an in-processing, multi-objective framework that leverages the Reject Option Classification (ROC) algorithm to simultaneously optimize fairness and accuracy while safeguarding protected attributes such as age and gender. Our approach seeks a multi-objective optimization solution that balances accuracy, group fairness loss, and individual fairness loss. The framework integrates fairness objectives without relying on a weighted summation method, instead focusing on directly optimizing the trade-offs. Empirical evaluations on publicly available datasets, including German Credit, Adult Income, and COMPAS, reveal several significant findings: the ROC-based approach demonstrates superior performance, achieving an accuracy of 94.29%, an individual fairness loss of 0.04, and a group fairness loss of 0.06 on the German Credit dataset. These results underscore the effectiveness of our framework, particularly the ROC component, in enhancing both the fairness and performance of machine learning models. Full article
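The abstract above trades off accuracy against group and individual fairness losses. As a minimal sketch of the kind of quantities involved, the snippet below computes accuracy and statistical parity difference, a common group-fairness gap; it does not reproduce the paper's ROC-based optimizer or its specific loss definitions, and all data and names are hypothetical.

```python
# Sketch: accuracy and a simple group-fairness gap (statistical parity
# difference) -- illustrative only, not the paper's fairness losses.

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def statistical_parity_difference(y_pred, group):
    """Gap in positive-prediction rates between two groups coded 0/1."""
    rate = lambda g: (
        sum(p for p, a in zip(y_pred, group) if a == g)
        / max(1, sum(1 for a in group if a == g))
    )
    return abs(rate(1) - rate(0))

# Toy example: 8 applicants; 'group' is a protected attribute such as
# gender encoded as 0/1.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

acc = accuracy(y_true, y_pred)                       # 0.75
spd = statistical_parity_difference(y_pred, group)   # 0.0 (equal rates)
```

A multi-objective debiasing framework would then seek classifiers for which both the fairness gap and the accuracy loss stay small simultaneously.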

16 pages, 1639 KiB  
Article
Post-Quantum Delegated Proof of Luck for Blockchain Consensus Algorithm
by Hyunjun Kim, Wonwoong Kim, Yeajun Kang, Hyunji Kim and Hwajeong Seo
Appl. Sci. 2024, 14(18), 8394; https://doi.org/10.3390/app14188394 - 18 Sep 2024
Viewed by 561
Abstract
The advancements in quantum computing and the potential for polynomial-time solutions to traditional public key cryptography (i.e., Rivest–Shamir–Adleman (RSA) and elliptic-curve cryptography (ECC)) using Shor’s algorithm pose a serious threat to the security of pre-quantum blockchain technologies. This paper proposes an efficient quantum-safe blockchain that incorporates new quantum-safe consensus algorithms. We integrate post-quantum signature schemes into the blockchain’s transaction signing and verification processes to enhance resistance against quantum attacks. Specifically, we employ the Falcon signature scheme, which was selected during the NIST post-quantum cryptography (PQC) standardization process. Although the integration of the post-quantum signature scheme results in a reduction in the blockchain’s transactions per second (TPSs), we introduce efficient approaches to mitigate this performance degradation. Our proposed post-quantum delegated proof of luck (PQ-DPoL) combines a proof of luck (PoL) mechanism with a delegated approach, ensuring quantum resistance, energy efficiency, and fairness in block generation. Experimental results demonstrate that while post-quantum cryptographic algorithms like Falcon introduce larger signature sizes and slower processing times, the PQ-DPoL algorithm effectively balances security and performance, providing a viable solution for secure blockchain operations in a post-quantum era. Full article
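The TPS reduction the abstract mentions follows largely from signature size: a Falcon-512 signature is roughly 666 bytes versus about 72 bytes for a DER-encoded ECDSA signature, so fewer transactions fit in each block. A back-of-the-envelope sketch (block size, block interval, and base transaction payload are hypothetical; the paper's PQ-DPoL mitigations are not modeled):

```python
# Sketch: how signature size alone reduces blockchain throughput.
# Falcon-512 signatures ~666 bytes vs ~72 bytes for ECDSA (DER).

def tps(block_bytes, block_interval_s, tx_payload, sig_bytes):
    """Transactions per second for fixed-size blocks and transactions."""
    tx_size = tx_payload + sig_bytes
    return (block_bytes // tx_size) / block_interval_s

# Hypothetical chain: 1 MB blocks every 10 s, 250-byte base payload.
classical = tps(1_000_000, 10, tx_payload=250, sig_bytes=72)   # 310.5
falcon    = tps(1_000_000, 10, tx_payload=250, sig_bytes=666)  # 109.1
```

Under these assumptions the post-quantum chain carries roughly a third of the classical throughput, which is why the paper pairs Falcon with a more efficient consensus design rather than a drop-in replacement.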
(This article belongs to the Special Issue Blockchain and Intelligent Networking for Smart Applications)

27 pages, 1181 KiB  
Article
Joint Resource Scheduling of the Time Slot, Power, and Main Lobe Direction in Directional UAV Ad Hoc Networks: A Multi-Agent Deep Reinforcement Learning Approach
by Shijie Liang, Haitao Zhao, Li Zhou, Zhe Wang, Kuo Cao and Junfang Wang
Drones 2024, 8(9), 478; https://doi.org/10.3390/drones8090478 - 12 Sep 2024
Viewed by 264
Abstract
Directional unmanned aerial vehicle (UAV) ad hoc networks (DUANETs) are widely applied due to their high flexibility, strong anti-interference capability, and high transmission rates. However, within directional networks, complex mutual interference persists, necessitating scheduling of the time slot, power, and main lobe direction for all links to improve the transmission performance of DUANETs. To ensure transmission fairness and the total count of transmitted data packets for the DUANET under dynamic data transmission demands, a scheduling algorithm for the time slot, power, and main lobe direction based on multi-agent deep reinforcement learning (MADRL) is proposed. Specifically, modeling is performed with the links as the core, optimizing the time slot, power, and main lobe direction variables for the fairness-weighted count of transmitted data packets. A decentralized partially observable Markov decision process (Dec-POMDP) is constructed for the problem. To process the observation in Dec-POMDP, an attention mechanism-based observation processing method is proposed to extract observation features of UAVs and their neighbors within the main lobe range, enhancing algorithm performance. The proposed Dec-POMDP and MADRL algorithms enable distributed autonomous decision-making for the resource scheduling of time slots, power, and main lobe directions. Finally, the simulation and analysis are primarily focused on the performance of the proposed algorithm and existing algorithms across varying data packet generation rates, different main lobe gains, and varying main lobe widths. The simulation results show that the proposed attention mechanism-based MADRL algorithm enhances the performance of the MADRL algorithm by 22.17%. The algorithm with the main lobe direction scheduling improves performance by 67.06% compared to the algorithm without the main lobe direction scheduling. Full article
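The objective above is a "fairness-weighted count of transmitted data packets". One plausible form of such a weighting, shown purely for illustration, multiplies the raw packet total by Jain's fairness index; the paper's exact objective and its MADRL scheduler are not reproduced here.

```python
# Sketch: a fairness-weighted packet count using Jain's fairness index.
# The index is 1.0 when all links transmit equally and shrinks as one
# link dominates, discounting unfair schedules.

def jain_index(xs):
    """Jain's fairness index over per-link packet counts."""
    if not any(xs):
        return 0.0
    return sum(xs) ** 2 / (len(xs) * sum(x * x for x in xs))

def fairness_weighted_packets(packets_per_link):
    return jain_index(packets_per_link) * sum(packets_per_link)

balanced = [10, 10, 10, 10]   # every link served equally
skewed   = [37, 1, 1, 1]      # same total, one link dominates

equal_score  = fairness_weighted_packets(balanced)   # keeps full count: 40.0
skewed_score = fairness_weighted_packets(skewed)     # heavily discounted
```

A scheduler maximizing this quantity is pushed toward serving all links rather than starving those outside the strongest main-lobe directions.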
(This article belongs to the Special Issue Space–Air–Ground Integrated Networks for 6G)

14 pages, 224 KiB  
Article
China’s Legal Practices Concerning Challenges of Artificial General Intelligence
by Bing Chen and Jiaying Chen
Laws 2024, 13(5), 60; https://doi.org/10.3390/laws13050060 - 12 Sep 2024
Viewed by 401
Abstract
The artificial general intelligence (AGI) industry, represented by ChatGPT, has impacted social order during its development and brought various risks and challenges, such as ethical concerns in science and technology, attribution of liability, intellectual property monopolies, data security, and algorithm manipulation. The development of AI is currently facing a crisis of trust. Therefore, the governance of the AGI industry must be prioritized, seizing the opportunity presented by the implementation of the Interim Administrative Measures for Generative Artificial Intelligence Services. It is necessary to enhance the norms for the supervision and management of scientific and technological ethics within the framework of the rule of law. It is also essential to continuously improve the regulatory system for liability, balance the dual values of fair competition and innovation encouragement, and strengthen data-security protection systems in the field of AI. Together, these measures will enable coordinated governance across multiple domains, stakeholders, systems, and tools. Full article
28 pages, 7828 KiB  
Article
Reinforcement Learning for Fair and Efficient Charging Coordination for Smart Grid
by Amr A. Elshazly, Mahmoud M. Badr, Mohamed Mahmoud, William Eberle, Maazen Alsabaan and Mohamed I. Ibrahem
Energies 2024, 17(18), 4557; https://doi.org/10.3390/en17184557 - 11 Sep 2024
Viewed by 704
Abstract
The integration of renewable energy sources, such as rooftop solar panels, into smart grids poses significant challenges for managing customer-side battery storage. In response, this paper introduces a novel reinforcement learning (RL) approach aimed at optimizing the coordination of these batteries. Our approach utilizes a single-agent, multi-environment RL system designed to balance power saving, customer satisfaction, and fairness in power distribution. The RL agent dynamically allocates charging power while accounting for individual battery levels and grid constraints, employing an actor–critic algorithm. The actor determines the optimal charging power based on real-time conditions, while the critic iteratively refines the policy to enhance overall performance. The key advantages of our approach include: (1) Adaptive Power Allocation: The RL agent effectively reduces overall power consumption by optimizing grid power allocation, leading to more efficient energy use. (2) Enhanced Customer Satisfaction: By increasing the total available power from the grid, our approach significantly reduces instances of battery levels falling below the critical state of charge (SoC), thereby improving customer satisfaction. (3) Fair Power Distribution: Fairness improvements are notable, with the highest fair reward rising by 173.7% across different scenarios, demonstrating the effectiveness of our method in minimizing discrepancies in power distribution. (4) Improved Total Reward: The total reward also shows a significant increase, up by 94.1%, highlighting the efficiency of our RL-based approach. Experimental results using a real-world dataset confirm that our RL approach markedly improves fairness, power efficiency, and customer satisfaction, underscoring its potential for optimizing smart grid operations and energy management systems. Full article
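The abstract describes an actor-critic agent that allocates charging power under a grid constraint while protecting batteries below the critical state of charge. As a hand-written baseline for comparison (not the paper's learned policy; all numbers are hypothetical), a greedy fairness-aware rule might look like this:

```python
# Sketch: split a grid power cap across customer batteries. Batteries
# below a critical SoC are topped up first; the remainder is shared in
# proportion to each battery's deficit. A fixed rule standing in for
# the paper's RL agent, for illustration only.

def allocate_power(soc, capacity, grid_cap, critical=0.2):
    n = len(soc)
    alloc = [0.0] * n
    # Pass 1: lift anyone below the critical state of charge.
    for i in range(n):
        if soc[i] < critical:
            give = min((critical - soc[i]) * capacity[i], grid_cap)
            alloc[i] += give
            grid_cap -= give
    # Pass 2: share what is left in proportion to remaining deficit.
    deficit = [max(0.0, (1.0 - soc[i]) * capacity[i] - alloc[i])
               for i in range(n)]
    total = sum(deficit)
    if total > 0 and grid_cap > 0:
        for i in range(n):
            alloc[i] += grid_cap * deficit[i] / total
    return alloc

soc      = [0.1, 0.5, 0.9]     # states of charge
capacity = [10.0, 10.0, 10.0]  # kWh
alloc = allocate_power(soc, capacity, grid_cap=5.0)
```

An RL agent replaces this fixed rule with a policy trained to balance the same three pressures (power saving, satisfaction, fairness) adaptively over time.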
(This article belongs to the Section A1: Smart Grids and Microgrids)

33 pages, 1632 KiB  
Article
On the Fairness of Internet Congestion Control over WiFi with Deep Reinforcement Learning
by Shyam Kumar Shrestha, Shiva Raj Pokhrel and Jonathan Kua
Future Internet 2024, 16(9), 330; https://doi.org/10.3390/fi16090330 - 10 Sep 2024
Viewed by 784
Abstract
For over forty years, TCP has been the main protocol for transporting data on the Internet. To improve congestion control algorithms (CCAs), delay bounding algorithms such as Vegas, FAST, BBR, PCC, and Copa have been developed. However, despite being designed to ensure fairness between data flows, these CCAs can still lead to unfairness and, in some cases, even cause data flow starvation in WiFi networks under certain conditions. We propose a new CCA switching solution that works with existing TCP and WiFi standards. This solution is offline and uses Deep Reinforcement Learning (DRL) trained on features such as noncongestive delay variations to predict and prevent extreme unfairness and starvation. Our DRL-driven approach allows for dynamic and efficient CCA switching. We have preliminarily tested our design on realistic datasets, showing that it supports both fairness and efficiency over WiFi networks, although further investigation and extensive evaluation are required before online deployment. Full article
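The switching idea can be shown in miniature. The paper trains a DRL policy offline on features such as noncongestive delay variation; below, a hand-written threshold rule stands in for that learned policy purely to show the shape of a CCA switching decision. The thresholds and CCA choices are illustrative assumptions, not the paper's policy.

```python
# Sketch: a threshold rule standing in for the learned DRL switching
# policy. Inputs are per-flow measurements; output is the CCA to use.

def choose_cca(delay_var_ms, loss_rate, current="bbr"):
    """Pick which congestion control algorithm a flow should run."""
    if delay_var_ms > 40 and loss_rate < 0.01:
        # Large noncongestive delay swings (e.g., WiFi frame
        # aggregation) can starve delay-bounding CCAs; fall back to a
        # loss-based CCA.
        return "cubic"
    if delay_var_ms < 5:
        return "vegas"  # stable delay: a delay-based CCA is safe and fair
    return current      # otherwise keep the current CCA

preferred = choose_cca(delay_var_ms=60, loss_rate=0.0)  # -> "cubic"
```

The DRL version learns where these decision boundaries should sit from data, rather than fixing them by hand.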

14 pages, 2628 KiB  
Article
Fed-RHLP: Enhancing Federated Learning with Random High-Local Performance Client Selection for Improved Convergence and Accuracy
by Pramote Sittijuk and Kreangsak Tamee
Symmetry 2024, 16(9), 1181; https://doi.org/10.3390/sym16091181 - 9 Sep 2024
Viewed by 350
Abstract
We introduce the random high-local performance client selection strategy, termed Fed-RHLP. This approach gives higher-performance clients the opportunity to contribute more significantly by updating and sharing their local models for global aggregation. Nevertheless, it also enables lower-performance clients to participate collaboratively, with their proportional representation determined by the probability of their local performance on the roulette wheel (RW). Symmetry in federated learning depends on the data distribution: with IID data, symmetry is naturally present, making model updates easier to aggregate, whereas with non-IID data, asymmetries can impact performance and fairness; solutions include data balancing, adaptive algorithms, and robust aggregation methods. Fed-RHLP enhances federated learning by allowing lower-performance clients to contribute based on their proportional representation, which is determined by their local performance. This fosters inclusivity and collaboration in both IID and non-IID scenarios. In this work, we demonstrate through experiments that Fed-RHLP offers accelerated convergence speed and improved accuracy in aggregating the final global model, effectively mitigating challenges posed by both IID and non-IID data distribution scenarios. Full article
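Roulette-wheel selection, the mechanism named in the abstract, samples each client with probability proportional to its measured local performance, so strong clients are favored while weak clients still participate. A minimal sketch in that spirit (the real strategy's scoring and round structure are assumptions here):

```python
# Sketch: roulette-wheel (RW) client selection -- probability of
# joining a round is proportional to local performance. Illustrative
# only; not the exact Fed-RHLP procedure.

import random

def roulette_select(perf, k, rng):
    """Sample k distinct clients with probability ~ local performance."""
    chosen = []
    pool = dict(perf)  # client_id -> local performance score
    for _ in range(min(k, len(pool))):
        total = sum(pool.values())
        spin, acc = rng.uniform(0, total), 0.0
        for cid, p in pool.items():
            acc += p
            if spin <= acc:
                chosen.append(cid)
                del pool[cid]   # sample without replacement
                break
    return chosen

rng = random.Random(0)
perf = {"c1": 0.92, "c2": 0.55, "c3": 0.88, "c4": 0.30}
selected = roulette_select(perf, k=2, rng=rng)  # e.g. two round participants
```

Over many rounds, high-performance clients like `c1` are selected far more often than low-performance ones like `c4`, yet `c4` retains a nonzero chance, which is the inclusivity property the abstract emphasizes.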

24 pages, 3763 KiB  
Article
Intelligent Fuzzy Traffic Signal Control System for Complex Intersections Using Fuzzy Rule Base Reduction
by Tamrat D. Chala and László T. Kóczy
Symmetry 2024, 16(9), 1177; https://doi.org/10.3390/sym16091177 - 9 Sep 2024
Viewed by 666
Abstract
In this study, the concept of symmetry is employed to implement an intelligent fuzzy traffic signal control system for complex intersections. This approach suggests that the implementation of reduced fuzzy rules through the reduction method, without compromising the performance of the original fuzzy rule base, constitutes a symmetrical approach. In recent decades, urban and city traffic congestion has become a significant issue because of the time lost as a result of heavy traffic, which negatively affects economic productivity and efficiency and leads to energy loss, and also because of the heavy environmental pollution effect. In addition, traffic congestion prevents an immediate response by the ambulance, police, and fire brigades to urgent events. To mitigate these problems, a three-stage intelligent and flexible fuzzy traffic control system for complex intersections, using a novel hybrid reduction approach was proposed. The three-stage fuzzy traffic control system performs four primary functions. The first stage prioritizes emergency car(s) and identifies the degree of urgency of the traffic conditions in the red-light phase. The second stage guarantees a fair distribution of green-light durations even for periods of extremely unbalanced traffic with long vehicle queues in certain directions and, especially, when heavy traffic is loaded for an extended period in one direction and the short vehicle queues in the conflicting directions require passing in a reasonable time. The third stage adjusts the green-light time to the traffic conditions, to the appearance of one or more emergency car(s), and to the overall waiting times of the other vehicles by using a fuzzy inference engine. The original complete fuzzy rule base set up by listing all possible input combinations was reduced using a novel hybrid reduction algorithm for fuzzy rule bases, which resulted in a significant reduction of the original base, namely, by 72.1%. 
The proposed novel approach, including the model and the hybrid reduction algorithm, was implemented and simulated using Python 3.9 and SUMO (version 1.14.1). Subsequently, the obtained fuzzy rule system was compared in terms of running time and efficiency with a traffic control system using the original fuzzy rules. The results showed that the reduced fuzzy rule base achieved better results in average waiting time, calculated fuel consumption, and CO2 emissions. Furthermore, the fuzzy traffic control system with reduced fuzzy rules performed better as it required less execution time and thus lower computational costs. Summarizing the above results, it may be stated that this new approach to intersection traffic light control is a practical solution for managing complex traffic conditions at lower computational costs. Full article
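To see why rule-base reduction shrinks the rule count without changing behavior, consider one simple reduction idea: merge rules that agree on their output and cover every value of some antecedent by replacing that antecedent with a "don't care" (`*`). This is far simpler than the paper's hybrid reduction algorithm and is shown only to illustrate the principle; the toy rule base below is hypothetical.

```python
# Sketch: merge fuzzy rules whose output ignores one input variable.
# rules: dict mapping antecedent tuples -> output label.

def reduce_rules(rules, domains):
    reduced = dict(rules)
    changed = True
    while changed:
        changed = False
        for var, values in enumerate(domains):
            # Group rules by (pattern with var wildcarded, output).
            groups = {}
            for ant, out in list(reduced.items()):
                key = (ant[:var] + ("*",) + ant[var + 1:], out)
                groups.setdefault(key, []).append(ant)
            for (merged, out), members in groups.items():
                specific = [m for m in members if m[var] != "*"]
                if len(specific) == len(values):  # var is irrelevant here
                    for m in specific:
                        del reduced[m]
                    reduced[merged] = out
                    changed = True
    return reduced

# Toy base: the output ignores the queue-length variable entirely.
domains = [("short", "long"), ("low", "med", "high")]
rules = {(q, u): ("extend" if u == "high" else "keep")
         for q in domains[0] for u in domains[1]}
reduced = reduce_rules(rules, domains)   # 6 rules collapse to 3
```

Applied to a complete rule base built from all input combinations, this kind of merging is what makes a 72.1% reduction possible while preserving the original input-output mapping.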
(This article belongs to the Special Issue Symmetry in Optimization and Control with Real World Applications II)

23 pages, 1626 KiB  
Article
Is Reinforcement Learning Good at American Option Valuation?
by Peyman Kor, Reidar B. Bratvold and Aojie Hong
Algorithms 2024, 17(9), 400; https://doi.org/10.3390/a17090400 - 7 Sep 2024
Viewed by 360
Abstract
This paper investigates algorithms for identifying the optimal policy for pricing American Options. The American Option pricing is reformulated as a Sequential Decision-Making problem with two actions (Exercise or Continue), transforming it into an optimal stopping time problem. Both the least square Monte Carlo simulation method (LSM) and Reinforcement Learning (RL)-based methods were utilized to find the optimal policy and, hence, the fair value of the American Put Option. Both Classical Geometric Brownian Motion (GBM) and calibrated Stochastic Volatility models served as the underlying uncertain assets. The novelty of this work lies in two aspects: (1) Applying LSM- and RL-based methods to determine option prices, with a specific focus on analyzing the dynamics of “Decisions” made by each method and comparing final decisions chosen by the LSM and RL methods. (2) Assessing how the RL method updates “Decisions” at each batch, revealing the evolution of the decisions during the learning process toward the optimal policy. Full article
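The Exercise-or-Continue formulation the abstract describes can be seen in its simplest textbook form with a Cox-Ross-Rubinstein binomial tree under GBM, where each node takes the maximum of the exercise value and the discounted continuation value. The paper uses LSM and RL for this decision; the tree below is a standard baseline, not the authors' method, and the parameters are illustrative.

```python
# Sketch: American put via CRR binomial tree -- the optimal-stopping
# decision max(Exercise, Continue) applied at every node by backward
# induction. Baseline illustration only.

import math

def american_put_crr(S0, K, r, sigma, T, steps):
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1 / u
    p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up-probability
    disc = math.exp(-r * dt)
    # Terminal payoffs at maturity (j up-moves out of `steps`).
    values = [max(K - S0 * u**j * d**(steps - j), 0.0)
              for j in range(steps + 1)]
    # Backward induction: Exercise vs. Continue at every earlier node.
    for i in range(steps - 1, -1, -1):
        for j in range(i + 1):
            cont = disc * (p * values[j + 1] + (1 - p) * values[j])
            exercise = max(K - S0 * u**j * d**(i - j), 0.0)
            values[j] = max(cont, exercise)
    return values[0]

# At-the-money put, r=5%, sigma=20%, one year: ~6.09, above the
# European Black-Scholes value of ~5.57 due to early exercise.
price = american_put_crr(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, steps=200)
```

LSM replaces the tree's exact continuation values with a regression over simulated paths, and RL learns the same stopping policy from reward feedback, which is exactly the comparison the paper sets up.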

14 pages, 4458 KiB  
Article
Comparison of Different HIV-1 Resistance Interpretation Tools for Next-Generation Sequencing in Italy
by Daniele Armenia, Luca Carioti, Valeria Micheli, Isabella Bon, Tiziano Allice, Celestino Bonura, Bianca Bruzzone, Fiorenza Bracchitta, Francesco Cerutti, Giovanni Maurizio Giammanco, Federica Stefanelli, Maria Addolorata Bonifacio, Ada Bertoli, Marialinda Vatteroni, Gabriele Ibba, Federica Novazzi, Maria Rosaria Lipsi, Nunzia Cuomo, Ilaria Vicenti, Francesca Ceccherini-Silberstein, Barbara Rossetti, Antonia Bezenchek, Francesco Saladini, Maurizio Zazzi and Maria Mercedes Santoro
Viruses 2024, 16(9), 1422; https://doi.org/10.3390/v16091422 - 6 Sep 2024
Cited by 1 | Viewed by 430
Abstract
Background: Next-generation sequencing (NGS) is gradually replacing Sanger sequencing for HIV genotypic drug resistance testing (GRT). This work evaluated the concordance among different NGS-GRT interpretation tools in a real-life setting. Methods: Routine NGS-GRT data were generated from viral RNA at 11 Italian laboratories with the AD4SEQ HIV-1 Solution v2 commercial kit. NGS results were interpreted by the SmartVir system provided by the kit and by two online tools (HyDRA Web and Stanford HIVdb). NGS-GRT was considered valid when the coverage was >100 reads (100×) at each PR/RT/IN resistance-associated position listed in the HIVdb 9.5.1 algorithm. Results: Among 629 NGS-GRT, 75.2%, 74.2%, and 70.9% were valid according to SmartVir, HyDRA Web, and HIVdb. Considering at least two interpretation tools, 463 (73.6%) NGS-GRT had a valid coverage for resistance analyses. The proportion of valid samples was affected by viremia <10,000–1000 copies/mL and non-B subtypes. Mutations at an NGS frequency >10% showed fair concordance among different interpretation tools. Conclusion: This Italian survey on NGS resistance testing suggests that viremia levels and HIV subtype affect NGS-GRT coverage. Within the current routine method for NGS-GRT, only mutations with frequency >10% seem reliably detected across different interpretation tools. Full article
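The validity rule described above (coverage above 100 reads at every resistance-associated position) and the >10% mutation-frequency cutoff are simple to state programmatically. The sketch below uses toy stand-in positions and counts, not HIVdb data or any of the three interpretation tools' actual logic.

```python
# Sketch: NGS-GRT sample validity (>100x at every resistance-associated
# position) and the >10% mutation-frequency reporting cutoff.

MIN_COVERAGE = 100     # reads ("100x")
MIN_FREQUENCY = 0.10   # report mutations above 10% frequency

def grt_valid(coverage, positions):
    """True if every resistance-associated position is covered >100x."""
    return all(coverage.get(pos, 0) > MIN_COVERAGE for pos in positions)

def reportable_mutations(mutations):
    """Keep mutation calls whose within-sample frequency exceeds 10%."""
    return {m: f for m, f in mutations.items() if f > MIN_FREQUENCY}

positions  = [41, 65, 184, 215]                     # toy RT positions
sample_ok  = {41: 950, 65: 410, 184: 230, 215: 180}  # valid sample
sample_bad = {41: 950, 65: 80, 184: 230, 215: 180}   # 80x at pos 65

calls = reportable_mutations({"M184V": 0.42, "K65R": 0.04})
# Only M184V (42%) passes the 10% threshold; K65R at 4% is a low-
# frequency call on which, per the survey, tools disagree.
```

The survey's finding is precisely that results below these thresholds (low viremia, non-B subtypes, <10% variants) are where the three interpretation tools diverge.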
(This article belongs to the Special Issue Antiviral Resistance Mutations)

18 pages, 723 KiB  
Article
Ethical AI in Financial Inclusion: The Role of Algorithmic Fairness on User Satisfaction and Recommendation
by Qin Yang and Young-Chan Lee
Big Data Cogn. Comput. 2024, 8(9), 105; https://doi.org/10.3390/bdcc8090105 - 3 Sep 2024
Viewed by 1160
Abstract
This study investigates the impact of artificial intelligence (AI) on financial inclusion satisfaction and recommendation, with a focus on the ethical dimensions and perceived algorithmic fairness. Drawing upon organizational justice theory and the heuristic–systematic model, we examine how algorithm transparency, accountability, and legitimacy influence users’ perceptions of fairness and, subsequently, their satisfaction with and likelihood to recommend AI-driven financial inclusion services. Through a survey-based quantitative analysis of 675 users in China, our results reveal that perceived algorithmic fairness acts as a significant mediating factor between the ethical attributes of AI systems and the user responses. Specifically, higher levels of transparency, accountability, and legitimacy enhance users’ perceptions of fairness, which, in turn, significantly increases both their satisfaction with AI-facilitated financial inclusion services and their likelihood to recommend them. This research contributes to the literature on AI ethics by empirically demonstrating the critical role of transparent, accountable, and legitimate AI practices in fostering positive user outcomes. Moreover, it addresses a significant gap in the understanding of the ethical implications of AI in financial inclusion contexts, offering valuable insights for both researchers and practitioners in this rapidly evolving field. Full article

13 pages, 373 KiB  
Article
Ambient Backscatter-Based User Cooperation for mmWave Wireless-Powered Communication Networks with Lens Antenna Arrays
by Rongbin Guo, Rui Yin, Guan Wang, Congyuan Xu and Jiantao Yuan
Electronics 2024, 13(17), 3485; https://doi.org/10.3390/electronics13173485 - 2 Sep 2024
Viewed by 385
Abstract
With the rapid consumer adoption of mobile devices such as tablets and smartphones, tele-traffic has experienced tremendous growth, making low-power technologies highly desirable for future communication networks. In this paper, we consider an ambient backscatter (AB)-based user cooperation (UC) scheme for mmWave wireless-powered communication networks (WPCNs) with lens antenna arrays. Firstly, we formulate an optimization problem to maximize the minimum rate of two users by jointly designing power and time allocation. Then, we introduce auxiliary variables and transform the original problem into a convex form. Finally, we propose an efficient algorithm to solve the transformed problem. Simulation results demonstrate that the proposed AB-based UC scheme outperforms the competing schemes, thus improving the fairness performance of throughput in WPCNs. Full article

15 pages, 4268 KiB  
Article
Research on Silage Corn Forage Quality Grading Based on Hyperspectroscopy
by Min Hao, Mengyu Zhang, Haiqing Tian and Jianying Sun
Agriculture 2024, 14(9), 1484; https://doi.org/10.3390/agriculture14091484 - 1 Sep 2024
Viewed by 391
Abstract
Corn silage is the main feed in the diet of dairy cows and other ruminant livestock. Silage corn feed is very susceptible to spoilage and corruption due to the influence of aerobic secondary fermentation during the silage process. At present, silage quality testing of corn feed mainly relies on the combination of sensory evaluation and laboratory measurement. The sensory review method struggles to achieve precision and objectivity, while the laboratory determination method suffers from problems such as cumbersome testing procedures, long testing times, high cost, and damage to samples. In this study, an external sensory quality grading model for different qualities of silage corn feed was established using hyperspectral data. To explore the feasibility of using hyperspectral data for external sensory quality grading of corn silage, a hyperspectral system was used to collect spectral data of 200 corn silage samples in the 380–1004 nm band, and the samples were classified into four grades: excellent, fair, medium, and spoiled, according to the German Agricultural Association (DLG) standard for sensory evaluation of silage samples. Three algorithms were used to preprocess the fodder hyperspectral data: multiplicative scatter correction (MSC), standard normal variate (SNV), and S–G convolutional smoothing. To reduce the redundancy of the spectral data, variable combination population analysis (VCPA) and competitive adaptive reweighted sampling (CARS) were used for feature wavelength selection, and the linear discriminant analysis (LDA) algorithm was used for data dimensionality reduction, constructing random forest classification (RFC), convolutional neural network (CNN), and support vector machine (SVM) models. The best classification model was selected based on a comparison of the model results. The results show that SNV-LDA-SVM is the optimal algorithm combination, with a calibration-set accuracy of 99.375% and a prediction-set accuracy of 100%. In summary, combined with hyperspectral technology, the constructed model enables accurate discrimination of the external sensory quality of silage corn feed, providing a reliable and effective new non-destructive method for silage corn feed quality detection. Full article
(This article belongs to the Section Digital Agriculture)

25 pages, 6794 KiB  
Article
An Autonomous Intelligent Liability Determination Method for Minor Accidents Based on Collision Detection and Large Language Models
by Junbo Chen, Shunlai Lu and Lei Zhong
Appl. Sci. 2024, 14(17), 7716; https://doi.org/10.3390/app14177716 - 1 Sep 2024
Viewed by 899
Abstract
With the rapid increase in the number of vehicles on the road, minor traffic accidents have become more frequent, contributing significantly to traffic congestion and disruptions. Traditional methods for determining responsibility in such accidents often require human intervention, leading to delays and inefficiencies. This study proposed a fully intelligent method for liability determination in minor accidents, utilizing collision detection and large language models. The approach integrated advanced vehicle recognition using the YOLOv8 algorithm coupled with a minimum mean square error filter for real-time target tracking. Additionally, an improved global optical flow estimation algorithm and support vector machines were employed to accurately detect traffic accidents. Key frames from accident scenes were extracted and analyzed using the GPT4-Vision-Preview model to determine liability. Simulation experiments demonstrated that the proposed method accurately and efficiently detected vehicle collisions, rapidly determined liability, and generated detailed accident reports. The method achieved the fully automated AI processing of minor traffic accidents without manual intervention, ensuring both objectivity and fairness. Full article

14 pages, 1786 KiB  
Article
AI Services-Oriented Dynamic Computing Resource Scheduling Algorithm Based on Distributed Data Parallelism in Edge Computing Network of Smart Grid
by Jing Zou, Peizhe Xin, Chang Wang, Heli Zhang, Lei Wei and Ying Wang
Future Internet 2024, 16(9), 312; https://doi.org/10.3390/fi16090312 - 28 Aug 2024
Viewed by 431
Abstract
Massive computational resources are required by a booming number of artificial intelligence (AI) services in the communication network of the smart grid. To alleviate the computational pressure on data centers, edge computing first network (ECFN) can serve as an effective solution to realize [...] Read more.
Massive computational resources are required by a booming number of artificial intelligence (AI) services in the communication network of the smart grid. To alleviate the computational pressure on data centers, edge computing first network (ECFN) can serve as an effective solution to realize distributed model training based on data parallelism for AI services in smart grid. Due to AI services with diversified types, an edge data center has a changing workload in different time periods. Selfish edge data centers from different edge suppliers are reluctant to share their computing resources without a rule for fair competition. AI services-oriented dynamic computational resource scheduling of edge data centers affects both the economic profit of AI service providers and computational resource utilization. This letter mainly discusses the partition and distribution of AI data based on distributed model training and dynamic computational resource scheduling problems among multiple edge data centers for AI services. To this end, a mixed integer linear programming (MILP) model and a Deep Reinforcement Learning (DRL)-based algorithm are proposed. Simulation results show that the proposed DRL-based algorithm outperforms the benchmark in terms of profit of AI service provider, backlog of distributed model training tasks, running time and multi-objective optimization. Full article