Search Results (330)

Search Parameters:
Keywords = RANSAC

20 pages, 14870 KiB  
Article
SN-CNN: A Lightweight and Accurate Line Extraction Algorithm for Seedling Navigation in Ridge-Planted Vegetables
by Tengfei Zhang, Jinhao Zhou, Wei Liu, Rencai Yue, Jiawei Shi, Chunjian Zhou and Jianping Hu
Agriculture 2024, 14(9), 1446; https://doi.org/10.3390/agriculture14091446 - 24 Aug 2024
Abstract
In precision agriculture, after vegetable transplanters plant the seedlings, field management during the seedling stage is necessary to optimize the vegetable yield. Accurately identifying and extracting the centerlines of crop rows during the seedling stage is crucial for achieving the autonomous navigation of robots. However, the transplanted ridges often experience missing seedling rows. Additionally, due to the limited computational resources of field agricultural robots, a more lightweight navigation line fitting algorithm is required. To address these issues, this study focuses on mid-to-high ridges planted with double-row vegetables and develops a seedling band-based navigation line extraction model, a Seedling Navigation Convolutional Neural Network (SN-CNN). Firstly, we proposed the C2f_UIB module, which effectively reduces redundant computations by integrating Network Architecture Search (NAS) technologies, thus improving the model’s efficiency. Additionally, the model incorporates the Simplified Attention Mechanism (SimAM) in the neck section, enhancing the focus on hard-to-recognize samples. The experimental results demonstrate that the proposed SN-CNN model outperforms YOLOv5s, YOLOv7-tiny, YOLOv8n, and YOLOv8s in terms of the model parameters and accuracy. The SN-CNN model has a parameter count of only 2.37 M and achieves an mAP@0.5 of 94.6%. Compared to the baseline model, the parameter count is reduced by 28.4%, and the accuracy is improved by 2%. Finally, for practical deployment, the SN-CNN algorithm was implemented on the NVIDIA Jetson AGX Xavier, an embedded computing platform, to evaluate its real-time performance in navigation line fitting. We compared two fitting methods: Random Sample Consensus (RANSAC) and least squares (LS), using 100 images (50 test images and 50 field-collected images) to assess the accuracy and processing speed. The RANSAC method achieved a root mean square error (RMSE) of 5.7 pixels and a processing time of 25 milliseconds per image, demonstrating a superior fitting accuracy, while meeting the real-time requirements for navigation line detection. This performance highlights the potential of the SN-CNN model as an effective solution for autonomous navigation in field cross-ridge walking robots. Full article
(This article belongs to the Section Agricultural Technology)
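
The closing comparison above, RANSAC versus least squares for fitting the navigation line to detected seedling centers, follows the textbook RANSAC recipe. The NumPy sketch below illustrates that comparison only; the synthetic points, iteration count, and inlier threshold are assumptions, not values from the paper.

```python
import numpy as np

def fit_line_lsq(pts):
    """Ordinary least-squares fit y = a*x + b over all points."""
    a, b = np.polyfit(pts[:, 0], pts[:, 1], deg=1)
    return a, b

def fit_line_ransac(pts, n_iters=200, thresh=2.0, seed=0):
    """RANSAC line fit: sample two points, count inliers within `thresh`,
    keep the best model, then refit with least squares on its inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iters):
        i, j = rng.choice(len(pts), size=2, replace=False)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        if x1 == x2:
            continue  # skip degenerate vertical sample
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        resid = np.abs(pts[:, 1] - (a * pts[:, 0] + b))  # vertical residuals
        inliers = resid < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_line_lsq(pts[best_inliers])

# Synthetic seedling centers: one straight row plus a few gross outliers.
x = np.linspace(0, 100, 40)
row = np.c_[x, 0.5 * x + 10 + np.random.default_rng(1).normal(0, 1.0, x.size)]
pts = np.vstack([row, [[20, 80.0], [60, 5.0], [80, 90.0]]])

print("LS fit    :", fit_line_lsq(pts))      # pulled off course by the outliers
print("RANSAC fit:", fit_line_ransac(pts))   # close to the true slope 0.5, intercept 10
```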

17 pages, 4179 KiB  
Communication
Clique-like Point Cloud Registration: A Flexible Sampling Registration Method Based on Clique-like for Low-Overlapping Point Cloud
by Xinrui Huang, Xiaorong Gao, Jinlong Li and Lin Luo
Sensors 2024, 24(17), 5499; https://doi.org/10.3390/s24175499 - 24 Aug 2024
Abstract
Three-dimensional point cloud registration is a critical task in 3D perception for sensors that aims to determine the optimal alignment between two point clouds by finding the best transformation. Existing methods like RANSAC and its variants often face challenges, such as sensitivity to low overlap rates, high computational costs, and susceptibility to outliers, leading to inaccurate results, especially in complex or noisy environments. In this paper, we introduce a novel 3D registration method, CL-PCR, inspired by the concept of maximal cliques and built upon the SC2-PCR framework. Our approach allows for the flexible use of smaller sampling subsets to extract more local consensus information, thereby generating accurate pose hypotheses even in scenarios with low overlap between point clouds. This method enhances robustness against low overlap and reduces the influence of outliers, addressing the limitations of traditional techniques. First, we construct a graph matrix to represent the compatibility relationships among the initial correspondences. Next, we build clique-like subsets of various sizes within the graph matrix, each representing a consensus set. Then, we compute the transformation hypotheses for the subsets using the SVD algorithm and select the best hypothesis for registration based on evaluation metrics. Extensive experiments demonstrate the effectiveness of CL-PCR. In comparison experiments on the 3DMatch/3DLoMatch datasets using both FPFH and FCGF descriptors, our Fast-CL-PCRv1 outperforms state-of-the-art algorithms, achieving superior registration performance. Additionally, we validate the practicality and robustness of our method with real-world data. Full article
(This article belongs to the Section Sensing and Imaging)
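
The hypothesis-generation step mentioned above, computing a rigid transform for each clique-like consensus subset with SVD, is the classical Kabsch/Procrustes solution. A minimal NumPy sketch of that single step follows; the correspondence arrays are assumed inputs, and the clique construction and hypothesis scoring of CL-PCR are not reproduced.

```python
import numpy as np

def rigid_transform_svd(src, dst):
    """Best-fit rotation R and translation t so that R @ src_i + t ~ dst_i,
    for paired 3D correspondences (N x 3 each), via SVD of the cross-covariance."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Toy check: recover a known rotation/translation from a small consensus set.
rng = np.random.default_rng(0)
src = rng.normal(size=(6, 3))
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([0.2, -0.1, 0.5])
R, t = rigid_transform_svd(src, dst)
print(np.allclose(R, R_true, atol=1e-8), np.round(t, 3))
```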

15 pages, 2794 KiB  
Article
A Study on the 3D Reconstruction Strategy of a Sheep Body Based on a Kinect v2 Depth Camera Array
by Jinxin Liang, Zhiyu Yuan, Xinhui Luo, Geng Chen and Chunxin Wang
Animals 2024, 14(17), 2457; https://doi.org/10.3390/ani14172457 - 23 Aug 2024
Abstract
Non-contact measurement based on the 3D reconstruction of sheep bodies can alleviate the stress response in sheep during manual measurement of body dimensions. However, data collection is easily affected by environmental factors and noise, which is not conducive to practical production needs. To address this issue, this study proposes a non-contact data acquisition system and a 3D point cloud reconstruction method for sheep bodies. The collected sheep body data can provide reference data for sheep breeding and fattening. The acquisition system consists of a Kinect v2 depth camera group, a sheep passage, and a restraining pen, synchronously collecting data from three perspectives. The 3D point cloud reconstruction method for sheep bodies is implemented in C++ with the Point Cloud Library (PCL). It processes noise through pass-through filtering, statistical filtering, and random sample consensus (RANSAC). A conditional voxel filtering box is proposed to downsample and simplify the point cloud data. Combined with the RANSAC and Iterative Closest Point (ICP) algorithms, coarse and fine registration are performed to improve registration accuracy and robustness, achieving 3D reconstruction of sheep bodies. On this basis, 135 sets of point cloud data were collected from 20 sheep. After 3D reconstruction, the reconstruction error of body length compared to the actual values was 0.79%, indicating that this method can provide reliable reference data for 3D point cloud reconstruction research of sheep bodies. Full article
(This article belongs to the Section Small Ruminants)
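
The processing chain described here (pass-through, statistical, and RANSAC filtering, then coarse and fine registration) is implemented by the authors in C++ with PCL. The Python/Open3D sketch below only mirrors the same stages for illustration; the file names, depth limits, voxel size, and ICP distance are assumptions.

```python
import numpy as np
import open3d as o3d

# Hypothetical input clouds captured from two Kinect v2 viewpoints.
source = o3d.io.read_point_cloud("sheep_view_a.pcd")
target = o3d.io.read_point_cloud("sheep_view_b.pcd")

def clean(pcd):
    # Pass-through: keep points inside an assumed depth range around the passage.
    pts = np.asarray(pcd.points)
    keep = (pts[:, 2] > 0.3) & (pts[:, 2] < 2.5)
    pcd = pcd.select_by_index(np.where(keep)[0].tolist())
    # Statistical outlier removal, then voxel downsampling.
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    pcd = pcd.voxel_down_sample(voxel_size=0.01)
    # RANSAC plane fit to drop the floor of the passage.
    _, floor = pcd.segment_plane(distance_threshold=0.02, ransac_n=3, num_iterations=1000)
    return pcd.select_by_index(floor, invert=True)

source, target = clean(source), clean(target)

# Fine registration with point-to-point ICP; the coarse alignment is assumed to
# come from the fixed camera geometry or a feature-based RANSAC step.
icp = o3d.pipelines.registration.registration_icp(
    source, target, max_correspondence_distance=0.05, init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
print(icp.fitness, icp.transformation)
```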

29 pages, 5712 KiB  
Article
Advanced Semi-Automatic Approach for Identifying Damaged Surfaces in Cultural Heritage Sites: Integrating UAVs, Photogrammetry, and 3D Data Analysis
by Tudor Caciora, Alexandru Ilieș, Grigore Vasile Herman, Zharas Berdenov, Bahodirhon Safarov, Bahadur Bilalov, Dorina Camelia Ilieș, Ștefan Baias and Thowayeb H. Hassan
Remote Sens. 2024, 16(16), 3061; https://doi.org/10.3390/rs16163061 - 20 Aug 2024
Abstract
The analysis and preservation of cultural heritage sites are critical for maintaining their historical and architectural integrity, as they can be damaged by various factors, including climatic, geological, geomorphological, and human actions. Based on this, the present study proposes a semi-automatic and non-learning-based method for detecting degraded surfaces within cultural heritage sites by integrating UAV, photogrammetry, and 3D data analysis. A 20th-century fortification from Romania was chosen as the case study due to its physical characteristics and state of degradation, making it ideal for testing the methodology. Images were collected using UAV and terrestrial sensors and processed to create a detailed 3D point cloud of the site. The developed pipeline effectively identified degraded areas, including cracks and material loss, with high accuracy. The classification and segmentation algorithms, including K-means clustering, geometrical features, RANSAC, and FACETS, improved the detection of destructured areas. The combined use of these algorithms facilitated a detailed assessment of the structural condition. This integrated approach demonstrated that the algorithms have the potential to support each other in minimizing individual limitations and accurately identifying degraded surfaces. Even though some limitations were observed, such as the potential overestimation of false-negative and false-positive areas, the damaged surfaces were extracted with high precision. The methodology proved to be a practical and economical solution for cultural heritage monitoring and conservation, offering high accuracy and flexibility. One of the greatest advantages of the method is its ease of implementation, its execution speed, and the potential of using entirely open-source software. This approach can be easily adapted to various heritage sites, significantly contributing to their protection and valorization. Full article
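
One building block named above, fitting locally planar surfaces with RANSAC and flagging points that deviate from the fitted plane as candidate damage, can be sketched generically as follows. The input array and thresholds are assumptions, and this is not the paper's combined K-means/FACETS pipeline.

```python
import numpy as np

def ransac_plane(pts, n_iters=500, thresh=0.01, seed=0):
    """Fit a plane n.x + d = 0 to 3D points with RANSAC; return (n, d, inlier mask)."""
    rng = np.random.default_rng(seed)
    best = (None, None, np.zeros(len(pts), dtype=bool))
    for _ in range(n_iters):
        p = pts[rng.choice(len(pts), size=3, replace=False)]
        n = np.cross(p[1] - p[0], p[2] - p[0])
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue  # degenerate (collinear) sample
        n = n / norm
        d = -n @ p[0]
        inliers = np.abs(pts @ n + d) < thresh
        if inliers.sum() > best[2].sum():
            best = (n, d, inliers)
    return best

# Hypothetical facade patch: flag points farther than 2 cm from the dominant plane.
pts = np.load("facade_patch.npy")          # assumed (N, 3) array cut from the point cloud
n, d, inliers = ransac_plane(pts, thresh=0.005)
deviation = np.abs(pts @ n + d)
damage_candidates = deviation > 0.02
print("candidate damaged points:", damage_candidates.sum())
```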

27 pages, 21706 KiB  
Article
Extraction of River Water Bodies Based on ICESat-2 Photon Classification
by Wenqiu Ma, Xiao Liu and Xinglei Zhao
Remote Sens. 2024, 16(16), 3034; https://doi.org/10.3390/rs16163034 - 18 Aug 2024
Abstract
The accurate extraction of river water bodies is crucial for the utilization of water resources and understanding climate patterns. Compared with traditional methods of extracting rivers using remote sensing imagery, the launch of satellite-based photon-counting LiDAR (ICESat-2) provides a novel approach for river water body extraction. The use of ICESat-2 ATL03 photon data for inland river water body extraction is relatively underexplored and thus warrants investigation. To extract inland river water bodies accurately, this study proposes a method based on the spatial distribution of ATL03 photon data and the elevation variation characteristics of inland river water bodies. The proposed method first applies low-pass filtering to denoised photon data to mitigate the impact of high-frequency signals on data processing. Then, the elevation’s standard deviation of the low-pass-filtered data is calculated via a sliding window, and the photon data are classified on the basis of the standard deviation threshold obtained through Gaussian kernel density estimation. The results revealed that the average overall accuracy (OA) and Kappa coefficient (KC) for the extraction of inland river water bodies across the four study areas were 99.12% and 97.81%, respectively. Compared with the improved RANSAC algorithm and the combined RANSAC and DBSCAN algorithms, the average OA of the proposed method improved by 17.98% and 7.12%, respectively, and the average KC improved by 58.38% and 17.69%, respectively. This study provides a new method for extracting inland river water bodies. Full article
(This article belongs to the Section Remote Sensing in Geology, Geomorphology and Hydrology)
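
The classification step, a sliding-window standard deviation of photon elevations thresholded at a value derived from a Gaussian kernel density estimate, can be sketched as below. The window length, the valley-picking rule, and the synthetic elevations are assumptions; water photons are simply taken to be the low-spread class.

```python
import numpy as np
from scipy.stats import gaussian_kde

def sliding_std(elev, window=51):
    """Standard deviation of elevation in a centered sliding window."""
    half = window // 2
    padded = np.pad(elev, half, mode="edge")
    return np.array([padded[i:i + window].std() for i in range(len(elev))])

def classify_water(elev):
    std = sliding_std(elev)
    # Density of the per-photon std values; take the threshold near the valley
    # between the low-spread (water) and high-spread (land) modes.
    grid = np.linspace(std.min(), std.max(), 512)
    density = gaussian_kde(std)(grid)
    valley = grid[np.argmin(density[50:-50]) + 50]   # crude interior minimum
    return std < valley                              # True where photons look like water

# Hypothetical along-track elevations: a flat river section between rougher banks.
rng = np.random.default_rng(0)
elev = np.concatenate([rng.normal(105, 2.0, 400),    # land
                       rng.normal(100, 0.05, 600),   # water surface
                       rng.normal(104, 2.0, 400)])   # land
water = classify_water(elev)
print("photons classified as water:", water.sum())
```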

18 pages, 6660 KiB  
Article
Multi-Source Image Matching Algorithms for UAV Positioning: Benchmarking, Innovation, and Combined Strategies
by Jianli Liu, Jincheng Xiao, Yafeng Ren, Fei Liu, Huanyin Yue, Huping Ye and Yingcheng Li
Remote Sens. 2024, 16(16), 3025; https://doi.org/10.3390/rs16163025 - 18 Aug 2024
Abstract
The accuracy and reliability of unmanned aerial vehicle (UAV) visual positioning systems are dependent on the performance of multi-source image matching algorithms. Despite many advancements, targeted performance evaluation frameworks and datasets for UAV positioning are still lacking. Moreover, existing consistency verification methods such as Random Sample Consensus (RANSAC) often fail to entirely eliminate mismatches, affecting the precision and stability of the matching process. The contributions of this research include the following: (1) the development of a benchmarking framework accompanied by a large evaluation dataset for assessing the efficacy of multi-source image matching algorithms; (2) the results of this benchmarking framework indicate that combinations of multiple algorithms significantly enhance the Match Success Rate (MSR); (3) the introduction of a novel Geographic Geometric Consistency (GGC) method that effectively identifies mismatches within RANSAC results and accommodates rotational and scale variations; and (4) the implementation of a distance threshold iteration (DTI) method that, according to experimental results, achieves an 87.29% MSR with a Root Mean Square Error (RMSE) of 1.11 m (2.22 pixels) while maintaining runtime at only 1.52 times that of a single execution, thus optimizing the trade-off between MSR, accuracy, and efficiency. Furthermore, when compared with existing studies on UAV positioning, the multi-source image matching algorithms demonstrated a sub-meter positioning error, significantly outperforming the comparative method. These advancements are poised to enhance the application of advanced multi-source image matching technologies in UAV visual positioning. Full article
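
The consistency-verification baseline the paper starts from, RANSAC over putative feature matches, is typically run in OpenCV as shown below. The GGC and DTI refinements proposed in the paper are not reproduced; the image names, ORB settings, and reprojection threshold are assumptions.

```python
import cv2
import numpy as np

# Hypothetical UAV frame and reference (e.g. satellite) tile.
img1 = cv2.imread("uav_frame.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("reference_tile.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(4000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matching with cross-check to get putative correspondences.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC homography: the inlier mask is the usual consistency check, which a
# geometric re-examination such as GGC would then scan for residual mismatches.
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)
inliers = int(mask.sum()) if mask is not None else 0
print(f"putative matches: {len(matches)}, RANSAC inliers: {inliers}")
```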

21 pages, 12712 KiB  
Article
A Feature Line Extraction Method for Building Roof Point Clouds Considering the Grid Center of Gravity Distribution
by Jinzheng Yu, Jingxue Wang, Dongdong Zang and Xiao Xie
Remote Sens. 2024, 16(16), 2969; https://doi.org/10.3390/rs16162969 - 13 Aug 2024
Abstract
Feature line extraction for building roofs is a critical step in the 3D model reconstruction of buildings. A feature line extraction algorithm for building roof point clouds based on the linear distribution characteristics of neighborhood points was proposed in this study. First, the virtual grid was utilized to provide local neighborhood information for the point clouds, aiding in identifying the linear distribution characteristics of the center-of-gravity points on the feature line and determining the potential feature point set in the original point clouds. Next, initial segment elements were selected from the feature point set, and the iterative growth of these initial segment elements was performed by combining the RANSAC linear fitting algorithm with the distance constraint. Compatibility was used to determine the need for merging growing results to obtain roof feature lines. Lastly, according to the distribution characteristics of the original points near the feature lines, the endpoints of the feature lines were determined and optimized. Experiments were conducted using two representative building datasets. The results of the experiments showed that the proposed algorithm could directly extract high-quality roof feature lines from point clouds for both single buildings and multiple buildings. Full article
(This article belongs to the Special Issue Advances in the Application of Lidar)
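
The first step, binning roof points into a virtual grid and taking each occupied cell's center of gravity so that cells along a feature line show a linear arrangement, can be sketched in NumPy as follows. The cell size and input file are assumptions; the RANSAC growth and compatibility merging steps are not reproduced.

```python
import numpy as np

def grid_centers_of_gravity(pts, cell=0.5):
    """Bin 3D roof points into an XY virtual grid and return the centroid
    (center of gravity) of the points falling in each occupied cell."""
    ij = np.floor((pts[:, :2] - pts[:, :2].min(axis=0)) / cell).astype(int)
    keys = ij[:, 0] * (ij[:, 1].max() + 1) + ij[:, 1]          # flattened cell index
    order = np.argsort(keys)
    keys, pts = keys[order], pts[order]
    starts = np.r_[0, np.flatnonzero(np.diff(keys)) + 1, len(keys)]
    return np.array([pts[s:e].mean(axis=0) for s, e in zip(starts[:-1], starts[1:])])

# Hypothetical roof points (N, 3); centroids near ridges and eaves line up and can
# then be tested for the linear distribution the algorithm looks for.
pts = np.load("roof_points.npy")
cog = grid_centers_of_gravity(pts, cell=0.3)
print("occupied cells:", len(cog))
```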

19 pages, 8886 KiB  
Article
High-Precision Calibration Method and Error Analysis of Infrared Binocular Target Ranging Systems
by Changwen Zeng, Rongke Wei, Mingjian Gu, Nejie Zhang and Zuoxiao Dai
Electronics 2024, 13(16), 3188; https://doi.org/10.3390/electronics13163188 - 12 Aug 2024
Abstract
Infrared binocular cameras, leveraging their distinct thermal imaging capabilities, are well-suited for visual measurement and 3D reconstruction in challenging environments. The precision of camera calibration is essential for leveraging the full potential of these infrared cameras. To overcome the limitations of traditional calibration techniques, a novel method for calibrating infrared binocular cameras is introduced. By creating a virtual target plane that closely mimics the geometry of the real target plane, the method refines the feature point coordinates, leading to enhanced precision in infrared camera calibration. The virtual target plane is obtained by inverse projecting the centers of the imaging ellipses, which are estimated with sub-pixel edge precision, into three-dimensional space, and then optimized using the RANSAC least squares method. Subsequently, the imaging ellipses are inversely projected onto the virtual target plane, where their centers are identified. The corresponding world coordinates of the feature points are then refined through a linear optimization process. These coordinates are reprojected onto the imaging plane, yielding optimized pixel feature points. The calibration procedure is iteratively performed to determine the ultimate set of calibration parameters. The method has been validated through experiments, demonstrating an average reprojection error of less than 0.02 pixels and a significant 24.5% improvement in calibration accuracy over traditional methods. Furthermore, a comprehensive analysis has been conducted to identify the primary sources of calibration error. Ultimately, this achieves an error rate of less than 5% in infrared stereo ranging within a 55-m range. Full article

13 pages, 9589 KiB  
Article
Metrological Analysis with Covariance Features of Micro-Channels Fabricated with a Femtosecond Laser
by Matteo Verdi, Federico Bassi, Luigi Calabrese, Martina Azzolini, Salim Malek, Roberto Battisti, Eleonora Grilli, Fabio Menna, Enrico Gallus and Fabio Remondino
Metrology 2024, 4(3), 398-410; https://doi.org/10.3390/metrology4030024 - 1 Aug 2024
Abstract
This study presents an automated methodology for evaluating micro-channels fabricated using a femtosecond laser on stainless steel substrates. We utilize 3D surface topography and metrological analyses to extract geometric features and detect fabrication defects. Standardized samples were analyzed using a light interferometer, and the resulting data were processed with Principal Component Analysis (PCA) and RANSAC algorithms to derive channel characteristics, such as depth, wall taper, and surface roughness. The proposed method identifies common defects, including bumps and V-defects, which can compromise the functionality of micro-channels. The effectiveness of the approach is validated by comparisons with commercial solutions. This automated procedure aims to enhance the reliability and precision of femtosecond laser micro-milling for industrial applications. The detected defects, combined with fabrication parameters, could be ingested in an AI-based process to optimize fabrication processes. Full article
(This article belongs to the Special Issue Advances in Optical 3D Metrology)
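
The PCA part of the analysis, fitting a reference plane to the measured surface so channel depth and roughness can be read from the residuals, admits a compact sketch. The heightmap file, units, and the RMS roughness definition are assumptions made for illustration.

```python
import numpy as np

def pca_plane(points):
    """Least-squares reference plane via PCA: returns centroid and unit normal
    (the singular vector with the smallest singular value)."""
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid, full_matrices=False)
    return centroid, Vt[-1]

# Hypothetical interferometer heightmap of the substrate around a channel.
z = np.load("heightmap.npy")                      # assumed (H, W) array in micrometres
yy, xx = np.mgrid[0:z.shape[0], 0:z.shape[1]]
pts = np.c_[xx.ravel(), yy.ravel(), z.ravel()].astype(float)

centroid, normal = pca_plane(pts)
signed_dist = (pts - centroid) @ normal            # distances from the reference plane
resid = signed_dist - np.median(signed_dist)       # substrate residuals centred near zero
depth = np.abs(resid).max()                        # crude channel depth estimate
roughness = np.sqrt(np.mean(resid[np.abs(resid) < 1.0] ** 2))  # RMS roughness of the substrate
print(f"depth ~ {depth:.2f} um, roughness ~ {roughness:.3f} um")
```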

20 pages, 10764 KiB  
Article
Point Cloud Measurement of Rubber Tread Dimension Based on RGB-Depth Camera
by Luobin Huang, Mingxia Chen and Zihao Peng
Appl. Sci. 2024, 14(15), 6625; https://doi.org/10.3390/app14156625 - 29 Jul 2024
Abstract
To achieve an accurate measurement of tread size after fixed-length cutting, this paper proposes a point-cloud-based tread size measurement method. Firstly, a mathematical model of corner points and a reprojection error is established, and the optimal solution of the number of corner points is determined by the non-dominated sorting genetic algorithm II (NSGA-II), which reduces the reprojection error of the RGB-D camera. Secondly, to address the problem of the low accuracy of the traditional pixel metric ratio measurement method, the random sample consensus (RANSAC) point cloud segmentation algorithm and the oriented bounding box (OBB) collision detection algorithm are introduced to complete the accurate detection of the tread size. By comparing the absolute error and relative error data of several groups of experiments, the accuracy of the detection method in this paper reaches 1 mm, and the measurement deviation is between 0.14% and 2.67%, which meets the highest accuracy level of the national standard. In summary, the RGB-D visual inspection method constructed in this paper has the characteristics of low cost and high inspection accuracy, which is a potential solution to enhance the pickup guidance of tread size measurement. Full article
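
The OBB step, wrapping the segmented tread points in an oriented bounding box whose extents give the tread dimensions, is commonly derived from the principal axes of the points. The NumPy sketch below shows that generic construction; the input array is assumed, and this is not necessarily the exact collision-detection variant used in the paper.

```python
import numpy as np

def oriented_bounding_box(pts):
    """PCA-based oriented bounding box: returns rotation (columns = axes),
    box center, and extents (lengths along each principal axis)."""
    centroid = pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(pts - centroid, full_matrices=False)
    local = (pts - centroid) @ Vt.T            # coordinates in the principal frame
    lo, hi = local.min(axis=0), local.max(axis=0)
    extents = hi - lo
    center = centroid + ((lo + hi) / 2) @ Vt
    return Vt.T, center, extents

# Hypothetical tread points left after RANSAC segmentation of the conveyor plane.
tread = np.load("tread_points.npy")            # assumed (N, 3) array in millimetres
_, _, extents = oriented_bounding_box(tread)
length, width, thickness = np.sort(extents)[::-1]
print(f"tread length ~ {length:.1f} mm, width ~ {width:.1f} mm")
```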

23 pages, 24773 KiB  
Article
Design and Experiment of Ordinary Tea Profiling Harvesting Device Based on Light Detection and Ranging Perception
by Xiaolong Huan, Min Wu, Xianbing Bian, Jiangming Jia, Chenchen Kang, Chuanyu Wu, Runmao Zhao and Jianneng Chen
Agriculture 2024, 14(7), 1147; https://doi.org/10.3390/agriculture14071147 - 15 Jul 2024
Abstract
Due to the complex shape of the tea tree canopy and the large undulation of a tea garden terrain, the quality of fresh tea leaves harvested by existing tea harvesting machines is poor. This study proposed a tea canopy surface profiling method based on 2D LiDAR perception and investigated the extraction and fitting methods of canopy point clouds. Meanwhile, a tea profiling harvester prototype was developed and field tests were conducted. The tea profiling harvesting device adopted a scheme of sectional arrangement of multiple groups of profiling tea harvesting units, and each unit sensed the height information of its own bottom canopy area through 2D LiDAR. A cross-platform communication network was established, enabling point cloud fitting of tea plant surfaces and accurate estimation of cutter profiling height through the RANSAC algorithm. Additionally, a sensing control system with multiple execution units was developed using rapid control prototype technology. The results of field tests showed that the bud leaf integrity rate was 84.64%, the impurity rate was 5.94%, the missing collection rate was 0.30%, and the missing harvesting rate was 0.68%. Furthermore, 89.57% of the harvested tea could be processed into commercial tea, with 88.34% consisting of young tea shoots with one bud and three leaves or fewer. All of these results demonstrated that the proposed device effectively meets the technical standards for machine-harvested tea and the requirements of standard tea processing techniques. Moreover, compared to other commercial tea harvesters, the proposed tea profiling harvesting device demonstrated improved performance in harvesting fresh tea leaves. Full article
(This article belongs to the Special Issue Sensor-Based Precision Agriculture)

17 pages, 6246 KiB  
Article
YPL-SLAM: A Simultaneous Localization and Mapping Algorithm for Point–line Fusion in Dynamic Environments
by Xinwu Du, Chenglin Zhang, Kaihang Gao, Jin Liu, Xiufang Yu and Shusong Wang
Sensors 2024, 24(14), 4517; https://doi.org/10.3390/s24144517 - 12 Jul 2024
Abstract
Simultaneous Localization and Mapping (SLAM) is one of the key technologies with which to address the autonomous navigation of mobile robots, utilizing environmental features to determine a robot’s position and create a map of its surroundings. Currently, visual SLAM algorithms typically yield precise and dependable outcomes in static environments, and many algorithms opt to filter out the feature points in dynamic regions. However, when there is an increase in the number of dynamic objects within the camera’s view, this approach might result in decreased accuracy or tracking failures. Therefore, this study proposes a solution called YPL-SLAM based on ORB-SLAM2. The solution adds a target recognition and region segmentation module to determine the dynamic region, potential dynamic region, and static region; determines the state of the potential dynamic region using the RANSAC method with epipolar geometric constraints; and removes the dynamic feature points. It then extracts the line features of the non-dynamic region and finally performs the point–line fusion optimization process using a weighted fusion strategy, considering the image dynamic score and the number of successful feature point–line matches, thus ensuring the system’s robustness and accuracy. A large number of experiments have been conducted using the publicly available TUM dataset to compare YPL-SLAM with globally leading SLAM algorithms. The results demonstrate that the new algorithm surpasses ORB-SLAM2 in terms of accuracy (with a maximum improvement of 96.1%) while also exhibiting a significantly enhanced operating speed compared to Dyna-SLAM. Full article
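
The dynamic-region test described here, fitting epipolar geometry with RANSAC and treating matches that violate it as moving points, is conventionally done with a fundamental matrix and the point-to-epiline distance. The OpenCV sketch below illustrates that test only; the matched coordinate arrays, RANSAC threshold, and "dynamic" cutoff are assumptions, and the paper's semantic-region logic is not reproduced.

```python
import cv2
import numpy as np

def dynamic_point_mask(pts_prev, pts_curr, ransac_thresh=1.0, dyn_thresh=3.0):
    """Flag matched points whose distance to their epipolar line exceeds dyn_thresh.
    pts_prev, pts_curr: (N, 2) float32 pixel coordinates of matched features."""
    F, _ = cv2.findFundamentalMat(pts_prev, pts_curr,
                                  cv2.FM_RANSAC, ransac_thresh, 0.99)
    if F is None:
        return np.zeros(len(pts_prev), dtype=bool)
    # Epipolar lines in the current image for previous-frame points: l = F @ [x, y, 1].
    ones = np.ones((len(pts_prev), 1))
    lines = (F @ np.hstack([pts_prev, ones]).T).T          # (N, 3) lines a*x + b*y + c = 0
    a, b, c = lines[:, 0], lines[:, 1], lines[:, 2]
    d = np.abs(a * pts_curr[:, 0] + b * pts_curr[:, 1] + c) / np.sqrt(a**2 + b**2 + 1e-12)
    return d > dyn_thresh                                   # likely dynamic points

# Usage with hypothetical matched feature coordinates from consecutive frames:
# dyn = dynamic_point_mask(prev_xy, curr_xy); static_pts = curr_xy[~dyn]
```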

17 pages, 5421 KiB  
Article
Application of Micro-Plane Projection Moving Least Squares and Joint Iterative Closest Point Algorithms in Spacecraft Pose Estimation
by Youzhi Li, Yuan Han, Jiaqi Yao, Yanqiu Wang, Fu Zheng and Zhibin Sun
Appl. Sci. 2024, 14(13), 5855; https://doi.org/10.3390/app14135855 - 4 Jul 2024
Abstract
Accurately determining the attitude of non-cooperative spacecraft in on-orbit servicing (OOS) has posed a challenge in recent years. In point cloud-based spatial non-cooperative target attitude estimation schemes, high-precision point clouds, which are more robust to noise, can offer more accurate data input for three-dimensional registration. To enhance registration accuracy, we propose a noise filtering method based on moving least squares microplane projection (mpp-MLS). This method retains salient target feature points while eliminating redundant points, thereby enhancing registration accuracy. Higher accuracy in point clouds enables a more precise estimation of spatial target attitudes. For coarse registration, we employed the Random Sample Consensus (RANSAC) algorithm to enhance accuracy and alleviate the adverse effects of point cloud mismatches. For fine registration, the J-ICP algorithm was utilized to estimate pose transformations and minimize spacecraft cumulative pose estimation errors during movement transformations. Semi-physical experimental results indicate that the proposed attitude parameter measurement method outperformed the classic ICP registration method. It yielded maximum translation and rotation errors of less than 1.57 mm and 0.071°, respectively, and reduced maximum translation and rotation errors by 56% and 65%, respectively, thereby significantly enhancing the attitude estimation accuracy of non-cooperative targets. Full article
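
The idea behind the mpp-MLS filter, projecting each point onto a plane fitted to its local neighborhood so that noise normal to the surface is suppressed, can be illustrated with a plain local-plane projection. This sketch uses an unweighted neighborhood fit and a SciPy k-d tree; the moving-least-squares weighting and feature-preservation details of the paper are not reproduced, and the input file is assumed.

```python
import numpy as np
from scipy.spatial import cKDTree

def local_plane_project(pts, k=20):
    """Project every point onto the least-squares plane of its k nearest neighbors."""
    tree = cKDTree(pts)
    _, idx = tree.query(pts, k=k)
    out = np.empty_like(pts)
    for i, nbr in enumerate(idx):
        nbrs = pts[nbr]
        c = nbrs.mean(axis=0)
        _, _, Vt = np.linalg.svd(nbrs - c, full_matrices=False)
        n = Vt[-1]                          # local plane normal
        out[i] = pts[i] - ((pts[i] - c) @ n) * n
    return out

# Hypothetical noisy spacecraft scan (N, 3); the smoothed cloud then feeds the
# RANSAC coarse + ICP fine registration stages described in the abstract.
noisy = np.load("target_scan.npy")
smoothed = local_plane_project(noisy, k=16)
```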

18 pages, 4924 KiB  
Article
LOD2-Level+ Low-Rise Building Model Extraction Method for Oblique Photography Data Using U-NET and a Multi-Decision RANSAC Segmentation Algorithm
by Yufeng He, Xiaobian Wu, Weibin Pan, Hui Chen, Songshan Zhou, Shaohua Lei, Xiaoran Gong, Hanzeyu Xu and Yehua Sheng
Remote Sens. 2024, 16(13), 2404; https://doi.org/10.3390/rs16132404 - 30 Jun 2024
Abstract
Oblique photography is a regional digital surface model generation technique that can be widely used for building 3D model construction. However, due to the lack of geometric and semantic information about the building, these models make it difficult to differentiate more detailed components in the building, such as roofs and balconies. This paper proposes a deep learning-based method (U-NET) for constructing 3D models of low-rise buildings that addresses these issues. The method ensures complete geometric and semantic information and conforms to the LOD2 level. First, digital orthophotos are used to perform building extraction based on U-NET, and then a contour optimization method based on the main direction of the building and the center of gravity of the contour is used to obtain the regular building contour. Second, the pure building point cloud model representing a single building is extracted from the whole point cloud scene based on the acquired building contour. Finally, the multi-decision RANSAC algorithm is used to segment the building detail point cloud and construct a triangular mesh of building components, followed by a triangular mesh fusion and splicing method to achieve monolithic building components. The paper presents experimental evidence that the building contour extraction algorithm can achieve a 90.3% success rate and that the resulting single building 3D model contains LOD2 building components, which contain detailed geometric and semantic information. Full article
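
The segmentation idea the multi-decision RANSAC builds on, repeatedly extracting the dominant plane and removing its inliers until no large plane remains, can be sketched with Open3D as below. This is plain sequential RANSAC under assumed thresholds and an assumed input file, not the paper's multi-decision variant.

```python
import open3d as o3d

def sequential_ransac_planes(pcd, max_planes=6, dist=0.05, min_inliers=500):
    """Repeatedly extract the dominant plane with RANSAC and remove its inliers,
    yielding one roughly planar building component per iteration."""
    planes, rest = [], pcd
    for _ in range(max_planes):
        if len(rest.points) < min_inliers:
            break
        _, inliers = rest.segment_plane(distance_threshold=dist,
                                        ransac_n=3, num_iterations=1000)
        if len(inliers) < min_inliers:
            break
        planes.append(rest.select_by_index(inliers))
        rest = rest.select_by_index(inliers, invert=True)
    return planes, rest

# Hypothetical single-building cloud cropped by the U-NET contour.
building = o3d.io.read_point_cloud("building_clip.ply")
components, remainder = sequential_ransac_planes(building)
print(len(components), "planar components,", len(remainder.points), "points left")
```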

15 pages, 4809 KiB  
Article
LiDAR Point Cloud Super-Resolution Reconstruction Based on Point Cloud Weighted Fusion Algorithm of Improved RANSAC and Reciprocal Distance
by Xiaoping Yang, Ping Ni, Zhenhua Li and Guanghui Liu
Electronics 2024, 13(13), 2521; https://doi.org/10.3390/electronics13132521 - 27 Jun 2024
Abstract
This paper proposes a point-by-point weighted fusion algorithm based on an improved random sample consensus (RANSAC) and inverse distance weighting to address the issue of low-resolution point cloud data obtained from light detection and ranging (LiDAR) sensors and single technologies. By fusing low-resolution point clouds with higher-resolution point clouds at the data level, the algorithm generates high-resolution point clouds, achieving the super-resolution reconstruction of lidar point clouds. This method effectively reduces noise in the higher-resolution point clouds while preserving the structure of the low-resolution point clouds, ensuring that the semantic information of the generated high-resolution point clouds remains consistent with that of the low-resolution point clouds. Specifically, the algorithm constructs a K-d tree using the low-resolution point cloud to perform a nearest neighbor search, establishing the correspondence between the low-resolution and higher-resolution point clouds. Next, the improved RANSAC algorithm is employed for point cloud alignment, and inverse distance weighting is used for point-by-point weighted fusion, ultimately yielding the high-resolution point cloud. The experimental results demonstrate that the proposed point cloud super-resolution reconstruction method outperforms other methods across various metrics. Notably, it reduces the Chamfer Distance (CD) metric by 0.49 and 0.29 and improves the Precision metric by 7.75% and 4.47%, respectively, compared to two other methods. Full article
(This article belongs to the Special Issue Digital Security and Privacy Protection: Trends and Applications)
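
The fusion step, finding for each low-resolution point its nearest neighbors in the aligned higher-resolution cloud and blending them with inverse distance weights, can be sketched as follows. The neighborhood size, weighting exponent, and file names are assumptions, and the improved RANSAC alignment is taken as already applied.

```python
import numpy as np
from scipy.spatial import cKDTree

def idw_fuse(low_res, high_res, k=8, power=2, eps=1e-9):
    """For each low-resolution point, blend its k nearest high-resolution
    neighbors with inverse-distance weights to produce a fused point."""
    tree = cKDTree(high_res)
    dists, idx = tree.query(low_res, k=k)
    w = 1.0 / (dists + eps) ** power                   # (N, k) inverse distance weights
    w /= w.sum(axis=1, keepdims=True)
    fused = np.einsum("nk,nkd->nd", w, high_res[idx])  # weighted neighbor average
    return np.vstack([low_res, fused])                 # densified output cloud

# Hypothetical clouds already aligned by the improved-RANSAC registration step.
low = np.load("low_res_aligned.npy")     # (N, 3)
high = np.load("high_res_aligned.npy")   # (M, 3)
dense = idw_fuse(low, high)
print(dense.shape)
```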