LiDAR and Optical Remote Sensing Improve 3D Virtual Reality and Digital Twin Cities Production

In an article published in the journal Electronics, researchers analyzed object classification methods based on the fusion of airborne light detection and ranging (LiDAR) point clouds and optical remote sensing images.

Study: Systematic Comparison of Objects Classification Methods Based on ALS and Optical Remote Sensing Images in Urban Areas. Image Credit: Jackie Niam/Shutterstock.com

The precise categorization and extraction of objects in urban areas is a critical problem for urban 3D scene modeling, digital twin cities, and urban resource management.

Light Detection and Ranging Technology

Airborne LiDAR technology is widely used in mapping, agriculture, the military, and other industries. It can quickly and directly acquire high-precision 3D information about surface objects in urban areas, and it has become a significant source of remote sensing data, now widely applied to the detection and planning of urban roads. Many researchers also extract buildings and roads from optical remote sensing images, a crucial data source for feature classification and extraction.

Homogeneous Data

The homogeneous data often used for object classification of urban scenes has the benefit of a uniform information storage structure and can be combined with the source data during pre-processing. However, since urban environments are complex and dynamic, a single sensor can rarely satisfy all the requirements of urban remote sensing applications. All the data needed for feature extraction and classification therefore cannot be obtained from one sensor alone.

Previous Studies

Previous studies show that multi-object segmentation and classification in urban areas using only LiDAR data is prone to misclassification: objects that are similar in 3D morphology may not be distinguished from one another.

This stems from the complexity of airborne LiDAR point clouds, whose spatial distribution is irregular and scene-dependent.

Constrained by a single data source, these studies often refine their algorithms, or design more complex ones tailored to the actual data, to obtain more accurate classification results. Consequently, some researchers turn to additional data sources to classify point cloud data or remote sensing image data.

Improving the Classification Extraction Accuracy of Urban Scene Objects

In this study, feature extraction combines the 2D spectral and textural information of images with the 3D geometry and spatial structure information of point clouds to increase the classification and extraction accuracy of urban scene objects, based on airborne LiDAR point clouds and high-resolution optical remote sensing images.
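The fusion step described above can be sketched in a few lines. The following is a purely illustrative example with synthetic data; the feature names (height, intensity, planarity, four spectral bands) are stand-ins, not the study's actual feature set, and real pipelines derive the point-to-pixel mapping from georeferencing rather than at random.

```python
import numpy as np

rng = np.random.default_rng(0)

n_points = 1000
# Hypothetical per-point LiDAR features: height above ground,
# return intensity, local planarity
lidar_features = rng.random((n_points, 3))

# A small stand-in "orthophoto": 4 spectral bands on a 100 x 100 grid
image = rng.random((100, 100, 4))

# Pixel coordinates of each point's projection (random here; in practice
# obtained from the georeferencing of the point cloud and the image)
rows = rng.integers(0, 100, n_points)
cols = rng.integers(0, 100, n_points)
spectral_features = image[rows, cols]          # shape (n_points, 4)

# Fused feature vector: 3 geometric + 4 spectral attributes per point
fused = np.hstack([lidar_features, spectral_features])
print(fused.shape)                             # (1000, 7)
```

Each point then carries both its 3D structural attributes and the 2D spectral/textural attributes of the pixel it projects onto, which is what allows a single classifier to exploit both data sources.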

It also validates the importance of choosing an appropriate classifier, the superiority of multi-source data fusion over single-source data for extracting multiple object targets in urban scenes, and the need to optimize feature combinations, and it analyzes classifier and sampling-ratio parameters to provide an adaptable framework for the multi-classification problem in urban scenes.

How the Study was Conducted

The technical route of this work consists of several key steps: feature extraction and fusion, feature set design, feature fusion optimization, sample extraction, model setup, and classification.

This research sought to categorize point clouds in urban areas, which is a multi-classification problem since the data sets were not linearly separable. Based on the relative merits of the candidate models, Decision Tree (DT), Random Forest (RF), and Support Vector Machine (SVM) models were ultimately chosen as the classifiers for the urban point clouds.
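Setting up these three classifiers is straightforward with scikit-learn. The sketch below trains all three on synthetic stand-in data; the class names are real scikit-learn estimators, but the data, feature count, and hyperparameters are illustrative assumptions, not the study's configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the fused features, with four classes mimicking
# low vegetation, impervious surface, building, and tree
X, y = make_classification(n_samples=2000, n_features=7, n_informative=5,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)

models = {
    "DT": DecisionTreeClassifier(random_state=0),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "SVM": SVC(kernel="rbf"),
}

# Fit each model and report its held-out accuracy
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te)
          for name, m in models.items()}
print(scores)
```

All three estimators handle multi-class targets natively (the SVM via a one-vs-one scheme internally), which is why they suit a four-class urban labeling task.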

Significant Findings of the Study

This study used high-resolution optical image data and LiDAR point cloud data as research objects. First, the SVM, DT, and RF models were trained using single-source and multi-source feature sets. These trained models were then used to classify the point cloud data into the common urban classes of low vegetation, impervious surfaces, buildings, and trees.

It was observed that machine learning classification based on multi-source data outperformed classification based on single-source data when the same number of samples and the same model were used. The feature selection tests showed that certain features play a common role in point cloud classification, and that classification outcomes varied significantly between classifiers.

The SVM classifier produced the poorest results, while the RF classifier produced the best. The overall classification accuracy of all three classifiers increased as the sample size grew. In this research, the random forest model proved the most effective for classifying the point cloud data in terms of both classification accuracy and processing time.
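The sampling-ratio experiment can be sketched as follows: train the best-performing classifier (random forest) on growing fractions of the training set and record held-out accuracy. The data and ratios here are synthetic placeholders, not the paper's actual sampling scheme.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic four-class stand-in for the fused urban features
X, y = make_classification(n_samples=3000, n_features=7, n_informative=5,
                           n_classes=4, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5,
                                          random_state=1)

accuracies = {}
for ratio in (0.1, 0.5, 1.0):
    # Train on only the first `ratio` fraction of the training samples
    n = int(len(X_tr) * ratio)
    clf = RandomForestClassifier(n_estimators=100, random_state=1)
    clf.fit(X_tr[:n], y_tr[:n])
    accuracies[ratio] = clf.score(X_te, y_te)

print(accuracies)
```

On such synthetic data, accuracy typically rises with the training fraction, mirroring the study's observation that overall accuracy increased with sample size.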

Future Prospects

While there are certain discrepancies in how urban feature types are represented in point cloud data, particularly in their spatial distributions, image data provides rich texture and spectral information that point cloud data lacks. Combining the two may significantly increase classification accuracy.

In the future, point cloud classification accuracy may be further enhanced, and finer feature extraction made possible, by adding point cloud features specific to certain feature types or richer multispectral features. As the technology advances, future research can examine point cloud classification techniques that fuse additional multi-source data.

Reference

Cai, H., Wang, Y., Lin, Y., Li, S., Wang, M., & Teng, F. (2022). Systematic Comparison of Objects Classification Methods Based on ALS and Optical Remote Sensing Images in Urban Areas. Electronics. https://www.mdpi.com/2079-9292/11/19/3041/htm

Written by

Taha Khan

Taha graduated from HITEC University Taxila with a Bachelors in Mechanical Engineering. During his studies, he worked on several research projects related to Mechanics of Materials, Machine Design, Heat and Mass Transfer, and Robotics. After graduating, Taha worked as a Research Executive for 2 years at an IT company (Immentia). He has also worked as a freelance content creator at Lancerhop. In the meantime, Taha did his NEBOSH IGC certification and expanded his career opportunities.  

Citations

Please use the following format to cite this article in your essay, paper or report:

Khan, Taha. (2022, September 27). LiDAR and Optical Remote Sensing Improve 3D Virtual Reality and Digital Twin Cities Production. AZoOptics. https://www.azooptics.com/News.aspx?newsID=27959.
