UAV 영상을 활용한 GeoAI 기반 정밀 환경공간정보 구축 = Construction of Precision Environmental Spatial Information Based on Geospatial Artificial Intelligence and Unmanned Aerial Vehicle Imagery
Author
Publication
Changwon: Changwon National University Graduate School, 2024
Thesis Information
Thesis (Ph.D.) -- Changwon National University Graduate School: Department of Smart Environmental Energy Engineering, Feb. 2024
Year of Publication
2024
Language
Korean
Subject Keywords
Country of Publication (City)
Gyeongsangnam-do
Physical Description
172 ; 26 cm
General Note
Advisor: 박경훈
UCI Identifier
I804:48019-000000017198
Holding Institution
The creation of environmental spatial information is critical for the development of local environmental policies, yet it is challenged by limitations in field surveys and spatial data editing. This study aimed to explore methodologies for the production of precise environmental spatial information using UAVs and GeoAI. Focusing on case studies of land cover and individual tree information, methodologies for UAV data collection and GeoAI application were designed and analyzed, considering two different spatial information representation approaches.
The first approach was a continuous field view, which represents the variability of a given attribute at every location within the target area without gaps. This was applied to land cover information, which requires classifying and representing diverse targets with complex physical characteristics as homogeneous groups. The second approach was a discrete object view, which constructs information for specific objects and treats all other areas as empty space. This was applied to individual tree information, targeting the construction of data such as diameter at breast height (DBH), crown area, and tree height, which are essential for estimating carbon absorption and storage capacities.
Land cover information was collected using various sensors mounted on UAVs, including multispectral and thermal infrared cameras. These sensors facilitated the acquisition of image data that reveals the physical characteristics of the land cover. Tree information was obtained using UAV-based LiDAR sensors, capturing 3D positional data that illustrate the morphological characteristics of trees, along with RGB images. To address the complexities arising from high-resolution images, object-based image analysis and deep learning techniques, as part of GeoAI methods, were applied, yielding the following analysis results:
The spectral and geometric imagery collected by UAV was analyzed for its characteristics and its contribution to classifying each land cover category. nDSM, NDVI, SAVI, and LST images were identified as having significant importance in the variable analysis. The combination of nDSM, NDVI, and SAVI was found to be effective in classifying buildings, trees, and grass, while LST was crucial for classifying car roads, pedestrian paths, and bare soil. Additionally, the thermal capacity characteristics of each cover type suggest that such imagery can serve as a foundational input for urban land cover classification.
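The vegetation indices named above follow their standard definitions; as a minimal sketch (the toy band arrays and the soil-adjustment factor L = 0.5 are illustrative assumptions, not values taken from this study):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / (nir + red + eps)

def savi(nir, red, L=0.5, eps=1e-9):
    """Soil-Adjusted Vegetation Index with soil-brightness factor L (0.5 is a common default)."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) * (1.0 + L) / (nir + red + L + eps)

# Toy 2x2 reflectance rasters standing in for UAV multispectral bands.
nir = np.array([[0.60, 0.50], [0.40, 0.30]])
red = np.array([[0.10, 0.20], [0.30, 0.30]])
print(ndvi(nir, red))  # vegetated pixels approach 1; bare soil falls toward 0
```

SAVI dampens the soil-background signal that distorts NDVI over sparse canopies, which is consistent with the finding that the two indices complement each other for vegetation classes.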
Further, the performance of models for object-based image analysis (OBIA-RF, OBIA-ANN) and deep learning-based semantic segmentation (U-Net) was compared. The U-Net model demonstrated superior performance in land cover classification with an overall accuracy of 93.0%. The effectiveness of the U-Net model was attributed to its use of 2D images as input data that incorporate spatial information alongside spectral data. Although the OBIA-RF and -ANN models showed slightly lower overall accuracy than the U-Net model, they exhibited over 94% accuracy and reliability in classifying buildings and trees. The U-Net model required a substantial training duration of 25 hours, whereas the OBIA-RF and -ANN models completed training within an hour. For classifications focusing solely on buildings or tree areas, object-based image analysis techniques were deemed more efficient than deep learning methods, which necessitate extensive training time.
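Overall accuracy, the metric used to compare the models above, is simply the trace of the confusion matrix divided by the total sample count; a minimal sketch with a hypothetical three-class matrix (the counts and class names are illustrative, not the study's data):

```python
import numpy as np

def overall_accuracy(cm):
    """Overall accuracy of a square confusion matrix: correctly classified / total."""
    cm = np.asarray(cm, dtype=float)
    return np.trace(cm) / cm.sum()

# Hypothetical matrix (rows = reference, columns = predicted).
cm = [[90, 5, 5],    # e.g., building
      [4, 88, 8],    # e.g., tree
      [2, 6, 92]]    # e.g., grass
print(overall_accuracy(cm))  # 0.9
```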
Regarding individual tree information, individual tree areas were identified using a deep learning-based instance segmentation technique, with attributes such as crown area, height, and DBH subsequently established. To facilitate the application of the deep learning model, a method for constructing training data for individual tree detection was proposed, utilizing object-based image analysis and watershed segmentation techniques. Although limitations of individual tree detection methods can lead to a single tree area being subdivided, this approach was expected to overcome the constraints of traditional field surveys and manual vectorizing, especially in densely forested areas. Consequently, this strategy is likely to serve as an essential link for applying deep learning techniques that require extensive training data.
The Mask R-CNN model, a deep learning-based instance segmentation technique, was trained with the individual tree areas that had been prepared as training data. The results demonstrated stable accuracy in individual tree detection, achieving F1 scores of 0.9583 at an IoU threshold of 50% and 0.8802 at 75%. Subsequently, structural information of the trees, such as crown area and height, was calculated from the individual tree detection results and UAV images. These variables were used as independent variables in a regression model, which predicted DBH with an RMSE of 1.86 cm and a coefficient of determination of 0.73. Although UAV images may offer slightly lower accuracy than field surveys, their use was considered effective for efficiently constructing individual tree information across extensive areas for the estimation of carbon absorption and storage.
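The F1 scores quoted above are computed from matched detections at an IoU cutoff: a prediction counts as a true positive only if its overlap with a reference crown meets the threshold. A minimal sketch (the IoU values and detection counts below are hypothetical, not the study's results):

```python
import numpy as np

def mask_iou(a, b):
    """Intersection over Union of two boolean masks."""
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def detection_f1(matched_ious, n_pred, n_ref, thr=0.50):
    """F1 where a matched prediction counts as TP only if its IoU >= thr."""
    tp = sum(v >= thr for v in matched_ious)
    fp, fn = n_pred - tp, n_ref - tp
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

# Hypothetical: 10 predicted crowns, 9 reference crowns, 8 matched pairs.
ious = [0.92, 0.85, 0.81, 0.77, 0.74, 0.66, 0.58, 0.41]
print(detection_f1(ious, n_pred=10, n_ref=9, thr=0.50))
print(detection_f1(ious, n_pred=10, n_ref=9, thr=0.75))  # stricter cutoff lowers F1
```

Raising the threshold from 50% to 75% penalizes loosely matched crowns, which explains why the reported F1 drops from 0.9583 to 0.8802 on the same detections.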
In conclusion, the case study involving land cover and individual trees demonstrated that its findings could be extended as a methodology for transforming various environmental data into spatial information. Through integrating UAVs for data collection and GeoAI for information extraction, the study proposed an automated approach for constructing precise environmental spatial information, thereby reducing the need for extensive human involvement in tasks like field surveys and vectorizing. This has the potential to contribute significantly to the development of sustainable methods for monitoring environmental spatial information.