Deep Learning Networks and ICT-based Plant Disease and Animal Activity Detection System for Digital Agriculture
Publication: Jinju : Gyeongsang National University Graduate School, 2022
Thesis: Thesis (Ph.D.) -- Gyeongsang National University Graduate School, Department of Biosystems Engineering, Feb. 2022
Year of publication: 2022
Language: English
Place of publication: Gyeongsangnam-do
Physical description: xvii, 94 p. ; 26 cm
General note: Advisor: Kim Hyun-tae
UCI identifier: I804:48003-000000031241
The majority of food for human beings comes from agriculture. Recently, farmers have faced significant pressure to fulfill the rising demand for agricultural products driven by the growing world population. However, factors such as catastrophic diseases, urbanization, and climate change limit agricultural production. Moreover, conventional and subsistence farming cannot meet the increased global food requirement. In this context, applying the latest technologies and tools in agriculture is essential for food safety and increased production. Therefore, the conventional farming concept has been rapidly transitioning into digital farming. The main objective of this study was to implement deep learning networks and information and communication technology (ICT) to detect plant diseases, segment and measure disease severity, and detect animal activity. A variety of deep convolutional neural networks were applied and their performances evaluated. This study is divided broadly into two parts. The first part deals with tomato disease classification using a lightweight attention-based convolutional neural network, and strawberry gray mold disease segmentation and severity measurement. The second part presents a pig posture and locomotion activity detection system using deep learning-based object detection models and a tracking algorithm.
Two experiments were conducted on plant disease classification and segmentation, and one on pig posture and walking activity detection. In the first experiment, images of ten tomato disease classes and healthy leaves were collected from both an open-source database and a glasshouse located at Gyeongsang National University. A lightweight attention-based deep convolutional neural network (ACNN) was designed to improve plant disease classification performance. The images were divided into training, testing, and validation datasets at a ratio of 8:1:1. The performance of the proposed model was then compared with a baseline CNN without attention (WACNN) and the standard ResNet50 model. In the second experiment, three concentrations of Botrytis cinerea (the causal agent of gray mold disease) were inoculated onto strawberry plants at an early reproductive stage. The occurrence of disease spots on the leaves and their expansion were recorded non-invasively every day using a handheld RGB camera. The raw images were pre-processed to remove the cluttered background and extract only the target leaf. A deep CNN-based pixel-level segmentation model (Unet) was then designed, trained, tested, and validated using the pre-processed images. The performance of the deep learning model was calculated using standard segmentation metrics (pixel accuracy, intersection over union (IoU) accuracy, and Dice accuracy) and validated using 5-fold cross-validation. Moreover, the performance of the Unet model was compared with XGBoost and K-means machine learning models and an image processing algorithm. Disease severity was calculated as the percentage of diseased pixels in a leaf.
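The segmentation metrics and the severity measure named above are standard quantities computed from binary masks. A minimal sketch (illustrative only, not the thesis's own code; array shapes and the toy example are assumptions) is:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel, IoU, and Dice accuracy for binary masks (1 = diseased pixel)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()       # true-positive pixels
    union = np.logical_or(pred, truth).sum()
    pixel_acc = (pred == truth).mean()           # fraction of pixels classified correctly
    iou = tp / union if union else 1.0           # intersection over union
    denom = pred.sum() + truth.sum()
    dice = 2 * tp / denom if denom else 1.0      # Dice coefficient
    return pixel_acc, iou, dice

def disease_severity(disease_mask, leaf_mask):
    """Severity = percentage of diseased pixels within the leaf region."""
    return 100.0 * disease_mask[leaf_mask.astype(bool)].mean()

# Toy 4x4 masks: prediction has one spurious diseased pixel.
pred  = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
truth = np.array([[1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
leaf  = np.ones((4, 4), dtype=int)               # whole image is leaf here
```

On this toy pair, pixel accuracy is 15/16, IoU is 2/3, and Dice is 0.8; the ground-truth severity is 2 diseased pixels out of 16 leaf pixels, i.e. 12.5%.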
The results of tomato disease classification showed that the deep CNN with the attention mechanism improved classification accuracy by 1.2% over the CNN without attention, at the cost of a few additional network parameters and some added complexity. The CNN without the attention module extracts global features from the whole image, whereas the characteristics of diseased regions are more specific to an individual disease class. The attention module therefore emphasizes regional features rather than global features, boosting the disease classification accuracy of the model. In terms of gray mold disease segmentation, average pixel, Dice, and IoU accuracies of 98.24%, 89.71%, and 82.12%, respectively, were achieved by the Unet model, followed by XGBoost (98.06%, 87.76%, and 80.12%) on 80 test images. The results showed that the Unet model surpasses the conventional XGBoost, K-means, and image processing techniques in detecting and quantifying gray mold disease. The Unet model has two encoder and decoder blocks without fully connected layers; the number of network parameters is thus considerably reduced, allowing the model to converge even with a small training dataset. Moreover, the Unet model produced a disease-segmented image of the same size as the input image owing to its up-converter (up-sampling) block.
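The idea that an attention module re-weights regional features before classification can be sketched generically. The following is a CBAM-style spatial attention gate in plain NumPy, offered only as an illustration of the mechanism; it is an assumption, not the thesis's ACNN architecture, and the (C, H, W) layout is likewise assumed:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(features):
    """Gate a (C, H, W) feature map by a per-location saliency score.

    The channel-wise average and maximum form a crude saliency map; a
    sigmoid turns it into a (0, 1) gate, so salient regions (e.g. lesion
    areas) keep their activations while flat background is attenuated.
    """
    avg_map = features.mean(axis=0)        # (H, W)
    max_map = features.max(axis=0)         # (H, W)
    gate = sigmoid(avg_map + max_map)      # (H, W), values in (0, 1)
    return features * gate[None, :, :]     # broadcast gate over channels
```

In a trained network the saliency map would come from a learned convolution rather than a fixed average/max sum; the fixed version is used here only to keep the sketch self-contained.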
For pig posture and walking activity detection, an experiment was conducted in an experimental pig barn located at Gyeongsang National University. The concentration of greenhouse gases (GHGs) was elevated by closing the ventilator and door of the barn for an hour three times a day (morning, daytime, and night), and the treatment was repeated for three days. GHG concentrations before, immediately after, and one hour after treatment were measured by taking air samples from three spatial locations near the center of the house and analyzed using gas chromatography (GC). A livestock environment monitoring system (LEMS) collected the other environmental data (temperature and humidity), including CO2 concentration. A top-view network camera (HIKVISION) was installed to record videos of pig activities, which were stored in a network video recorder (NVR). A total of 6,012 video frames were labeled manually using the computer vision annotation tool (CVAT) and split into training and testing datasets (9:1). Three variants of object detection models (YOLOv4, Faster R-CNN, and SSD ResNet) were trained and validated to detect pig postures and walking activity. The deep association simple online real-time tracking algorithm (Deep SORT) was then implemented to track individual pigs in the video clips. Pig posture and walking activity information was extracted from the hour before, during, and after each treatment period, and the changes in activity due to the compromised environment were analyzed. The pigs' standing, walking, and sternal lying activities decreased with increased GHG concentration, while the duration of the lateral lying posture increased. The pigs were also more active in the morning than in the daytime, and least active at night. Moreover, the pig posture detection performance of the object detection models was evaluated using average precision (AP) and mean AP (mAP); the YOLO model provided the highest mAP of 98.67%, followed by the Faster R-CNN model (96.42%).
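The AP and mAP figures quoted above follow the usual detection-evaluation recipe: rank detections by confidence, accumulate true/false positives, and integrate the precision-recall envelope. A minimal all-point-interpolation sketch (a generic illustration, not the thesis's evaluation code; the input encoding of matches is an assumption) is:

```python
import numpy as np

def average_precision(scores, matches, n_truth):
    """All-point interpolated AP for one class.

    scores:  detection confidences.
    matches: 1 if the detection matched a ground-truth box (IoU above
             threshold), else 0 (false positive).
    n_truth: number of ground-truth boxes for this class.
    """
    order = np.argsort(scores)[::-1]            # rank by confidence, descending
    tp = np.asarray(matches, float)[order]
    fp = 1.0 - tp
    tp_cum, fp_cum = np.cumsum(tp), np.cumsum(fp)
    recall = tp_cum / n_truth
    precision = tp_cum / (tp_cum + fp_cum)
    # Precision envelope: monotonically non-increasing from the right.
    envelope = np.maximum.accumulate(precision[::-1])[::-1]
    ap, prev_r = 0.0, 0.0
    for r, p in zip(recall, envelope):          # integrate over recall steps
        ap += (r - prev_r) * p
        prev_r = r
    return ap

def mean_ap(per_class_aps):
    """mAP is simply the mean of per-class APs."""
    return float(np.mean(per_class_aps))
```

For example, a detector whose two confident hits cover both ground-truth boxes scores AP = 1.0, while a false positive ranked between two hits (2 ground truths) gives AP = 0.5 + 0.5 x 2/3 ≈ 0.833.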
Furthermore, the YOLO model was also the fastest in detection speed (0.031 s/frame), followed by the SSD model (0.123 s/frame) and the Faster R-CNN model (0.15 s/frame). The deep learning networks therefore showed that they can effectively solve complex agricultural problems, although further research is recommended for additional improvement.
Finally, a web-based client-server architecture (http://sfsl.gnu.ac.kr) was designed to automatically collect environmental and image data from the experimental sites. In addition, JupyterHub, a multi-user interactive Python environment, was installed on the server (https://sfslws.gnu.ac.kr), allowing the deep learning models to run in the cloud.