Structured feature selection for explainable prediction models
Author
Publication
Seoul : Graduate School, Korea University, 2020
Thesis information
Thesis (Ph.D.) -- Graduate School, Korea University: Department of Industrial and Management Engineering, August 2020
Year of publication
2020
Language
English
Keywords
Country (city) of publication
Seoul
Physical description
xv, 207 leaves : illustrations, charts ; 26 cm
General notes
Advisor: 김성범
References: leaves 186-207
UCI identifier
I804:11009-000000231956
DOI identifier
Holding institutions
With recent advances in hardware and software technologies for data collection, storage, and wireless communication, diverse domains are facing larger-scale data than ever before. Data collected from various fields contain a wealth of valuable information about the states, events, and operation of systems and the relationships among their components. Machine learning algorithms are widely used to extract this information from large-scale data. However, because such datasets are collected from many different sources, modern machine learning algorithms must cope with a significant amount of irrelevant and redundant information. Thus, unearthing the valuable information that constitutes only a small proportion of these massive datasets has become important.
One way to handle such massive datasets with a large amount of redundant and irrelevant information is feature selection. Feature selection identifies a small subset of features that are most relevant to the predictive model. In general, feature selection methods yield a simple but informative model. In addition, they can mitigate overfitting and help prediction models achieve better generalization.
In datasets generated in various fields, the features may have a specific structure, and domain experts are often aware of it. Feature selection methods can be improved by incorporating this structure information. In particular, combining structure information with feature selection can improve both the predictive performance of the model and the quality of the selected feature sets, because considering the structure can reveal valuable information that is not available in the dataset itself. This thesis aims to establish structured feature selection methods, combined with prediction models, that provide better predictive performance as well as explainability.
This thesis considers two different approaches to explainable prediction models. One is structured feature selection in a linear regression model based on discrete optimization; the other is a neural network equipped with a hierarchical attention mechanism. Although the two approaches seem quite different, both consider the structure of the inputs and use only a part of them for prediction. Thus, both methods seek an important subset of the inputs while accounting for the structure. The resulting subset of inputs, which drives the prediction, is easier to understand and explain than feature selection results obtained without considering the structure.
First, I propose a method for feature selection and regression coefficient estimation in linear regression models that incorporates a graph structure over the features. The proposed method is based on a cardinality constraint that controls the number of selected features and a graph-structured subset constraint that encourages features adjacent in the graph to be selected or removed from the model together. Moreover, I develop an efficient discrete projected gradient descent method to tackle the NP-hardness of the problem, which originates from the discrete decision variables. Numerical experiments on simulated and real-world data demonstrate the usefulness and applicability of the proposed method by comparing it with existing graph regularization methods in terms of predictive accuracy and feature selection performance. The results confirm that the proposed method outperforms the existing methods.
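A projected gradient descent method of this kind alternates a gradient step on the least-squares loss with a projection onto the discrete constraint set. The graph-structured subset constraint is specific to this thesis, so the sketch below covers only the cardinality-only special case, which reduces to the well-known iterative hard thresholding scheme; all function names are my own, and this is a minimal illustration rather than the thesis's algorithm.

```python
import numpy as np

def hard_threshold(beta, k):
    """Projection onto the cardinality constraint ||beta||_0 <= k:
    keep the k largest coefficients in absolute value, zero the rest."""
    out = np.zeros_like(beta)
    if k > 0:
        idx = np.argsort(np.abs(beta))[-k:]
        out[idx] = beta[idx]
    return out

def iht_regression(X, y, k, n_iter=200):
    """Projected gradient descent for least squares under a cardinality
    constraint (iterative hard thresholding)."""
    n, p = X.shape
    # step size = 1 / Lipschitz constant of the gradient of (1/2n)||y - Xb||^2
    step = n / np.linalg.norm(X, 2) ** 2
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n
        beta = hard_threshold(beta - step * grad, k)
    return beta
```

The graph-structured variant would replace `hard_threshold` with a projection that also respects adjacency in the feature graph.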
Second, I propose a subset selection method that eliminates correlation among the selected features in linear regression models. The proposed method combines a cardinality constraint that controls the number of selected features with a correlation conflict constraint that prevents pairs of highly correlated features from being selected together. Furthermore, I develop a projected gradient descent method to handle the computational difficulty originating from the discreteness of the constraints. Experiments on simulated and real-world data show the usefulness and applicability of the proposed method. In the experiments, I compare the proposed method with existing regularized linear models equipped with grouping effects. The comparison confirms that the proposed method handles correlation among features in a different way.
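The abstract does not specify how the projection under the joint cardinality and correlation conflict constraints is computed, so the following is only a plausible greedy sketch of the idea: coefficients are admitted in decreasing magnitude, and a feature is skipped if its absolute correlation with any already selected feature exceeds a threshold.

```python
import numpy as np

def conflict_aware_projection(beta, k, corr, rho=0.9):
    """Greedy projection onto {at most k features, no selected pair with
    |correlation| > rho}. corr is the p-by-p feature correlation matrix.
    A hypothetical illustration, not the thesis's exact projection."""
    order = np.argsort(-np.abs(beta))  # indices by decreasing |beta|
    selected = []
    for j in order:
        if len(selected) == k:
            break
        # admit j only if it conflicts with no already selected feature
        if all(abs(corr[j, s]) <= rho for s in selected):
            selected.append(j)
    out = np.zeros_like(beta)
    out[selected] = beta[selected]
    return out
```

For example, with two features correlated at 0.95 and a threshold of 0.9, only the one with the larger coefficient survives and the selection moves on to the next uncorrelated feature.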
Finally, this thesis proposes an explainable neural network for multiple wafer bin map classification problems. In semiconductor manufacturing, a wafer bin map (WBM) represents the results of electrical tests. In WBMs, defective dies often form specific local patterns, and such patterns are usually caused by the failure of specific processes or equipment. Thus, identifying the local patterns is crucial to finding the processes or equipment responsible for the fault. Various statistical and machine learning methods have been developed for WBM classification; however, most existing studies considered single WBMs. In this thesis, I propose an explainable neural network for multiple-WBM classification, named the hierarchical spatial-test attention network. The proposed method has a hierarchical structure that reflects the characteristics of multiple WBMs. I apply attention mechanisms at two levels, the spatial level and the test level, allowing the model to attend differently to more and less important parts when classifying WBMs. Furthermore, I propose a spatial attention probability conveyance mechanism and a test-level attention entropy penalty to improve the classification performance and interpretability of the proposed method. To demonstrate the usefulness and applicability of the proposed method, I verify it on a real-world WBM dataset, WM-811K. The results confirm that the proposed method can accurately classify defect patterns while correctly identifying the test and location of the defect patterns.
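The core of such a hierarchical attention mechanism can be illustrated independently of the full network: spatial attention weights locations within each test's map, and test-level attention then weights the per-test summaries. The sketch below uses plain NumPy with fixed scoring vectors standing in for learned parameters; the actual model would operate on learned convolutional features, and all names here are my own.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # stabilize before exponentiation
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def hierarchical_attention_pool(wbm_feats, w_spatial, w_test):
    """Two-level attention pooling over multiple wafer bin maps.

    wbm_feats: (T, L, d) array -- T tests, L spatial locations, d features.
    w_spatial, w_test: (d,) scoring vectors (stand-ins for learned params).
    Returns a pooled (d,) vector and the two attention distributions."""
    # spatial level: distribute attention over locations within each test
    s_scores = wbm_feats @ w_spatial                         # (T, L)
    s_attn = softmax(s_scores)                               # rows sum to 1
    test_repr = (s_attn[..., None] * wbm_feats).sum(axis=1)  # (T, d)
    # test level: distribute attention over the per-test summaries
    t_attn = softmax(test_repr @ w_test)                     # (T,)
    pooled = (t_attn[:, None] * test_repr).sum(axis=0)       # (d,)
    return pooled, s_attn, t_attn
```

Inspecting `s_attn` and `t_attn` is what makes the prediction explainable: they indicate which locations and which tests the model relied on.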