A study on the performance improvement of deep learning using parametric activation functions
Artificial neural networks solve problems in various fields through linear and nonlinear transformations. The activation function, which is responsible for the nonlinear transformation, is a major factor in the performance of an artificial neural network. In this study, the performance of artificial neural networks is improved by proposing a parametric activation function that is tied to the loss function.
A deep neural network with multiple hidden layers solves problems by repeatedly applying an affine transformation, which is a linear model, followed by an activation function, which is a nonlinear transformation. The Sigmoid activation function provides a nonlinear transformation and converges nicely toward fixed values, but its computational cost and the gradient vanishing problem need to be addressed. The most commonly used ReLU activation function is fast, but it is purely linear over the positive part of its domain and outputs 0 over the negative part, and both properties call for remedies. Many activation functions in current use are variously modified forms of the ReLU and Sigmoid activation functions.
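To make these characteristics concrete, the two functions and their derivatives are sketched below in Python (standard textbook definitions, not code from the thesis): the Sigmoid derivative never exceeds 0.25, which drives gradient vanishing in deep stacks, while ReLU has zero gradient on its entire negative domain.

```python
import numpy as np

def sigmoid(x):
    # Smooth, saturating nonlinearity; converges toward 0 and 1.
    return 1.0 / (1.0 + np.exp(-x))

def d_sigmoid(x):
    # Derivative peaks at 0.25, so gradients shrink layer by layer
    # (gradient vanishing), and exp() adds computation cost.
    s = sigmoid(x)
    return s * (1.0 - s)

def relu(x):
    # Cheap to compute, but linear for x > 0 and exactly 0 for x <= 0.
    return np.maximum(0.0, x)

def d_relu(x):
    # Zero gradient on the whole negative side: units there stop learning.
    return (x > 0).astype(x.dtype)
```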
Based on an analysis of the problems of conventional activation functions, a parametric activation function is proposed that expands the meaning of the activation function.
The first problem of conventional activation functions is that they perform a nonlinear transformation that is unrelated to the direction that minimizes the loss function. The second is gradient vanishing, one of the main problems of artificial neural networks.
Unlike conventional activation functions, the parametric activation function is trained in the direction that decreases the loss function: during error backpropagation, the derivative of the loss function with respect to each parameter is computed, and the parameters are optimized to reduce the loss value. This improves the performance of artificial neural networks. By introducing parameters that can transform the input data in various ways, the activation function is given degrees of freedom to change its shape.
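The abstract does not give the exact functional form of the parametric activation functions. A minimal PyTorch sketch, assuming (as described later in the text) that the parameters control the size and location of the function, might look like the following; the parameter names alpha and beta are hypothetical, and both receive loss gradients through ordinary backpropagation.

```python
import torch
import torch.nn as nn

class ParametricSigmoid(nn.Module):
    """Sigmoid with learnable scale/shift; a hypothetical form, since the
    thesis's exact parameterization is not reproduced in this abstract."""
    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(1.0))  # size (slope/scale)
        self.beta = nn.Parameter(torch.tensor(0.0))   # location (shift)

    def forward(self, x):
        # Both parameters are updated by the loss gradient during backprop.
        return torch.sigmoid(self.alpha * (x - self.beta))

class ParametricReLU(nn.Module):
    """ReLU with learnable output scale and input shift (same caveat)."""
    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(1.0))  # output scale
        self.beta = nn.Parameter(torch.tensor(0.0))   # input shift

    def forward(self, x):
        return self.alpha * torch.relu(x - self.beta)
```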
To verify the performance of the parametric activation function, deep neural networks and a convolutional neural network were built, and the parametric Sigmoid and parametric ReLU functions were applied to the XOR problem and the MNIST data set.
The first experiment applied a network with the minimum number of hidden layers and nodes to the XOR problem, the simplest nonlinear problem that cannot be solved linearly. A model with one hidden layer containing two nodes was used. In this very simple model, ReLU frequently diverged without learning, while the Sigmoid function ran stably. Moreover, the parametric Sigmoid and parametric ReLU activation functions both performed better than the conventional Sigmoid and ReLU activation functions.
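A minimal sketch of this first experiment follows, reusing the ParametricSigmoid module sketched above; the loss function, optimizer, and learning rate are assumptions, as the abstract does not specify them.

```python
import torch
import torch.nn as nn

# XOR: the simplest problem that is not linearly separable.
X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

# One hidden layer with two nodes, as in the experiment.
model = nn.Sequential(
    nn.Linear(2, 2),
    ParametricSigmoid(),  # sketched earlier
    nn.Linear(2, 1),
    nn.Sigmoid(),
)

opt = torch.optim.SGD(model.parameters(), lr=1.0)
loss_fn = nn.BCELoss()
for _ in range(5000):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()  # gradients also flow into alpha and beta
    opt.step()
```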
The second experiment measured performance on the MNIST data with deep neural network models of four different structures, varying the number of hidden layers and the number of nodes per hidden layer. The parametric activation functions were applied to four models combining 1 or 3 hidden layers with 20 or 200 nodes per hidden layer. Both the parametric Sigmoid and parametric ReLU activation functions showed better performance than the conventional functions.
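The four configurations could be built as below; the input and output sizes (flattened 28x28 images, 10 classes) follow the standard MNIST setup and are assumptions, not details from the abstract.

```python
import torch.nn as nn

def make_dnn(n_hidden, n_nodes):
    # (hidden layers, nodes per layer) drawn from {1, 3} x {20, 200}.
    layers, in_dim = [], 28 * 28
    for _ in range(n_hidden):
        layers += [nn.Linear(in_dim, n_nodes), ParametricSigmoid()]
        in_dim = n_nodes
    layers.append(nn.Linear(in_dim, 10))
    return nn.Sequential(*layers)

models = [make_dnn(h, n) for h in (1, 3) for n in (20, 200)]
```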
The third experiment measured the performance of the parametric Sigmoid and parametric ReLU activation functions on the MNIST data in a convolutional neural network, which uses activation functions in both the convolutional layers and the fully connected layer. The network had 3 convolutional layers with 16, 8, and 4 filters respectively, followed by 1 fully connected layer with 30 nodes. Performance improved as the parameters of the parametric activation function were continually optimized in the direction that minimizes the loss function.
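A sketch of this architecture follows; kernel sizes, padding, and pooling are not stated in the abstract, so 3x3 kernels with 2x2 max pooling after the first two convolutions are assumed in order to make the fully connected input size concrete.

```python
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), ParametricReLU(), nn.MaxPool2d(2),  # 28 -> 14
    nn.Conv2d(16, 8, 3, padding=1), ParametricReLU(), nn.MaxPool2d(2),  # 14 -> 7
    nn.Conv2d(8, 4, 3, padding=1), ParametricReLU(),
    nn.Flatten(),                                # 4 filters x 7 x 7 = 196 features
    nn.Linear(4 * 7 * 7, 30), ParametricReLU(),  # fully connected layer, 30 nodes
    nn.Linear(30, 10),
)
```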
Across experiments with these various models, the parametric activation function performed better than the conventional ReLU and Sigmoid activation functions by applying, at each step, a nonlinear transformation in the direction that continually decreases the loss value.
The parametric activation function is meaningful in that it contributes to improving the performance of deep learning and expands the meaning of the activation function used in existing artificial neural networks.
In deep learning, when a parametric activation function tied to the loss function is applied, an optimal model no longer requires as many hidden layers, as many nodes, or as much execution time. In addition, the gradient vanishing problem is alleviated by learning the parameters of the parametric activation function.
The parametric activation function proposed in this paper is an activation function whose parameters can change its size and location. Since an arbitrary activation function can vary in other functional characteristics besides size and position, the parametric activation function can be extended to a broader concept of activation function. Furthermore, current discussion of activation functions centers on specific functions such as Sigmoid, Tanh, and ReLU; a more general definition of the activation function is needed, along with studies of the meaning of each activation function and efforts to find new activation functions based on that definition.