Artificial neural networks solve problems in various fields through linear and non-linear transformations, and the activation function responsible for the non-linear transformation is a major factor in their performance. In this study, the performance of artificial neural networks is improved by proposing a parametric activation function that is tied to the loss function.

A deep neural network with multiple hidden layers solves problems by repeatedly applying an affine transformation, which is a linear model, and an activation function, which is a non-linear transformation. The Sigmoid activation function provides a non-linear transformation and has the desirable property of converging to a specific value, but solutions are needed for its computational cost and for the gradient vanishing problem. The most commonly used ReLU activation function is fast, but it is a purely linear transformation over the positive section of its domain and needs a remedy for the section where the output is zero. Many of the activation functions in use today are variously modified versions of the ReLU and Sigmoid activation functions.

The parametric activation function is proposed by identifying the problems of conventional activation functions and extending the meaning of the activation function. The first problem is that a conventional activation function applies a non-linear transformation that is unrelated to the direction that minimizes the loss function. The second is gradient vanishing, one of the main problems of artificial neural networks. Unlike conventional activation functions, the parametric activation function is trained in the direction that decreases the loss function: during error backpropagation, the derivative of the loss function with respect to each parameter is computed and the parameter is optimized to reduce the loss value, which improves the performance of artificial neural networks. By introducing parameters that can transform the input data into various forms, the activation function is given degrees of freedom.

To verify the performance of the parametric activation function, a deep neural network and a convolutional neural network were implemented, and the parametric Sigmoid and parametric ReLU functions were applied to the XOR problem and the MNIST data. The first experiment applied a network with the minimum number of hidden layers and nodes to the XOR problem, the simplest non-linear problem that cannot be solved linearly; the model has one hidden layer with two nodes. In such a very simple model, ReLU frequently diverged without learning, whereas the Sigmoid function ran stably. In addition, the parametric Sigmoid and parametric ReLU activation functions performed better than the conventional Sigmoid and ReLU activation functions.

The second experiment applied the MNIST data to deep neural network models with four different structures, varying the number of hidden layers and the number of nodes per hidden layer: one or three hidden layers with 20 or 200 nodes each. In all four models, both the parametric Sigmoid and parametric ReLU activation functions performed better than the conventional functions.
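The text above does not give the exact functional form of the parametric Sigmoid and parametric ReLU, so the following is only a minimal PyTorch sketch of the general mechanism, assuming one trainable parameter for the size of the activation and one for its location; the parameterization actually used in the paper may differ.

```python
# Minimal sketch: activation functions with trainable parameters that are updated
# by backpropagation together with the weights, i.e. in the direction that
# decreases the loss. The scale ("size") and shift ("location") parameters below
# are an assumed parameterization, not the paper's exact form.
import torch
import torch.nn as nn

class ParametricSigmoid(nn.Module):
    def __init__(self):
        super().__init__()
        self.scale = nn.Parameter(torch.tensor(1.0))  # size of the output
        self.shift = nn.Parameter(torch.tensor(0.0))  # location of the transition

    def forward(self, x):
        return self.scale * torch.sigmoid(x - self.shift)

class ParametricReLU(nn.Module):
    def __init__(self):
        super().__init__()
        self.scale = nn.Parameter(torch.tensor(1.0))  # size of the positive slope
        self.shift = nn.Parameter(torch.tensor(0.0))  # location of the kink

    def forward(self, x):
        return self.scale * torch.relu(x - self.shift)
```

Because scale and shift are registered as nn.Parameter, autograd computes the derivative of the loss with respect to them during backpropagation, and the optimizer updates them alongside the weights, which is the behaviour the study relies on.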
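As a further illustration, the sketch below assembles the fully connected models used in the first two experiments, reusing the ParametricSigmoid and ParametricReLU modules from the activation-function sketch: the one-hidden-layer, two-node network for the XOR problem, and the MNIST deep neural networks with one or three hidden layers of 20 or 200 nodes. The output layers are assumptions and the training loop is omitted.

```python
# Sketch of the fully connected experimental models described above; it assumes the
# ParametricSigmoid / ParametricReLU modules from the activation-function sketch.
import torch.nn as nn

def xor_model(activation_cls=ParametricSigmoid):
    # First experiment: one hidden layer with two nodes.
    return nn.Sequential(
        nn.Linear(2, 2), activation_cls(),
        nn.Linear(2, 1), nn.Sigmoid(),  # sigmoid output for the binary XOR target (assumed)
    )

def mnist_dnn(hidden_layers, hidden_nodes, activation_cls=ParametricReLU):
    # Second experiment: 1 or 3 hidden layers with 20 or 200 nodes each.
    layers, width = [nn.Flatten()], 28 * 28
    for _ in range(hidden_layers):
        layers += [nn.Linear(width, hidden_nodes), activation_cls()]
        width = hidden_nodes
    layers.append(nn.Linear(width, 10))  # 10 MNIST classes
    return nn.Sequential(*layers)

# The four model structures compared in the second experiment.
models = [mnist_dnn(l, n) for l in (1, 3) for n in (20, 200)]
```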
The third experiment evaluated the parametric Sigmoid and parametric ReLU activation functions on the MNIST data with a convolutional neural network, which uses activation functions in both the convolutional layers and the fully connected layer. The network used in the experiment has three convolutional layers with 16, 8, and 4 filters, respectively, and one fully connected layer with 30 nodes, as sketched at the end of this section. The performance improved because the parameters of the parametric activation function were continually optimized in the direction that minimizes the loss function.

Across experiments with these various models, the parametric activation function performed better than the conventional ReLU and Sigmoid activation functions by applying, at every step, a non-linear transformation in the direction that decreases the loss value. The parametric activation function is meaningful in that it contributes to the performance of deep learning and extends the meaning of the activation function used in existing artificial neural networks. When a parametric activation function tied to the loss function is applied, an optimal model no longer requires as many hidden layers, nodes, or as much execution time. In addition, learning the parameters of the parametric activation function also alleviates the gradient vanishing problem.

The parametric activation function proposed in this paper is an activation function with parameters that change its size and location. Since an arbitrary activation function can vary in other functional characteristics besides size and position, the parametric activation function can be extended to a broader concept. Moreover, although current work on activation functions focuses on specific functions such as Sigmoid, Tanh, and ReLU, a more general definition of the activation function is needed, together with studies of the various meanings of each activation function and efforts to find new activation functions based on that definition.
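As a concrete illustration of the convolutional network used in the third experiment, the sketch below places a parametric activation after every convolutional layer and after the fully connected layer. The filter counts (16, 8, 4) and the 30-node fully connected layer follow the text; the kernel sizes, pooling layers, output layer, and optimizer are assumptions made only to keep the example runnable.

```python
# Sketch of the convolutional model from the third experiment; settings not stated
# in the text (kernel size, pooling, output layer, optimizer) are assumptions.
import torch
import torch.nn as nn

class ParametricReLU(nn.Module):
    # Trainable size/location parameters, as in the earlier activation sketch.
    def __init__(self):
        super().__init__()
        self.scale = nn.Parameter(torch.tensor(1.0))
        self.shift = nn.Parameter(torch.tensor(0.0))

    def forward(self, x):
        return self.scale * torch.relu(x - self.shift)

def parametric_cnn(activation_cls=ParametricReLU):
    return nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, padding=1), activation_cls(), nn.MaxPool2d(2),
        nn.Conv2d(16, 8, kernel_size=3, padding=1), activation_cls(), nn.MaxPool2d(2),
        nn.Conv2d(8, 4, kernel_size=3, padding=1), activation_cls(),
        nn.Flatten(),
        nn.Linear(4 * 7 * 7, 30), activation_cls(),  # fully connected layer, 30 nodes
        nn.Linear(30, 10),                           # 10 MNIST classes
    )

model = parametric_cnn()
# The activation parameters appear in model.parameters(), so a single optimizer step
# updates them together with the weights in the direction that reduces the loss.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()
```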