Comparison of Harris Performance as Activation Function to Rectified Linear Unit (ReLU), Leaky ReLU, and Tanh in Convolutional Neural Network for Image Classification
DOI: https://doi.org/10.70922/4cya0y10

Keywords: Harris, Activation Function, Convolutional Neural Network (CNN), ReLU

Abstract
Activation functions (AFs) are the building blocks that allow a deep neural network (DNN) to perform image classification effectively by handling nonlinear data and extracting complex features and patterns. This paper introduces a new activation function (AF) called "Harris", a piecewise, nonlinear, nonmonotonic AF inspired by the field of photonics. The AF was integrated into a simple convolutional neural network (CNN) trained on the Canadian Institute for Advanced Research (CIFAR-10) dataset to evaluate the model's training and testing accuracy, image classification capability, and feature extraction. In image classification, Harris exceeded the target accuracies of the leaky Rectified Linear Unit (ReLU) and the hyperbolic tangent function (tanh) for α-values from −0.80 to −1.00, while its testing accuracies exceeded the target accuracies of ReLU for α-values from −0.80 to −0.95. Harris handles negative inputs, mitigating the dead-neuron problem, and extracts complex features through its feature maps, which improved the F1-scores of the CNN model in image classification.
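The exact formulation of the Harris function is not reproduced on this page, so the sketch below only illustrates the experimental setup the abstract describes: a custom, α-parameterized activation layer plugged into a simple Keras CNN for CIFAR-10, with α swept over the range discussed above. The placeholder CustomActivation (a leaky-style piecewise map that keeps negative inputs alive) is a hypothetical stand-in, not the published Harris definition.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, datasets

class CustomActivation(layers.Layer):
    """Hypothetical stand-in for a piecewise, alpha-parameterized AF such as Harris.

    The published Harris formula is not given here; this placeholder simply
    scales the negative branch by `alpha`, so negative activations are not
    zeroed out (no dead neurons) and alpha can be swept like in the abstract.
    """
    def __init__(self, alpha=-0.90, **kwargs):
        super().__init__(**kwargs)
        self.alpha = alpha

    def call(self, x):
        # Positive inputs pass through; negative inputs are scaled by alpha.
        return tf.where(x >= 0.0, x, self.alpha * x)

def build_cnn(alpha):
    # Minimal CNN for 32x32x3 CIFAR-10 images.
    return models.Sequential([
        layers.Input(shape=(32, 32, 3)),
        layers.Conv2D(32, 3, padding="same"),
        CustomActivation(alpha),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, padding="same"),
        CustomActivation(alpha),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128),
        CustomActivation(alpha),
        layers.Dense(10, activation="softmax"),
    ])

if __name__ == "__main__":
    (x_train, y_train), (x_test, y_test) = datasets.cifar10.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

    # Sweep alpha over the range discussed in the abstract.
    for alpha in [-0.80, -0.85, -0.90, -0.95, -1.00]:
        model = build_cnn(alpha)
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        model.fit(x_train, y_train, epochs=5, batch_size=64,
                  validation_data=(x_test, y_test), verbose=2)
```

Wrapping the activation in a Keras Layer subclass keeps the α sweep confined to one place: each candidate slope is passed at construction time, so the rest of the CNN architecture stays fixed across runs, which is how the ReLU, leaky ReLU, and tanh baselines can be compared on equal footing.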
References
Harris, F. (2007). Streamlining Digital Signal Processing: A Tricks of the Trade Guidebook (pp. 85–104).
Krizhevsky, A., & Hinton, G. (2009). Learning multiple layers of features from tiny images.
Nguyen, A., Pham, K., Ngo, D., Ngo, T., & Pham, L. (2021). An analysis of state-of-the-art activation functions for supervised deep neural network. In 2021 International Conference on System Science and Engineering (ICSSE) (pp. 215–220). IEEE.
Obla, S., Gong, X., Aloufi, A., Hu, P., & Takabi, D. (2020). [Article title]. IEEE Access, 8, 153098–153112.
Pattanayak, S. (2023). Pro Deep Learning with TensorFlow 2.
Roy, S. K., Manna, S., Dubey, S. R., & Chaudhuri, B. B. (2022). LiSHT: Non-parametric linearly scaled hyperbolic tangent activation function for neural networks. In International Conference on Computer Vision and Image Processing (pp. 462–476). Springer.
Segawa, R., Hayashi, H., & Fujii, S. (2020). Proposal of new activation function in deep image prior. IEEJ Transactions on Electrical and Electronic Engineering, 15, 1248–1249.
Zhu, H., Zeng, H., Liu, J., & Zhang, X. (2021). Logish: A new nonlinear nonmonotonic activation function for convolutional neural network. Neurocomputing, 458, 490–499.
Data Availability Statement
The data are made available to readers and future researchers so that the progress of this research can be continued.
License
Copyright (c) 2025 PUP Journal of Science & Technology

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.