Comparison of Harris Performance as Activation Function to Rectified Linear Unit (ReLU), Leaky ReLU, and Tanh in Convolutional Neural Network for Image Classification

Authors

L. Villacruz & M. L. B. Mendoza

DOI:

https://doi.org/10.70922/4cya0y10

Keywords:

Harris, activation function, convolutional neural network (CNN), ReLU

Abstract

Activation functions (AFs) are the building blocks that enable a deep neural network (DNN) to perform image classification effectively by handling nonlinear data and extracting complex features and patterns. This paper introduces a new activation function (AF) called “Harris,” a piecewise, nonlinear, nonmonotonic AF inspired by the field of photonics. The AF was integrated into a simple convolutional neural network (CNN) trained on the Canadian Institute for Advanced Research (CIFAR-10) dataset to evaluate the model’s training and testing accuracy, image classification capability, and feature extraction. Harris exceeded the target accuracies of the leaky Rectified Linear Unit (ReLU) and the hyperbolic tangent function (tanh) in image classification for α-values from −0.80 to −1.00, while its testing accuracies exceeded the target accuracies of ReLU for α-values from −0.80 to −0.95. Harris was able to handle negative values, mitigating the dead neuron problem, and to extract complex features through its feature maps, which improved the F1-scores of the CNN model in image classification.
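
To make the experimental setup concrete, the sketch below (not the authors' code) shows how a small Keras CNN on CIFAR-10 can be trained with interchangeable activation functions, which is the kind of comparison the abstract describes. Because the Harris formula is not reproduced on this page, `candidate_activation` is a hypothetical placeholder with a negative-slope parameter α; the architecture, optimizer, and epoch count are likewise illustrative assumptions rather than the paper's configuration.

```python
# Minimal sketch: a small Keras CNN on CIFAR-10 whose hidden-layer activation
# can be swapped among ReLU, Leaky ReLU, tanh, and a user-supplied candidate.
import tensorflow as tf
from tensorflow.keras import layers, models

def candidate_activation(x, alpha=-0.9):
    # Hypothetical stand-in for a piecewise AF with a negative-slope parameter;
    # replace with the Harris definition from the paper.
    return tf.where(x >= 0, x, alpha * tf.math.tanh(x))

def build_cnn(activation):
    """Build a small CNN whose hidden layers all use `activation`."""
    return models.Sequential([
        layers.Input(shape=(32, 32, 3)),
        layers.Conv2D(32, 3, padding="same", activation=activation),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, padding="same", activation=activation),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation=activation),
        layers.Dense(10, activation="softmax"),
    ])

if __name__ == "__main__":
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

    # Compare several activations under an identical architecture and training budget.
    activations = {
        "relu": "relu",
        "leaky_relu": lambda x: tf.nn.leaky_relu(x, alpha=0.01),
        "tanh": "tanh",
        "candidate": candidate_activation,
    }
    for name, act in activations.items():
        model = build_cnn(act)
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        model.fit(x_train, y_train, epochs=5, batch_size=64,
                  validation_data=(x_test, y_test), verbose=2)
        print(name, model.evaluate(x_test, y_test, verbose=0))
```

Keeping the architecture and training budget fixed while only the activation changes is what makes the per-activation accuracy and F1-score comparison meaningful; in the paper, the candidate's α would additionally be swept over the reported range (−0.80 to −1.00).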

Published

2025-12-04

Data Availability Statement

The data are made available to readers and future researchers so that they can continue the current progress of this research.

How to Cite

Villacruz, L., & Mendoza, M. L. B. (2025). Comparison of Harris Performance as Activation Function to Rectified Linear Unit (ReLU), Leaky ReLU, and Tanh in Convolutional Neural Network for Image Classification. PUP Journal of Science & Technology, 18(1), 1-17. https://doi.org/10.70922/4cya0y10
