TY - GEN
T1 - Analysis of The Manhattan Update Rule Algorithm
AU - Chabane, Lylia Thiziri
AU - Pham, Dang Kièn Germain
AU - Desgreys, Patricia
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022/1/1
Y1 - 2022/1/1
N2 - To overcome the limitations of conventional computing hardware, many researchers are turning to analog neural networks, which operate closer to the way the brain functions. This requires hardware-friendly (HW-friendly) training algorithms such as the Manhattan Update Rule (MUR), a variant of the Back-Propagation (BP) algorithm suited to hardware implementation. Although many studies use this algorithm in their hardware implementations of neural networks, none offers an upstream analysis of it. In this article, we propose a methodology for analyzing the Manhattan algorithm that allows the weight-update value ∆ω to be chosen so as to reach at least 90% accuracy. We also answer questions raised by the state of the art: we show that the MUR can match BP in accuracy (at most a 3.1% difference) and in convergence speed. We show how the dependence of the number of epochs on the weight initialization relates to the network size and the dataset. Finally, we give guidelines for choosing between the batch and stochastic versions of the MUR according to their convergence speed.
AB - To overcome the limitations of conventional computing hardware, many researchers are turning to analog neural networks, which operate closer to the way the brain functions. This requires hardware-friendly (HW-friendly) training algorithms such as the Manhattan Update Rule (MUR), a variant of the Back-Propagation (BP) algorithm suited to hardware implementation. Although many studies use this algorithm in their hardware implementations of neural networks, none offers an upstream analysis of it. In this article, we propose a methodology for analyzing the Manhattan algorithm that allows the weight-update value ∆ω to be chosen so as to reach at least 90% accuracy. We also answer questions raised by the state of the art: we show that the MUR can match BP in accuracy (at most a 3.1% difference) and in convergence speed. We show how the dependence of the number of epochs on the weight initialization relates to the network size and the dataset. Finally, we give guidelines for choosing between the batch and stochastic versions of the MUR according to their convergence speed.
KW - Analog Neural Network
KW - Hardware Friendly Algorithm
KW - Manhattan Update Rule
UR - https://www.scopus.com/pages/publications/85145348985
U2 - 10.1109/ICECS202256217.2022.9971047
DO - 10.1109/ICECS202256217.2022.9971047
M3 - Conference contribution
AN - SCOPUS:85145348985
T3 - ICECS 2022 - 29th IEEE International Conference on Electronics, Circuits and Systems, Proceedings
BT - ICECS 2022 - 29th IEEE International Conference on Electronics, Circuits and Systems, Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 29th IEEE International Conference on Electronics, Circuits and Systems, ICECS 2022
Y2 - 24 October 2022 through 26 October 2022
ER -