EXTENSIVE ERROR DERIVATIVE REVIEW OF LSTM MODELS WITH SIGN LANGUAGE INTERPRETATION

H. Kar, P. Viswanathan

Abstract

LSTM models are central to sign language translation systems, where translation quality depends on how error propagates through the model during training. Unlike traditional backpropagation through time, in which errors decay or accumulate exponentially across timesteps, LSTMs maintain near-constant error flow through their memory cells and thereby limit error propagation. This paper investigates error flow in bidirectional, hierarchical, and probabilistic (Bayesian) long short-term memory (LSTM) models. Bidirectional LSTMs reduce truncation errors by processing sequences in both directions, while hierarchical LSTMs employ multitask learning to anticipate inputs and outputs, reliably minimizing compounded errors. Model accuracy is further improved by optimizing gradients and parameters. This research offers a thorough review of LSTM models published from 2021 to 2024, examining their effectiveness in sign language recognition systems in terms of both accuracy and loss.
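The contrast the abstract draws between exponential error decay in plain recurrent networks and the near-constant error flow through an LSTM's cell state can be sketched numerically. The following is a minimal illustration, not the paper's method: the recurrent weight matrix `W`, the hidden size, and the constant forget-gate value `f = 0.98` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 50                                      # number of timesteps to backpropagate through
W = rng.normal(scale=0.1, size=(8, 8))      # assumed small recurrent weight matrix

# Vanilla RNN: the gradient w.r.t. an early hidden state is a product of
# per-step Jacobians D_t @ W^T, whose norm shrinks (or grows) exponentially.
grad = np.eye(8)
norms = []
for t in range(T):
    h = rng.normal(size=8)                  # stand-in hidden pre-activation
    D = np.diag(1.0 - np.tanh(h) ** 2)      # derivative of tanh nonlinearity
    grad = D @ W.T @ grad
    norms.append(np.linalg.norm(grad))

# LSTM cell-state path: the gradient along the memory cell is (to first order)
# a product of forget-gate activations, which can stay close to 1.
f = 0.98                                    # illustrative forget-gate value
lstm_norms = [f ** (t + 1) for t in range(T)]

print(norms[-1], lstm_norms[-1])
```

After 50 steps the vanilla-RNN gradient norm has collapsed toward zero, while the LSTM cell-state path retains a substantial fraction of the error signal, which is the mechanism the abstract credits for reduced error propagation.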

Keywords

RNN, LSTM, Bidirectional LSTM, Bayesian LSTM, Hierarchical LSTM, Parametric.