Multi-Class Image Classification Model for American Sign Language Alphabet Using TensorFlow Take 3

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: This project aims to construct a predictive model using a TensorFlow convolutional neural network (CNN) and document the end-to-end steps using a template. The American Sign Language Alphabet Dataset is a multi-class classification problem where we attempt to predict one of several (more than two) possible outcomes.

INTRODUCTION: The dataset contains over 22,000 images of the American Sign Language alphabet, separated into 29 folders that represent the different classes. The research team collected these images to investigate ways of reducing the communication gap between sign-language users and non-sign-language users.
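The images can be loaded directly from the class folders with the standard Keras dataset utility. The sketch below is only illustrative; the directory path, image size, batch size, and 80/20 validation split are assumptions and not details taken from the original report.

```python
import tensorflow as tf

IMAGE_SIZE = (200, 200)   # assumed input resolution
BATCH_SIZE = 32           # assumed batch size

# Hypothetical path to the 29 class folders of the ASL alphabet dataset
DATA_DIR = "asl_alphabet_train"

train_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR,
    validation_split=0.2,   # assumed 80/20 train/validation split
    subset="training",
    seed=42,
    image_size=IMAGE_SIZE,
    batch_size=BATCH_SIZE,
)
val_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR,
    validation_split=0.2,
    subset="validation",
    seed=42,
    image_size=IMAGE_SIZE,
    batch_size=BATCH_SIZE,
)
```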

ANALYSIS: The ResNet50V2 model achieved an accuracy score of 98.45% after five epochs on the training dataset. When we applied the model to the validation dataset, it achieved an accuracy score of 84.54%.
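For reference, a minimal sketch of a ResNet50V2-based classifier for the 29 classes is shown below. Only the ResNet50V2 backbone, the 29 classes, and the five training epochs come from the report; the frozen-base transfer-learning setup, pooling layer, optimizer, and learning rate are assumptions.

```python
import tensorflow as tf

NUM_CLASSES = 29  # 29 class folders in the dataset

# Pre-trained ResNet50V2 backbone without its classification head
base = tf.keras.applications.ResNet50V2(
    include_top=False, weights="imagenet", input_shape=(200, 200, 3)
)
base.trainable = False  # assumed: train only the new classification head

inputs = tf.keras.Input(shape=(200, 200, 3))
x = tf.keras.applications.resnet_v2.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # assumed optimizer settings
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# history = model.fit(train_ds, validation_data=val_ds, epochs=5)
```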

CONCLUSION: In this iteration, the TensorFlow ResNet50V2 CNN model appeared suitable for modeling this dataset.

Dataset ML Model: Multi-class image classification

Dataset Used: American Sign Language Alphabet Dataset

Dataset Reference: https://www.kaggle.com/datasets/debashishsau/aslamerican-sign-language-aplhabet-dataset

One source of potential performance benchmarks: https://www.kaggle.com/datasets/debashishsau/aslamerican-sign-language-aplhabet-dataset/code

The HTML-formatted report can be found here on GitHub.