Month: December 2021

Image Object Detection Model for Random Sample Images Using TensorFlow Take 6

Template Credit: Adapted from an Object Detection tutorial on TensorFlow.org.

Additional Notes: I adapted this workflow from the TensorFlow Object Detection tutorial on TensorFlow.org. I plan to develop this workflow into a reusable script for future object detection projects.

SUMMARY: This project aims to construct an object detection model using a TensorFlow-based neural network and document the end-to-end steps using a template.

This iteration uses the TF2 Mask R-CNN Inception ResNet V2 1024×1024 object detection model from TensorFlow Hub to test some sample images. The model is based on the Mask R-CNN architecture and was trained on the COCO 2017 dataset with training images scaled to 1024×1024.

Images Used: 1. Airport runway; 2. Tanzania Safari; 3. Streets with Cars; 4. Public Library
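For context, the sketch below shows one way to load this Mask R-CNN model from TensorFlow Hub and run it on a single sample image. The Hub handle, image file name, and 0.5 confidence threshold are illustrative assumptions rather than the exact settings used in the project notebook; the same pattern applies to each of the sample images listed above.

```python
# Minimal sketch: load a Mask R-CNN Inception ResNet V2 detector from
# TensorFlow Hub and run it on one sample image. The Hub handle, file name,
# and 0.5 score threshold are assumptions for illustration.
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
from PIL import Image

MODEL_HANDLE = "https://tfhub.dev/tensorflow/mask_rcnn/inception_resnet_v2_1024x1024/1"
detector = hub.load(MODEL_HANDLE)

# The TF2 detection models expect a uint8 batch of shape [1, height, width, 3].
image = np.array(Image.open("airport_runway.jpg").convert("RGB"))
image_tensor = tf.convert_to_tensor(image, dtype=tf.uint8)[tf.newaxis, ...]

results = detector(image_tensor)

# Keep only the detections above the confidence threshold.
scores = results["detection_scores"][0].numpy()
classes = results["detection_classes"][0].numpy().astype(int)
boxes = results["detection_boxes"][0].numpy()
for score, cls, box in zip(scores, classes, boxes):
    if score >= 0.5:
        print(f"class={cls}  score={score:.2f}  box={np.round(box, 3)}")
```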

Dataset ML Model: Image Object Detection using TensorFlow Hub Models

Additional References: https://tfhub.dev/s?module-type=image-object-detection

The HTML-formatted report can be found here on GitHub.

Multi-Class Model for Crop Mapping with Fused Optical and Radar Data Using TensorFlow

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: This project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The Crop Mapping with Fused Optical Radar Data dataset presents a multi-class modeling situation where we attempt to predict one of several (more than two) possible outcomes.

INTRODUCTION: This dataset combines optical and PolSAR (polarimetric synthetic aperture radar) remote sensing images for cropland classification. The images were collected by RapidEye satellites (optical) and the Unmanned Aerial Vehicle Synthetic Aperture Radar (UAVSAR) system (radar) over an agricultural region near Winnipeg, Manitoba, Canada, in 2012. There are two sets of 49 radar features and two sets of 38 optical features, one set of each for the 5 July and 14 July 2012 acquisition dates. The dataset covers seven crop type classes: 1-Corn; 2-Peas; 3-Canola; 4-Soybeans; 5-Oats; 6-Wheat; and 7-Broadleaf.

ANALYSIS: The preliminary TensorFlow models achieved an average accuracy benchmark of 0.9942 after training for 20 epochs. When we applied the final model to the test dataset, it achieved an accuracy score of 0.9951.

CONCLUSION: In this iteration, the simple TensorFlow model appeared to be a suitable algorithm for modeling this dataset.
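As a point of reference, a minimal sketch of the kind of simple TensorFlow/Keras classifier described above might look like the following. The file name, label column, layer widths, and batch size are assumptions for illustration; only the seven-class output and the 20-epoch training run come from the write-up.

```python
# Minimal sketch of a simple TensorFlow/Keras multi-class classifier for the
# crop mapping dataset. File name, "label" column, and layer sizes are
# illustrative assumptions.
import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Load the fused optical/radar features; the label column holds crop codes 1-7.
df = pd.read_csv("WinnipegDataset.txt")
X = df.drop(columns=["label"]).values
y = df["label"].values - 1  # shift crop codes to 0-6 for sparse categorical loss

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Standardize the numeric features before feeding them to the network.
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(X_train.shape[1],)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(7, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_train, y_train, epochs=20, batch_size=256, validation_split=0.2)
print(model.evaluate(X_test, y_test, verbose=0))
```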

Dataset Used: Crop Mapping with Fused Optical Radar Data

Dataset ML Model: Multi-class classification with numerical attributes

Dataset Reference: https://archive-beta.ics.uci.edu/ml/datasets/crop+mapping+using+fused+optical+radar+data+set

The HTML-formatted report can be found here on GitHub.

Multi-Class Model for Crop Mapping with Fused Optical and Radar Data Using TensorFlow Decision Forests

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: This project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The Crop Mapping with Fused Optical Radar Data dataset presents a multi-class modeling situation where we attempt to predict one of several (more than two) possible outcomes.

INTRODUCTION: This dataset combines optical and PolSAR (polarimetric synthetic aperture radar) remote sensing images for cropland classification. The images were collected by RapidEye satellites (optical) and the Unmanned Aerial Vehicle Synthetic Aperture Radar (UAVSAR) system (radar) over an agricultural region near Winnipeg, Manitoba, Canada, in 2012. There are two sets of 49 radar features and two sets of 38 optical features, one set of each for the 5 July and 14 July 2012 acquisition dates. The dataset covers seven crop type classes: 1-Corn; 2-Peas; 3-Canola; 4-Soybeans; 5-Oats; 6-Wheat; and 7-Broadleaf.

ANALYSIS: The preliminary Gradient Boosted Trees model achieved an accuracy benchmark of 0.9976 on the validation dataset. The final model scored 0.9999 on the validation dataset. When we applied the finalized model to the test dataset, it achieved an accuracy score of 0.9996.

CONCLUSION: In this iteration, the TensorFlow Decision Forests model appeared to be a suitable algorithm for modeling this dataset.
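For illustration, a minimal TensorFlow Decision Forests sketch along these lines could look like the following; the file name, label column, and 80/20 split are assumptions rather than the project's exact setup.

```python
# Minimal sketch: a TensorFlow Decision Forests Gradient Boosted Trees model
# for the crop mapping dataset. File name, "label" column, and split are
# illustrative assumptions.
import pandas as pd
import tensorflow_decision_forests as tfdf
from sklearn.model_selection import train_test_split

df = pd.read_csv("WinnipegDataset.txt")
df["label"] = df["label"] - 1  # shift the 1-7 crop codes to 0-6 for classification

train_df, test_df = train_test_split(df, test_size=0.2, random_state=42)

# Convert the pandas frames into TensorFlow datasets keyed on the label column.
train_ds = tfdf.keras.pd_dataframe_to_tf_dataset(train_df, label="label")
test_ds = tfdf.keras.pd_dataframe_to_tf_dataset(test_df, label="label")

# Gradient Boosted Trees with default hyperparameters; tuning can follow later.
model = tfdf.keras.GradientBoostedTreesModel()
model.fit(train_ds)

model.compile(metrics=["accuracy"])
print(model.evaluate(test_ds, return_dict=True))
```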

Dataset Used: Crop Mapping with Fused Optical Radar Data

Dataset ML Model: Multi-class classification with numerical attributes

Dataset Reference: https://archive-beta.ics.uci.edu/ml/datasets/crop+mapping+using+fused+optical+radar+data+set

The HTML-formatted report can be found here on GitHub.

Multi-Class Model for Crop Mapping with Fused Optical and Radar Data Using XGBoost

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: This project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The Crop Mapping with Fused Optical Radar Data dataset presents a multi-class modeling situation where we attempt to predict one of several (more than two) possible outcomes.

INTRODUCTION: This dataset combines optical and PolSAR (polarimetric synthetic aperture radar) remote sensing images for cropland classification. The images were collected by RapidEye satellites (optical) and the Unmanned Aerial Vehicle Synthetic Aperture Radar (UAVSAR) system (radar) over an agricultural region near Winnipeg, Manitoba, Canada, in 2012. There are two sets of 49 radar features and two sets of 38 optical features, one set of each for the 5 July and 14 July 2012 acquisition dates. The dataset covers seven crop type classes: 1-Corn; 2-Peas; 3-Canola; 4-Soybeans; 5-Oats; 6-Wheat; and 7-Broadleaf.

ANALYSIS: The preliminary XGBoost model achieved an accuracy benchmark of 0.9977. After a series of tuning trials, the final model scored 0.9982 on the training dataset. When we processed the test dataset with the final model, it achieved an accuracy score of 0.9982.

CONCLUSION: In this iteration, the XGBoost model appeared to be a suitable algorithm for modeling this dataset.
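For illustration, a minimal XGBoost sketch for this multi-class problem could look like the following; the file name, label column, and hyperparameter values are assumptions rather than the tuned settings from the report.

```python
# Minimal sketch: an XGBoost multi-class classifier for the crop mapping
# dataset. File name, "label" column, and hyperparameters are illustrative
# assumptions.
import pandas as pd
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

df = pd.read_csv("WinnipegDataset.txt")
X = df.drop(columns=["label"])
y = df["label"] - 1  # shift the 1-7 crop codes to 0-6 for XGBoost

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

model = XGBClassifier(
    objective="multi:softprob",
    n_estimators=300,
    max_depth=6,
    learning_rate=0.1,
    eval_metric="mlogloss",
)
model.fit(X_train, y_train)

predictions = model.predict(X_test)
print("Test accuracy:", accuracy_score(y_test, predictions))
```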

Dataset Used: Crop Mapping with Fused Optical Radar Data

Dataset ML Model: Multi-class classification with numerical attributes

Dataset Reference: https://archive-beta.ics.uci.edu/ml/datasets/crop+mapping+using+fused+optical+radar+data+set

The HTML-formatted report can be found here on GitHub.

Multi-Class Model for Crop Mapping with Fused Optical and Radar Data Using Scikit-learn

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: This project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The Crop Mapping with Fused Optical Radar Data dataset presents a multi-class modeling situation where we attempt to predict one of several (more than two) possible outcomes.

INTRODUCTION: This dataset combines optical and PolSAR (polarimetric synthetic aperture radar) remote sensing images for cropland classification. The images were collected by RapidEye satellites (optical) and the Unmanned Aerial Vehicle Synthetic Aperture Radar (UAVSAR) system (radar) over an agricultural region near Winnipeg, Manitoba, Canada, in 2012. There are two sets of 49 radar features and two sets of 38 optical features, one set of each for the 5 July and 14 July 2012 acquisition dates. The dataset covers seven crop type classes: 1-Corn; 2-Peas; 3-Canola; 4-Soybeans; 5-Oats; 6-Wheat; and 7-Broadleaf.

ANALYSIS: The machine learning algorithms achieved an average accuracy benchmark of 0.9908 on the training dataset. We selected Extra Trees as the final model because it scored 0.9975 on the training dataset. When we processed the test dataset with the final model, it achieved an accuracy score of 0.9976.

CONCLUSION: In this iteration, the Extra Trees model appeared to be a suitable algorithm for modeling this dataset.
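For illustration, the spot-check-then-finalize flow described above might look like the sketch below in scikit-learn; the candidate algorithms, file name, label column, and model settings are assumptions for illustration only.

```python
# Minimal sketch: spot-check several scikit-learn classifiers, then finalize
# Extra Trees. File name, "label" column, and model settings are illustrative
# assumptions.
import pandas as pd
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_score, train_test_split

df = pd.read_csv("WinnipegDataset.txt")
X = df.drop(columns=["label"])
y = df["label"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Spot-check several algorithms with cross-validation on the training data.
candidates = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "RandomForest": RandomForestClassifier(n_estimators=200, random_state=42),
    "ExtraTrees": ExtraTreesClassifier(n_estimators=200, random_state=42),
}
for name, model in candidates.items():
    scores = cross_val_score(model, X_train, y_train, cv=5, scoring="accuracy")
    print(f"{name}: mean CV accuracy {scores.mean():.4f}")

# Fit the selected Extra Trees model and score it on the hold-out test set.
final_model = ExtraTreesClassifier(n_estimators=200, random_state=42)
final_model.fit(X_train, y_train)
print("Test accuracy:", accuracy_score(y_test, final_model.predict(X_test)))
```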

Dataset Used: Crop Mapping with Fused Optical Radar Data

Dataset ML Model: Multi-class classification with numerical attributes

Dataset Reference: https://archive-beta.ics.uci.edu/ml/datasets/crop+mapping+using+fused+optical+radar+data+set

The HTML-formatted report can be found here on GitHub.