Tag: binary class

Binary Class Tabular Model for Kaggle Playground Series Season 3 Episode 2 Using Python and XGBoost

SUMMARY: The project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The Kaggle Playground Series Season 3 Episode 2 dataset is a binary-class modeling situation where we attempt to predict one of two possible outcomes.

INTRODUCTION: Kaggle wants to provide an approachable environment for people who are relatively new to their data science journey. Since January 2021, they have hosted playground-style competitions to give the Kaggle community a variety of reasonably lightweight challenges that can be used to learn and sharpen skills in different aspects of machine learning and data science. The dataset for this competition was generated from a deep learning model trained on the Stroke Prediction Dataset. Feature distributions are close to, but different from, the original.

ANALYSIS: The preliminary XGBoost model achieved a ROC/AUC benchmark of 0.8772 on the training dataset. When we processed the test dataset with the final model, the model achieved a ROC/AUC score of 0.8730.
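For readers who want a starting point, the sketch below shows one way such an XGBoost workflow could look in Python. The file name, the "stroke" target column, and the hyperparameters are assumptions for illustration, not the exact setup behind the scores reported above.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Assumed competition file name, id column, and "stroke" target column
train = pd.read_csv("train.csv")
X = pd.get_dummies(train.drop(columns=["id", "stroke"]))  # one-hot encode categoricals
y = train["stroke"]

X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Illustrative hyperparameters only, not the tuned values behind the reported scores
model = XGBClassifier(n_estimators=500, learning_rate=0.05, max_depth=4, random_state=42)
model.fit(X_train, y_train)

# Score with the competition metric: area under the ROC curve
valid_pred = model.predict_proba(X_valid)[:, 1]
print(f"Validation ROC/AUC: {roc_auc_score(y_valid, valid_pred):.4f}")
```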

CONCLUSION: In this iteration, the XGBoost model appeared to be a suitable algorithm for modeling this dataset.

Dataset Used: Playground Series Season 3, Episode 2

Dataset ML Model: Binary-Class classification with numerical and categorical features

Dataset Reference: https://www.kaggle.com/competitions/playground-series-s3e2

One source of potential performance benchmarks: https://www.kaggle.com/competitions/playground-series-s3e2/leaderboard

The HTML-formatted report can be found here on GitHub.

Binary Class Tabular Model for Kaggle Playground Series Season 3 Episode 2 Using Python and TensorFlow Decision Forests

SUMMARY: The project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The Kaggle Playground Series Season 3 Episode 2 dataset is a binary-class modeling situation where we attempt to predict one of two possible outcomes.

INTRODUCTION: Kaggle wants to provide an approachable environment for people who are relatively new to their data science journey. Since January 2021, they have hosted playground-style competitions to give the Kaggle community a variety of reasonably lightweight challenges that can be used to learn and sharpen skills in different aspects of machine learning and data science. The dataset for this competition was generated from a deep learning model trained on the Stroke Prediction Dataset. Feature distributions are close to, but different from, the original.

ANALYSIS: The Random Forest model performed best on the training dataset, achieving a ROC/AUC benchmark of 0.9914. When we processed the test dataset with the final model, the model achieved a ROC/AUC score of 0.8731.
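As a rough illustration, the sketch below shows one way a TensorFlow Decision Forests Random Forest could be trained and scored for this competition. The file name and the "stroke" label column are assumptions, and the library defaults shown are not necessarily the settings behind the scores above.

```python
import pandas as pd
import tensorflow as tf
import tensorflow_decision_forests as tfdf
from sklearn.model_selection import train_test_split

# Assumed competition file name and label column
train = pd.read_csv("train.csv").drop(columns=["id"])
train_df, valid_df = train_test_split(train, test_size=0.2, random_state=42)

# TF-DF consumes numerical and categorical columns without manual encoding
train_ds = tfdf.keras.pd_dataframe_to_tf_dataset(train_df, label="stroke")
valid_ds = tfdf.keras.pd_dataframe_to_tf_dataset(valid_df, label="stroke")

model = tfdf.keras.RandomForestModel()
model.compile(metrics=[tf.keras.metrics.AUC()])
model.fit(train_ds)

# Report validation metrics, including the area under the ROC curve
print(model.evaluate(valid_ds, return_dict=True))
```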

CONCLUSION: In this iteration, the Random Forest model appeared to be a suitable algorithm for modeling this dataset.

Dataset Used: Playground Series Season 3, Episode 2

Dataset ML Model: Binary-Class classification with numerical and categorical features

Dataset Reference: https://www.kaggle.com/competitions/playground-series-s3e2

One source of potential performance benchmarks: https://www.kaggle.com/competitions/playground-series-s3e2/leaderboard

The HTML-formatted report can be found here on GitHub.

Binary Class Tabular Model for Kaggle Playground Series Season 3 Episode 2 Using Python and Scikit-Learn

SUMMARY: The project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The Kaggle Playground Series Season 3 Episode 2 dataset is a binary-class modeling situation where we attempt to predict one of two possible outcomes.

INTRODUCTION: Kaggle wants to provide an approachable environment for people who are relatively new to their data science journey. Since January 2021, they have hosted playground-style competitions to give the Kaggle community a variety of reasonably lightweight challenges that can be used to learn and sharpen skills in different aspects of machine learning and data science. The dataset for this competition was generated from a deep learning model trained on the Stroke Prediction Dataset. Feature distributions are close to, but different from, the original.

ANALYSIS: The average performance of the machine learning algorithms achieved a ROC/AUC benchmark of 0.7836 using the training dataset. Furthermore, we selected Logistic Regression as the final model as it processed the training dataset with a ROC/AUC score of 0.8735. When we processed the test dataset with the final model, the model achieved a ROC/AUC score of 0.8662.
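The sketch below illustrates the spot-check-then-select workflow described above using scikit-learn pipelines. The file name, the "stroke" target column, and the candidate algorithm list are assumptions for illustration; the template may spot-check a different set of algorithms.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.tree import DecisionTreeClassifier

# Assumed competition file name, id column, and "stroke" target column
train = pd.read_csv("train.csv")
X = train.drop(columns=["id", "stroke"])
y = train["stroke"]

# Scale numerical features and one-hot encode categorical features
numeric_cols = X.select_dtypes(include="number").columns
categorical_cols = X.select_dtypes(exclude="number").columns
preprocess = ColumnTransformer([
    ("num", StandardScaler(), numeric_cols),
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical_cols),
])

# Illustrative candidate list only
candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(random_state=42),
    "random_forest": RandomForestClassifier(random_state=42),
}

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
for name, estimator in candidates.items():
    pipeline = Pipeline([("prep", preprocess), ("model", estimator)])
    scores = cross_val_score(pipeline, X, y, scoring="roc_auc", cv=cv)
    print(f"{name}: mean ROC/AUC = {scores.mean():.4f}")
```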

CONCLUSION: In this iteration, the Logistic Regression model appeared to be a suitable algorithm for modeling this dataset.

Dataset Used: Playground Series Season 3, Episode 2

Dataset ML Model: Binary-Class classification with numerical and categorical features

Dataset Reference: https://www.kaggle.com/competitions/playground-series-s3e2

One source of potential performance benchmarks: https://www.kaggle.com/competitions/playground-series-s3e2/leaderboard

The HTML-formatted report can be found here on GitHub.

Binary-Class Model for KDD Cup 1998 Using Python and Scikit-Learn Take 6

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: The project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The KDD Cup 1998 dataset is a binary-class modeling situation where we attempt to predict one of two possible outcomes.

INTRODUCTION: This is the data set used for The Second International Knowledge Discovery and Data Mining Tools Competition, held in conjunction with KDD-98, The Fourth International Conference on Knowledge Discovery and Data Mining. The modeling task is a binary classification problem where the goal is to estimate the likelihood of donation from a direct mailing campaign.

In the Take1 iteration, we built and tested models using a minimal set of basic features. The model will serve as the baseline result as we add more features in future iterations.

In the Take2 iteration, we built and tested models with additional features from third-party data sources.

In the Take3 iteration, we built and tested models with additional features from the US Census data.

In the Take4 iteration, we built and tested models with additional features from the promotion history data.

In the Take5 iteration, we built and tested models with additional features from the giving history data.

In this iteration, we will build and test models with additional features engineered from the giving history features.
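As a rough illustration of the kind of engineered giving-history features this iteration describes, the sketch below derives a few ratio and range columns. The column names follow the KDD Cup 1998 data dictionary (RAMNTALL, NGIFTALL, LASTGIFT, AVGGIFT, MAXRAMNT, MINRAMNT), but the specific derived features used in the report are assumptions here.

```python
import numpy as np
import pandas as pd

def engineer_giving_features(df: pd.DataFrame) -> pd.DataFrame:
    """Derive illustrative ratio and range features from lifetime giving columns."""
    out = df.copy()
    # Average dollars per gift over the donor's lifetime
    out["GIFT_AMT_PER_GIFT"] = out["RAMNTALL"] / out["NGIFTALL"].replace(0, np.nan)
    # How the most recent gift compares with the donor's average gift
    out["LAST_VS_AVG_GIFT"] = out["LASTGIFT"] / out["AVGGIFT"].replace(0, np.nan)
    # Spread between the largest and smallest gifts on record
    out["GIFT_RANGE"] = out["MAXRAMNT"] - out["MINRAMNT"]
    return out.fillna({"GIFT_AMT_PER_GIFT": 0, "LAST_VS_AVG_GIFT": 0})

# learning = pd.read_csv("cup98LRN.txt")        # assumed file name from the KDD Cup archive
# learning = engineer_giving_features(learning)
```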

ANALYSIS: In the Take1 iteration, the average performance of the machine learning algorithms achieved a ROC/AUC benchmark of 70.99% using the training dataset. Furthermore, we selected Random Forest as the final model as it processed the training dataset with a ROC/AUC score of 77.23%. When we processed the test dataset with the final model, the model achieved a ROC/AUC score of 50.42%.

In the Take2 iteration, the average performance of the machine learning algorithms achieved a ROC/AUC benchmark of 71.92% using the training dataset. Furthermore, we selected Extra Trees as the final model as it processed the training dataset with a ROC/AUC score of 79.79%. When we processed the test dataset with the final model, the model achieved a ROC/AUC score of 50.02%.

In the Take3 iteration, the average performance of the machine learning algorithms achieved a ROC/AUC benchmark of 72.72% using the training dataset. Furthermore, we selected Extra Trees as the final model as it processed the training dataset with a ROC/AUC score of 85.02%. When we processed the test dataset with the final model, the model achieved a ROC/AUC score of 50.20%.

In the Take4 iteration, the average performance of the machine learning algorithms achieved a ROC/AUC benchmark of 72.56% using the training dataset. Furthermore, we selected Extra Trees as the final model as it processed the training dataset with a ROC/AUC score of 82.28%. When we processed the test dataset with the final model, the model achieved a ROC/AUC score of 50.18%.

In the Take5 iteration, the average performance of the machine learning algorithms achieved a ROC/AUC benchmark of 72.80% using the training dataset. Furthermore, we selected Extra Trees as the final model as it processed the training dataset with a ROC/AUC score of 82.39%. When we processed the test dataset with the final model, the model achieved a ROC/AUC score of 50.27%.

In this iteration, the average performance of the machine learning algorithms achieved a ROC/AUC benchmark of 71.34% using the training dataset. Furthermore, we selected Extra Trees as the final model as it processed the training dataset with a ROC/AUC score of 82.69%. When we processed the test dataset with the final model, the model achieved a ROC/AUC score of 50.03%.
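The evaluation pattern behind these numbers, a cross-validated ROC/AUC benchmark on the training split followed by a single score on the held-out test split, could be sketched as follows. The estimator settings are assumptions, not the tuned configuration used in the report.

```python
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold, cross_val_score

def evaluate_extra_trees(X_train, y_train, X_test, y_test):
    """Cross-validated ROC/AUC on the training split plus a single holdout score."""
    model = ExtraTreesClassifier(n_estimators=100, random_state=42)

    # Training-side benchmark: mean ROC/AUC across stratified folds
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
    cv_scores = cross_val_score(model, X_train, y_train, scoring="roc_auc", cv=cv)
    print(f"Training ROC/AUC (CV mean): {cv_scores.mean():.4f}")

    # Test-side check: fit on the full training split, then score the holdout once
    model.fit(X_train, y_train)
    test_pred = model.predict_proba(X_test)[:, 1]
    print(f"Test ROC/AUC: {roc_auc_score(y_test, test_pred):.4f}")
```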

CONCLUSION: In this iteration, the Extra Trees model appeared to be suitable for modeling this dataset. However, we should explore the possibilities of using more features from the dataset to model this problem.

Dataset Used: KDD Cup 1998 Dataset

Dataset ML Model: Binary classification with numerical and categorical features

Dataset Reference: https://kdd.org/kdd-cup/view/kdd-cup-1998/Data

The HTML-formatted report can be found here on GitHub.

Binary-Class Model for KDD Cup 1998 Using Python and Scikit-Learn Take 5

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: The project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The KDD Cup 1998 dataset is a binary-class modeling situation where we attempt to predict one of two possible outcomes.

INTRODUCTION: This is the data set used for The Second International Knowledge Discovery and Data Mining Tools Competition, held in conjunction with KDD-98, The Fourth International Conference on Knowledge Discovery and Data Mining. The modeling task is a binary classification problem where the goal is to estimate the likelihood of donation from a direct mailing campaign.

In the Take1 iteration, we built and tested models using a minimal set of basic features. The model will serve as the baseline result as we add more features in future iterations.

In the Take2 iteration, we built and tested models with additional features from third-party data sources.

In the Take3 iteration, we built and tested models with additional features from the US Census data.

In the Take4 iteration, we built and tested models with additional features from the promotion history data.

In this iteration, we will build and test models with additional features from the giving history data.
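As a rough illustration, the sketch below extends a baseline feature list with giving-history columns from the KDD Cup 1998 data dictionary. The specific baseline columns and the subset of giving-history fields shown here are assumptions for illustration, not the exact selection used in the report.

```python
import pandas as pd

# Illustrative column choices; the report may use a different subset
BASELINE_FEATURES = ["AGE", "GENDER", "INCOME", "HOMEOWNR"]
GIVING_HISTORY_FEATURES = ["RAMNTALL", "NGIFTALL", "MINRAMNT",
                           "MAXRAMNT", "LASTGIFT", "AVGGIFT"]

def build_feature_frame(learning: pd.DataFrame) -> pd.DataFrame:
    """Select baseline plus giving-history columns and one-hot encode categoricals."""
    selected = learning[BASELINE_FEATURES + GIVING_HISTORY_FEATURES]
    return pd.get_dummies(selected)

# learning = pd.read_csv("cup98LRN.txt")   # assumed file name from the KDD Cup archive
# X = build_feature_frame(learning)
# y = learning["TARGET_B"]                 # binary donation indicator
```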

ANALYSIS: In the Take1 iteration, the average performance of the machine learning algorithms achieved a ROC/AUC benchmark of 70.99% using the training dataset. Furthermore, we selected Random Forest as the final model as it processed the training dataset with a ROC/AUC score of 77.23%. When we processed the test dataset with the final model, the model achieved a ROC/AUC score of 50.42%.

In the Take2 iteration, the average performance of the machine learning algorithms achieved a ROC/AUC benchmark of 71.92% using the training dataset. Furthermore, we selected Extra Trees as the final model as it processed the training dataset with a ROC/AUC score of 79.79%. When we processed the test dataset with the final model, the model achieved a ROC/AUC score of 50.02%.

In the Take3 iteration, the average performance of the machine learning algorithms achieved a ROC/AUC benchmark of 72.72% using the training dataset. Furthermore, we selected Extra Trees as the final model as it processed the training dataset with a ROC/AUC score of 85.02%. When we processed the test dataset with the final model, the model achieved a ROC/AUC score of 50.20%.

In the Take4 iteration, the average performance of the machine learning algorithms achieved a ROC/AUC benchmark of 72.56% using the training dataset. Furthermore, we selected Extra Trees as the final model as it processed the training dataset with a ROC/AUC score of 82.28%. When we processed the test dataset with the final model, the model achieved a ROC/AUC score of 50.18%.

In this iteration, the average performance of the machine learning algorithms achieved a ROC/AUC benchmark of 72.80% using the training dataset. Furthermore, we selected Extra Trees as the final model as it processed the training dataset with a ROC/AUC score of 82.39%. When we processed the test dataset with the final model, the model achieved a ROC/AUC score of 50.27%.

CONCLUSION: In this iteration, the Extra Trees model appeared to be suitable for modeling this dataset. However, we should explore the possibilities of using more features from the dataset to model this problem.

Dataset Used: KDD Cup 1998 Dataset

Dataset ML Model: Binary classification with numerical and categorical features

Dataset Reference: https://kdd.org/kdd-cup/view/kdd-cup-1998/Data

The HTML-formatted report can be found here on GitHub.