Binary-Class Model for KDD Cup 1998 Using Python and Scikit-Learn Take 6

Template Credit: Adapted from a template made available by Dr. Jason Brownlee of Machine Learning Mastery.

SUMMARY: The project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The KDD Cup 1998 dataset is a binary-class modeling situation where we attempt to predict one of two possible outcomes.

INTRODUCTION: This is the dataset used for the Second International Knowledge Discovery and Data Mining Tools Competition, held in conjunction with KDD-98, the Fourth International Conference on Knowledge Discovery and Data Mining. The modeling task is a binary classification problem where the goal is to estimate the likelihood of donation in response to a direct mailing campaign.
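
A minimal sketch of how the task can be framed with pandas. The file name (cup98LRN.txt) and the target columns (TARGET_B for the binary donation flag, TARGET_D for the donation amount) follow the public KDD Cup 1998 data dictionary; adjust them if your local copy differs.

```python
import pandas as pd

# Load the raw learning file; low_memory=False avoids mixed-dtype
# warnings on this very wide file (assumed name: cup98LRN.txt).
df = pd.read_csv("cup98LRN.txt", low_memory=False)

# TARGET_B flags whether the person donated in response to the mailing.
# TARGET_D (the donation amount) is dropped to avoid target leakage.
y = df["TARGET_B"]
X = df.drop(columns=["TARGET_B", "TARGET_D"])

print(X.shape)
print(y.value_counts(normalize=True))  # the classes are heavily imbalanced
```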

In the Take1 iteration, we built and tested models using a minimal set of basic features. Those models serve as the baseline results as we add more features in subsequent iterations.

In the Take2 iteration, we built and tested models with additional features from third-party data sources.

In the Take3 iteration, we built and tested models with additional features from the US Census data.

In the Take4 iteration, we built and tested models with additional features from the promotion history data.

In the Take5 iteration, we built and tested models with additional features from the giving history data.

In this iteration, we will build and test models with additional features engineered from the existing giving history attributes, as sketched below.
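
The exact derived features used in this take appear in the full report; the sketch below only illustrates the general idea, using giving-history columns from the KDD Cup 1998 data dictionary (RAMNTALL, NGIFTALL, LASTGIFT, AVGGIFT). The derived column names are hypothetical.

```python
import numpy as np
import pandas as pd

def engineer_giving_features(X: pd.DataFrame) -> pd.DataFrame:
    """Illustrative giving-history ratios; not the exact feature set used here."""
    X = X.copy()
    # Average dollars per gift; NaN where the donor has no recorded gifts.
    X["GIFT_PER_DONATION"] = X["RAMNTALL"] / X["NGIFTALL"].replace(0, np.nan)
    # How the latest gift compares with the donor's historical average.
    X["LAST_TO_AVG_GIFT"] = X["LASTGIFT"] / X["AVGGIFT"].replace(0, np.nan)
    return X
```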

ANALYSIS: In the Take1 iteration, the average performance of the machine learning algorithms achieved a ROC/AUC benchmark of 70.99% on the training dataset. We selected Random Forest as the final model, which achieved a final ROC/AUC score of 77.23% on the training dataset. When we applied the final model to the test dataset, it achieved a ROC/AUC score of 50.42%.

In the Take2 iteration, the average performance of the machine learning algorithms achieved a ROC/AUC benchmark of 71.92% on the training dataset. We selected Extra Trees as the final model, which achieved a final ROC/AUC score of 79.79% on the training dataset. When we applied the final model to the test dataset, it achieved a ROC/AUC score of 50.02%.

In the Take3 iteration, the average performance of the machine learning algorithms achieved a ROC/AUC benchmark of 72.72% on the training dataset. We selected Extra Trees as the final model, which achieved a final ROC/AUC score of 85.02% on the training dataset. When we applied the final model to the test dataset, it achieved a ROC/AUC score of 50.20%.

In the Take4 iteration, the average performance of the machine learning algorithms achieved a ROC/AUC benchmark of 72.56% on the training dataset. We selected Extra Trees as the final model, which achieved a final ROC/AUC score of 82.28% on the training dataset. When we applied the final model to the test dataset, it achieved a ROC/AUC score of 50.18%.

In the Take5 iteration, the average performance of the machine learning algorithms achieved a ROC/AUC benchmark of 72.80% on the training dataset. We selected Extra Trees as the final model, which achieved a final ROC/AUC score of 82.39% on the training dataset. When we applied the final model to the test dataset, it achieved a ROC/AUC score of 50.27%.

In this iteration, the average performance of the machine learning algorithms achieved a ROC/AUC benchmark of 71.34% on the training dataset. We selected Extra Trees as the final model, which achieved a final ROC/AUC score of 82.69% on the training dataset. When we applied the final model to the test dataset, it achieved a ROC/AUC score of 50.03%.
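
A minimal sketch of the evaluation loop behind these numbers: spot-check several algorithms with stratified cross-validation on ROC/AUC, then fit the selected Extra Trees model. It assumes X_train and y_train are the preprocessed training split; the algorithm list and hyperparameters are illustrative rather than the exact configuration used in this project.

```python
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier

# Candidate algorithms to spot-check (illustrative subset).
models = {
    "LR": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(n_estimators=100, random_state=7),
    "ET": ExtraTreesClassifier(n_estimators=100, random_state=7),
}
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=7)

# Average the per-fold ROC/AUC to get a benchmark for each algorithm.
for name, model in models.items():
    scores = cross_val_score(model, X_train, y_train, cv=cv, scoring="roc_auc")
    print(f"{name}: {scores.mean():.4f} ({scores.std():.4f})")

# Refit the selected model on the full training split before scoring the test set.
final_model = ExtraTreesClassifier(n_estimators=100, random_state=7)
final_model.fit(X_train, y_train)
```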

CONCLUSION: In this iteration, the Extra Trees model appeared to be suitable for modeling this dataset. However, we should explore the possibility of using more features from the dataset to model this problem.

Dataset Used: KDD Cup 1998 Dataset

Dataset ML Model: Binary classification with numerical and categorical features

Dataset Reference: https://kdd.org/kdd-cup/view/kdd-cup-1998/Data

The HTML-formatted report can be found here on GitHub.