Anders Ericsson and Robert Pool on Peak, Part 2

In their book, Peak: Secrets from the New Science of Expertise, Anders Ericsson and Robert Pool share their research findings and recommendations to help us achieve expert-level performance in whatever we would like to do.

These are some of my favorite recommendations from reading the book.

Chapter 2. Harnessing Adaptability

The human body is incredibly adaptable. It is not just the skeletal muscles but also the heart, the lungs, the circulatory system, and more – everything that goes into physical strength and stamina. From several research studies, we are now learning that the brain has a similar degree and variety of adaptability.

Our brains are also not hardwired like a computer. We now know that the brain reroutes some of its neurons to maximize the use of its available capacity. If we practice something enough, our brains will repurpose neurons to help with the task, even if those neurons already have another job to do.

Our body’s desire for homeostasis can be harnessed to drive changes. If we push it hard enough and long enough, it will respond by changing in ways that make that push easier. This explains the importance of staying just outside our comfort zone: we need to keep pushing our bodies so that the compensatory changes keep coming. However, if we go too far outside our comfort zone, we risk injuring ourselves and setting ourselves back.

In the brain, the greater the challenge, the greater the changes – up to a point. Recent studies have shown that learning a new skill is much more effective at triggering structural changes in the brain than simply continuing a skill that one has already learned. However, pushing our brain too hard for too long can lead to burnout and ineffective learning.

The fact that the human brain and body respond to challenges by developing new abilities underlies the effectiveness of purposeful and deliberate practice. Regular training leads to changes in the parts of the brain that are challenged by the exercise. The brain adapts to these challenges by rewiring itself in ways that increase its ability to carry out the required functions.

Ultimately, the cognitive and physical changes caused by training require upkeep. Stop training, and they start to go away. Once the ongoing training and maintenance stop, the brain changes that resulted from the original challenge begin to disappear.

The Value an Entrepreneur Creates

(From a writer I respect, Seth Godin)

If you borrow money or sell equity, you need to build something more valuable than your labor alone. Here are some of the key pillars where that value lives:

  • Customer traction
  • Permission
  • Distribution
  • Network effects
  • The smallest viable audience

Customer traction matters most. With each passing day, would more people miss you if you were gone? Would more customers refuse to switch just to save a few dollars? Would more organizations build their future around your work?

Permission is the privilege of delivering anticipated, personal, and relevant messages to the people who want to hear from you. It is not a legal construct but an emotional one. Who wants to hear what you have to say?

Distribution is a practical measure of a brand. How much shelf space do you have, both mental and physical?

Network effects are built into your product or service. Does it get better when I tell my friends and use it with them? Does that actually happen, or do you merely hope it does?

The smallest viable audience is the cornerstone of all of this. Do you know exactly who your product is for? Have they agreed to take part?

A startup exists primarily to find and build assets like these.

Binary Class Tabular Model for Kaggle Playground Series Season 3 Episode 7 Using Python and AutoKeras

SUMMARY: The project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The Kaggle Playground Series Season 3 Episode 7 dataset is a binary-class modeling situation where we attempt to predict one of two possible outcomes.

INTRODUCTION: Kaggle wants to provide an approachable environment for relatively new people in their data science journey. Since January 2021, they have hosted playground-style competitions to give the Kaggle community a variety of reasonably lightweight challenges that can be used to learn and sharpen skills in different aspects of machine learning and data science. The dataset for this competition was generated from a deep learning model trained on the Reservation Cancellation Prediction dataset. Feature distributions are close to but different from the original.

ANALYSIS: After 84 trials, the best AutoKeras model achieved a ROC/AUC score of 0.8613 on the training dataset. When we processed the test dataset with the final model, it achieved a ROC/AUC score of 0.8447.
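Both numbers are ROC/AUC scores. As a reminder of what the metric measures, here is a minimal sketch using only the standard library; the labels and scores are toy values, not the competition data:

```python
from itertools import product

def roc_auc(labels, scores):
    """ROC/AUC as the probability that a randomly chosen positive
    example scores higher than a randomly chosen negative one
    (ties count as half a win)."""
    positives = [s for y, s in zip(labels, scores) if y == 1]
    negatives = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p, n in product(positives, negatives))
    return wins / (len(positives) * len(negatives))

# Two positives (scores 0.35, 0.8) vs. two negatives (0.1, 0.4):
# three of the four pairs are ranked correctly.
print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.75
```

A score of 0.5 is no better than random ranking, and 1.0 is perfect separation, which is why the drop from 0.8613 (training) to 0.8447 (test) indicates only mild overfitting.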

CONCLUSION: In this iteration, AutoKeras appeared to be a suitable tool for modeling this dataset.

Dataset Used: Playground Series Season 3, Episode 7

Dataset ML Model: Binary-Class classification with numerical features

Dataset Reference: https://www.kaggle.com/competitions/playground-series-s3e7

One source of potential performance benchmarks: https://www.kaggle.com/competitions/playground-series-s3e7/leaderboard

The HTML formatted report can be found here on GitHub.

Binary Class Tabular Model for Kaggle Playground Series Season 3 Episode 7 Using Python and TensorFlow

SUMMARY: The project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The Kaggle Playground Series Season 3 Episode 7 dataset is a binary-class modeling situation where we attempt to predict one of two possible outcomes.

INTRODUCTION: Kaggle wants to provide an approachable environment for relatively new people in their data science journey. Since January 2021, they have hosted playground-style competitions to give the Kaggle community a variety of reasonably lightweight challenges that can be used to learn and sharpen skills in different aspects of machine learning and data science. The dataset for this competition was generated from a deep learning model trained on the Reservation Cancellation Prediction dataset. Feature distributions are close to but different from the original.

ANALYSIS: The preliminary TensorFlow model achieved a ROC/AUC benchmark of 0.9101 after training. When we processed the test dataset with the final model, it achieved a ROC/AUC score of 0.8798.
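A model of this kind can be sketched as a small feed-forward Keras network compiled with an AUC metric. This is only an illustrative shape, not the project's actual architecture, and the feature count is a made-up placeholder:

```python
import numpy as np
import tensorflow as tf

NUM_FEATURES = 17  # hypothetical; the real count comes from the dataset

# Small fully connected binary classifier: sigmoid output gives the
# predicted probability of a reservation cancellation.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NUM_FEATURES,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="roc_auc")])

# Sanity check on random inputs: one probability per row, in [0, 1].
probs = model.predict(np.random.rand(8, NUM_FEATURES), verbose=0)
print(probs.shape)  # (8, 1)
```

Tracking `roc_auc` as a Keras metric during `fit` is what makes training and test benchmarks like 0.9101 and 0.8798 directly comparable.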

CONCLUSION: In this iteration, TensorFlow appeared to be a suitable framework for modeling this dataset.

Dataset Used: Playground Series Season 3, Episode 7

Dataset ML Model: Binary-Class classification with numerical features

Dataset Reference: https://www.kaggle.com/competitions/playground-series-s3e7

One source of potential performance benchmarks: https://www.kaggle.com/competitions/playground-series-s3e7/leaderboard

The HTML formatted report can be found here on GitHub.

Binary Class Tabular Model for Kaggle Playground Series Season 3 Episode 7 Using Python and XGBoost

SUMMARY: The project aims to construct a predictive model using various machine learning algorithms and document the end-to-end steps using a template. The Kaggle Playground Series Season 3 Episode 7 dataset is a binary-class modeling situation where we attempt to predict one of two possible outcomes.

INTRODUCTION: Kaggle wants to provide an approachable environment for relatively new people in their data science journey. Since January 2021, they have hosted playground-style competitions to give the Kaggle community a variety of reasonably lightweight challenges that can be used to learn and sharpen skills in different aspects of machine learning and data science. The dataset for this competition was generated from a deep learning model trained on the Reservation Cancellation Prediction dataset. Feature distributions are close to but different from the original.

ANALYSIS: The preliminary XGBoost model achieved a ROC/AUC benchmark of 0.9099 after training. When we processed the test dataset with the final model, it achieved a ROC/AUC score of 0.8973.

CONCLUSION: In this iteration, the XGBoost model appeared to be a suitable algorithm for modeling this dataset.

Dataset Used: Playground Series Season 3, Episode 7

Dataset ML Model: Binary-Class classification with numerical features

Dataset Reference: https://www.kaggle.com/competitions/playground-series-s3e7

One source of potential performance benchmarks: https://www.kaggle.com/competitions/playground-series-s3e7/leaderboard

The HTML formatted report can be found here on GitHub.