Decision Tree vs Random Forest | Supervised Learning | Day (6/45) | A2Z ML | Mohd Saqib
If you haven't read my previous blog yet, you can find it here: Prev
In this blog, we will delve deeper into Decision Trees and Random Forests, and explore some common questions that arise during their implementation.

- How do I choose the root node for my tree?
- How do I determine the best attribute to split on at each internal node?
- What should be the criteria for stopping tree growth?
- How does a Random Forest improve on a single decision tree?
Index
- Decision Tree
- Information Gain and Gini Index
- ID3, C4.5, and CART
- Random Forest
Decision Tree
Choosing the root node:
Information gain and Gini index are two commonly used measures for evaluating the quality of a split in a decision tree. They are used to determine which feature should be selected as the root or internal node.
- Information Gain:
Information gain measures the decrease in entropy after a dataset is split on an attribute. Entropy, H(S) = -Σᵢ pᵢ log₂(pᵢ), measures the impurity of the dataset, where pᵢ is the proportion of samples belonging to class i. The information gain of an attribute is the parent's entropy minus the weighted average entropy of the subsets produced by the split, and the attribute with the highest information gain is chosen as the root or internal node.
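To make this concrete, here is a minimal sketch in Python that computes entropy and the information gain of a categorical split. The function names and toy data are illustrative, not from any particular library:

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of a label array: H(S) = -sum(p_i * log2(p_i))."""
    _, counts = np.unique(labels, return_counts=True)
    probs = counts / counts.sum()
    return -np.sum(probs * np.log2(probs))

def information_gain(labels, feature_values):
    """Parent entropy minus the weighted average entropy of the
    subsets produced by splitting on a categorical feature."""
    parent_entropy = entropy(labels)
    weighted_child_entropy = 0.0
    for value in np.unique(feature_values):
        subset = labels[feature_values == value]
        weighted_child_entropy += len(subset) / len(labels) * entropy(subset)
    return parent_entropy - weighted_child_entropy

# Toy example: a balanced binary target split on a two-valued feature
y = np.array([1, 1, 1, 0, 0, 0, 1, 0])
feature = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(information_gain(y, feature))  # ~0.189: a weakly informative split
```

In a full decision tree implementation, this computation would be repeated for every candidate attribute at every node, and the attribute with the highest information gain would be selected for the split.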