
Decision Tree vs Random Forest | Supervised Learning | Day (6/45) | A2Z ML | Mohd Saqib

Mohd Saqib
14 min read · Jan 27, 2023

Read my previous blog if you haven't already: Prev
In this blog, we will delve deeper into Decision Trees and Random Forests, and explore some common questions that arise during their implementation.

  1. How do I choose the root node for my tree?
  2. How do I determine the best attribute to split on at each internal node?
  3. What should be the criteria for stopping tree growth?
  4. How does a Random Forest improve on a single decision tree?

Index

  1. Decision Tree
    - Information Gain and Gini Index
    - ID3, C4.5, and CART
  2. Random Forest

Decision Tree

Choosing the root node:

Information gain and Gini index are two commonly used measures for evaluating the quality of a split in a decision tree. They are used to determine which feature should be selected as the root or internal node.

  1. Information Gain:

Information gain measures the decrease in entropy after a dataset is split on an attribute. Entropy is a measure of impurity in the dataset. The attribute with the highest information gain is chosen as the root or internal node.


