Decision Tree Classification Guide


There are several machine learning algorithms for regression and classification problems, and choosing the best and most efficient algorithm for the given dataset is a key step in building a good machine learning model.

One algorithm that works well for both classification (categorical targets) and regression is the decision tree.

Decision trees closely mimic the way humans reason when making a decision, which makes them easy to understand.

The logic behind a decision tree is easy to follow because it has a flowchart-like, tree-shaped structure, which makes it simple to visualise and to trace how each prediction is made.


Table of Contents

  1. What is a decision tree?
  2. Elements of a decision tree
  3. How to build decision trees from scratch
  4. How does the decision tree algorithm work?
  5. Decision trees and random forests
  6. Advantages of the decision tree
  7. Disadvantages of the decision tree
  8. Python code implementation

1. What is a decision tree?

A decision tree is a supervised machine learning algorithm used for both classification and regression tasks. As its name suggests, it is structured like a tree made of nodes whose branches depend on several factors: the data is split into branches like these until a stopping threshold is reached. A decision tree consists of a root node, child nodes and leaf nodes.

Let's understand the decision tree method with a real-life scenario.

Imagine that you play soccer every Sunday and you always invite a friend to play with you. Sometimes your friend comes, and sometimes he doesn't.

Whether he comes or not depends on numerous things, such as the weather, the temperature, the wind and fatigue. You start taking all of these characteristics into account and tracking them along with your friend's decision to come and play or not.

You can use this data to predict whether your friend will come to play soccer or not, and a decision tree is exactly the technique for doing so. This is what the decision tree would look like once built:

[Figure: example decision tree for the 'play soccer' scenario, with splits on features such as outlook, humidity and wind]

2. Elements of a decision tree

Each decision tree consists of the following elements:

a) Nodes

b) Edges

c) Root

d) Leaves

a) Nodes: A node is a point where the tree splits according to the value of some attribute/feature of the dataset.

b) Edges: An edge directs the outcome of a split to the next node. In the figure above there are nodes for features such as outlook, humidity and wind, and there is an edge for each possible value of each of those attributes/features.

c) Root: This is the node where the first division takes place.

d) Leaves: These are the terminal nodes that predict the outcome of the decision tree.

3. How to build decision trees from scratch?

When building a decision tree, the main task is to select the best attribute, out of all the features in the dataset, for the root node and for the subnodes. This selection is carried out with the help of a technique known as an Attribute Selection Measure (ASM).

With the help of an ASM, we can easily select the best feature for each node of the decision tree.

There are two techniques for ASM:

a) Information gain

b) Gini index

a) Information gain:

1. Information gain is the measurement of the change in entropy after the dataset is split on an attribute.

2. It indicates how much information a feature/attribute provides about the class.

3. The node splits, and therefore the construction of the decision tree, are guided by the value of the information gain.

4. The decision tree always tries to maximise the information gain, and the node/attribute with the highest information gain is split first. Information gain can be calculated using the following formula:

Information Gain = Entropy(S) − [Weighted Avg × Entropy(each feature)]

Entropy: Entropy signifies the randomness in the dataset. It is defined as a metric for measuring impurity and can be calculated as:

Entropy(S) = −P(yes) log₂ P(yes) − P(no) log₂ P(no)

Where,

S= Total number of samples

P(yes)= probability of yes

P(no)= probability of no.
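As a quick illustration, here is a small, hedged sketch in Python that computes entropy and information gain for a made-up split; the counts (9 "yes" and 5 "no" days, split by a hypothetical Wind attribute) are purely for illustration:

import numpy as np

def entropy(p_yes, p_no):
    # Entropy of a binary node; terms with probability 0 contribute nothing
    return -sum(p * np.log2(p) for p in (p_yes, p_no) if p > 0)

# Toy example: 14 days in total, 9 "yes" and 5 "no"
entropy_s = entropy(9/14, 5/14)                       # ≈ 0.940

# Hypothetical split on "Wind": Weak -> 8 days (6 yes, 2 no), Strong -> 6 days (3 yes, 3 no)
entropy_weak = entropy(6/8, 2/8)                      # ≈ 0.811
entropy_strong = entropy(3/6, 3/6)                    # = 1.000

weighted_avg = (8/14) * entropy_weak + (6/14) * entropy_strong
information_gain = entropy_s - weighted_avg           # ≈ 0.048

print(round(entropy_s, 3), round(information_gain, 3))

A larger information gain means the attribute separates the classes more cleanly, so it is a better candidate for the split.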

b) Gini Index:

The Gini index is another measure of impurity/purity, used when creating a decision tree with the CART (Classification and Regression Tree) algorithm.

An attribute with a low Gini index value should be preferred over one with a high Gini index value.

The Gini index produces only binary splits, and the CART algorithm uses it to create those binary splits.

The Gini index can be calculated using the formula below:

Gini Index = 1 − ∑ⱼ Pⱼ²

where Pⱼ stands for the probability of class j.
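As with information gain, the Gini index is easy to compute directly. The following is a minimal sketch; the label lists are made up for illustration:

import numpy as np

def gini_index(labels):
    # Gini impurity: 1 minus the sum of squared class probabilities
    _, counts = np.unique(labels, return_counts=True)
    probabilities = counts / counts.sum()
    return 1 - np.sum(probabilities ** 2)

print(gini_index(["yes", "yes", "yes", "yes"]))   # 0.0   (pure node)
print(gini_index(["yes", "yes", "no", "no"]))     # 0.5   (maximally impure binary node)
print(gini_index(["yes", "yes", "yes", "no"]))    # 0.375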

4. How Does the Decision Tree Algorithm Work?

The basic idea behind any decision tree algorithm is as follows:

1. Select the best feature using an Attribute Selection Measure (ASM) to split the records.

2. Make that attribute/feature a decision node and break the dataset into smaller subsets.

3. Start building the tree by repeating this process recursively for each child node until one of the following conditions is met (illustrated in the sketch after this list):

a) All the tuples belong to the same attribute value.

b) There are no attributes remaining.

c) There are no instances remaining.
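To make these steps and stopping conditions concrete, here is a minimal, hedged sketch of an ID3-style recursive tree builder that uses information gain as its ASM. The helper names and the tiny dataset are made up for illustration; this is not a production implementation:

import numpy as np
import pandas as pd

def entropy_of(labels):
    # Entropy of a set of class labels
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(data, feature, target):
    # Entropy before the split minus the weighted entropy after the split
    total_entropy = entropy_of(data[target])
    weighted = sum(
        (len(subset) / len(data)) * entropy_of(subset[target])
        for _, subset in data.groupby(feature)
    )
    return total_entropy - weighted

def build_tree(data, features, target):
    labels = data[target]
    # Condition a): all tuples share the same label -> return a leaf
    if labels.nunique() == 1:
        return labels.iloc[0]
    # Condition b): no attributes remaining -> majority-vote leaf
    # (condition c), no instances remaining, cannot occur here because
    # groupby only ever yields non-empty subsets)
    if not features:
        return labels.mode()[0]
    # Step 1: pick the best feature with the ASM (information gain here)
    best = max(features, key=lambda f: information_gain(data, f, target))
    # Step 2: make it a decision node and split the dataset on its values
    node = {best: {}}
    remaining = [f for f in features if f != best]
    # Step 3: recurse on each child subset
    for value, subset in data.groupby(best):
        node[best][value] = build_tree(subset, remaining, target)
    return node

# Tiny made-up dataset, for illustration only
toy = pd.DataFrame({
    "Outlook": ["Sunny", "Sunny", "Overcast", "Rain", "Rain"],
    "Wind":    ["Weak", "Strong", "Weak", "Weak", "Strong"],
    "Play":    ["No", "No", "Yes", "Yes", "No"],
})
print(build_tree(toy, ["Outlook", "Wind"], "Play"))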

5. Decision Trees and Random Forests

Decision trees and random forests are both tree-based methods used in machine learning.

Decision trees are machine learning models that make predictions by working through the features of the dataset, one by one.

Random forests, on the other hand, are a collection of decision trees that are grouped and trained together, each one using a random subset of the features in the given dataset.

Instead of relying on just one decision tree, the random forest takes the prediction from every tree and gives the final output based on the majority vote of those predictions. In other words, a random forest can be defined as a collection of multiple decision trees.
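In scikit-learn the two ideas map directly onto DecisionTreeClassifier and RandomForestClassifier. The following hedged sketch compares them on the built-in iris dataset, which is unrelated to the kyphosis data used later and is chosen here only because it ships with scikit-learn:

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# A single decision tree ...
tree = DecisionTreeClassifier(random_state=42).fit(X_train, y_train)

# ... versus a random forest, i.e. many trees voting on the final prediction
forest = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_train, y_train)

print("Decision tree accuracy:", tree.score(X_test, y_test))
print("Random forest accuracy:", forest.score(X_test, y_test))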


6. Advantages of the Decision Tree

1. It is simple to implement and follows a flowchart-like structure that resembles human decision-making.

2. It proves to be very useful for decision-related problems.

3. It helps to find all of the possible outcomes for a given problem.

4. There is very little need for data cleaning compared to other machine learning algorithms.

5. It handles both numerical and categorical values (see the note after this list).
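One caveat worth noting for point 5: the decision tree algorithm itself can work with categorical values, but scikit-learn's implementation expects numeric input, so categorical columns are usually encoded first. A minimal sketch with made-up data:

import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Made-up frame with one categorical and one numerical feature
df = pd.DataFrame({
    "Outlook":     ["Sunny", "Rain", "Overcast", "Rain"],
    "Temperature": [30, 18, 22, 15],
    "Play":        ["No", "Yes", "Yes", "No"],
})

# One-hot encode the categorical column before fitting
X = pd.get_dummies(df[["Outlook", "Temperature"]], columns=["Outlook"])
y = df["Play"]

model = DecisionTreeClassifier().fit(X, y)
print(model.predict(X))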

7. Disadvantages of the Decision Tree

1. A decision tree with too many layers can sometimes become extremely complex.

2. It may result in overfitting (which can be resolved using the Random Forest algorithm; see the sketch after this list).

3. The computational complexity of the decision tree increases with the number of class labels.
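Besides switching to a random forest, overfitting in a single tree is commonly reduced by limiting its growth, for example with the max_depth parameter. A hedged sketch on scikit-learn's built-in breast-cancer dataset (the dataset and depth value are illustrative only):

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# An unconstrained tree can grow deep enough to memorise the training data
deep_tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# Limiting the depth (a simple form of pre-pruning) usually generalises better
shallow_tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

print("Unconstrained train/test accuracy:", deep_tree.score(X_train, y_train), deep_tree.score(X_test, y_test))
print("max_depth=3   train/test accuracy:", shallow_tree.score(X_train, y_train), shallow_tree.score(X_test, y_test))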

8. Python Code Implementation

# Data analysis and visualization libraries

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

#Loading Data

raw_data = pd.read_csv('kyphosis.csv')
raw_data.columns
# Output: Index(['Kyphosis', 'Age', 'Number', 'Start'], dtype='object')

#Exploratory data analysis

raw_data.info()
sns.pairplot(raw_data, hue="Kyphosis")

# Divide the data set into training data and test data

from sklearn.model_selection import train_test_split
x = raw_data.drop('Kyphosis', axis = 1)
y = raw_data['Kyphosis']
x_training_data, x_test_data, y_training_data, y_test_data = train_test_split(x, y, test_size = 0.3)  # hold out 30% of the rows for testing

#Train the decision tree model

from sklearn.tree import DecisionTreeClassifier
model = DecisionTreeClassifier()
model.fit(x_training_data, y_training_data)
predictions = model.predict(x_test_data)

# Measure performance of the decision tree model

from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
print(classification_report(y_test_data, predictions))
print(confusion_matrix(y_test_data, predictions))
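As an optional extra step (not part of the original walkthrough), the fitted tree can also be visualised with scikit-learn's plot_tree; this sketch assumes the model, x and plt objects defined in the steps above:

# Visualize the trained tree (optional)

from sklearn import tree

plt.figure(figsize=(12, 8))
tree.plot_tree(model, feature_names=list(x.columns), class_names=list(model.classes_), filled=True)
plt.show()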

With this I end this blog.
Hi everyone, Namaste.
My name is Pranshu Sharma and I'm a data science enthusiast.


Thank you very much for taking your valuable time to read this blog. Feel free to point out any errors (after all, I am still a learner) and leave your comments or feedback.

Dhanyvaad !!
Feedback:
Email: [email protected]

The media shown in this DataPeaker article is not the property of DataPeaker and is used at the Author's discretion.
