Previously, making a person understand a particular topic or decision required long paragraphs of explanation. We have since moved forward to the concept of the decision tree, which is a simple way to visualize a decision.
A decision tree is a decision support tool that uses a tree-like graph or model of decisions and their possible consequences, including chance event outcomes, resource costs, and utility. It is a flowchart-like structure in which each internal node represents a "test" on an attribute, each branch represents an outcome of the test, and each leaf node represents a class label. The paths from the root to the leaves represent classification rules. Tree-based learning has proved to be simple, as it gives a clear picture of the decision, and decision trees are commonly used because they offer good accuracy. They have influenced a wide area of machine learning, covering both classification and regression.
[Figure: a decision tree]
As the name suggests, this flowchart is drawn like an upside-down tree, with its root at the top. Some of the terms used for decision trees are root node, splitting, decision node, leaf/terminal node, branch/sub-tree, and parent and child node. Decision trees have a natural "if … then … else …" construction that makes them fit easily into a programmatic structure.
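To illustrate that "if … then … else …" construction, here is a minimal sketch of a decision tree written as nested conditionals. The features (`outlook`, `windy`), thresholds, and class labels are hypothetical examples chosen for illustration, not rules from any trained model.

```python
def will_play_cricket(outlook: str, windy: bool) -> str:
    """Walk from the root test down to a leaf (a class label)."""
    if outlook == "sunny":      # root node: test on 'outlook'
        if windy:               # decision node: test on 'windy'
            return "NO"         # leaf node: class label
        return "YES"
    elif outlook == "rainy":
        return "NO"
    return "YES"                # e.g. overcast

print(will_play_cricket("sunny", False))  # YES
print(will_play_cricket("rainy", False))  # NO
```

Each path from the outermost `if` to a `return` corresponds to one classification rule from root to leaf.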
An example is shown in the figure below.
[Figure: an example decision-tree flowchart]
A decision tree is often a generalization of experts' experience, a means of sharing knowledge of a particular process. For example, before there were scalable machine learning algorithms, the credit-scoring task in the banking sector was solved by experts: the decision to grant a loan was made on the basis of empirically derived rules that could be represented as a decision tree.
[Figure: loan-approval rules represented as a decision tree]
Decision trees are likewise well suited to classification problems in which attributes or features are systematically checked to determine a final category. For instance, a decision tree could be used effectively to determine the species of an animal.
How Does a Decision Tree Work?
A decision tree is a type of supervised learning algorithm (it has a predefined target variable) that is mostly used for classification problems. It works for both categorical and continuous input and output variables. In this technique, we split the population or sample into two or more homogeneous sets (sub-populations) based on the most significant splitter/differentiator among the input variables.
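One common way to measure "most significant splitter" is Gini impurity: a split is good if each resulting subset is close to pure (one class). Below is a minimal pure-Python sketch; the toy samples and the single binary feature are made up for illustration.

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a set of class labels: 1 - sum(p_k^2)."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def weighted_gini(left, right):
    """Impurity of a candidate split, weighted by subset sizes."""
    n = len(left) + len(right)
    return len(left) / n * gini(left) + len(right) / n * gini(right)

# Toy data: one binary feature per sample, paired with a YES/NO label.
samples = [(0, "NO"), (0, "NO"), (1, "YES"), (1, "YES"), (1, "NO")]
left  = [y for x, y in samples if x == 0]   # subset with feature value 0
right = [y for x, y in samples if x == 1]   # subset with feature value 1
print(round(weighted_gini(left, right), 3))  # → 0.267
```

The tree-building algorithm evaluates every candidate split this way and picks the one with the lowest weighted impurity, then repeats recursively on each subset.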
Types of Decision Trees:
Categorical Variable Decision Tree: A decision tree with a categorical target variable is called a categorical variable decision tree. E.g., in the student problem above, the target variable was "the student will play cricket or not", i.e. YES or NO.
Continuous Variable Decision Tree: A decision tree with a continuous target variable is called a continuous variable decision tree.
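The practical difference between the two types shows up at the leaves: with a categorical target a leaf predicts the majority class, while with a continuous target a leaf predicts an average value. A small sketch, assuming targets are either all numeric or all labels:

```python
from collections import Counter
from statistics import mean

def leaf_value(targets):
    """Leaf prediction for a set of training targets that reach the leaf."""
    if all(isinstance(t, (int, float)) for t in targets):
        return mean(targets)                      # continuous target: regression leaf
    return Counter(targets).most_common(1)[0][0]  # categorical target: classification leaf

print(leaf_value(["YES", "YES", "NO"]))  # YES
print(leaf_value([3.0, 5.0, 4.0]))       # 4.0
```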
Assumptions made while creating a Decision Tree:
In the beginning, the whole training set is considered as the root.
Feature values are preferred to be categorical; if the values are continuous, they are discretized prior to building the model.
Records are distributed recursively on the basis of attribute values.
The order in which attributes are placed as the root or as internal nodes of the tree is decided using some statistical approach.
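The discretization step mentioned above can be as simple as mapping a continuous value into one of a few ranges. A minimal sketch, where the cut-points (18 and 65, suggesting child/adult/senior age bands) are hypothetical:

```python
def discretize(value, bins):
    """Map a continuous value to a categorical bin label.

    bins: sorted list of upper boundaries; values above the last
    boundary fall into the final bin.
    """
    for i, upper in enumerate(bins):
        if value <= upper:
            return f"bin_{i}"
    return f"bin_{len(bins)}"

print(discretize(12, [18, 65]))  # bin_0
print(discretize(40, [18, 65]))  # bin_1
print(discretize(70, [18, 65]))  # bin_2
```

After this step, the recursive record distribution can treat the feature like any other categorical attribute.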
Advantages of Decision Tree:
1. Simple to understand: Decision tree output is easy to interpret, even for people from a non-analytical background. No statistical knowledge is required to read and interpret a tree. Its graphical representation is intuitive, and users can easily relate it to their hypotheses.
2. Useful in data exploration: The decision tree is one of the fastest ways to identify the most significant variables and the relationships between two or more variables. With the help of decision trees, we can create new variables/features that have better power to predict the target variable. They can also be used in the data-exploration stage: for example, when working on a problem where information is spread across hundreds of variables, a decision tree helps identify the most significant ones.
3. Decision trees implicitly perform variable screening or feature selection.
4. Decision trees require relatively little effort from users for data preparation.
5. Less data cleaning required: Decision trees require less data cleaning than some other modeling techniques; they are not influenced by outliers and missing values to a fair degree.
6. Data type is not a constraint: Decision trees can handle both numerical and categorical variables, and can also handle multi-output problems.
7. Non-parametric method: The decision tree is considered a non-parametric method. This means that decision trees make no assumptions about the spatial distribution of the data or the classifier structure.
8. Non-linear relationships between parameters do not affect tree performance.
9. The number of hyperparameters to be tuned is almost null.
Disadvantages of Decision Tree:
1. Overfitting: Decision-tree learners can create over-complex trees that do not generalize the data well. This is called overfitting, and it is one of the most practical difficulties for decision tree models. The issue is addressed by setting constraints on model parameters and by pruning.
2. Not fit for continuous variables: When working with continuous numerical variables, the decision tree loses information as it categorizes the variables into discrete classes.
3. Decision trees can be unstable, because small variations in the data may result in a completely different tree being generated. This is called variance, and it can be lowered by methods like bagging and boosting.
4. Greedy algorithms cannot guarantee returning the globally optimal decision tree. This can be mitigated by training multiple trees in which the features and samples are randomly sampled with replacement.
5. Decision-tree learners create biased trees if some classes dominate. It is therefore recommended to balance the dataset prior to fitting the decision tree.
6. Information gain in a decision tree with categorical variables gives a biased response toward attributes with a greater number of categories.
7. In general, a decision tree gives lower prediction accuracy on a dataset compared with other machine-learning algorithms.
8. Calculations can become complex when there are many class labels.
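One common mitigation for the overfitting problem in point 1 is pre-pruning: stop growing the tree once a maximum depth is reached and let the leaf predict the majority class. The toy recursive builder below, with a single binary feature, is a sketch of the idea rather than a production implementation:

```python
from collections import Counter

def build_tree(samples, depth, max_depth):
    """Grow a toy tree on (feature_value, label) pairs.

    Pre-pruning via max_depth stops the recursion early so the tree
    cannot grow arbitrarily complex (one guard against overfitting).
    """
    labels = [y for _, y in samples]
    if depth >= max_depth or len(set(labels)) == 1:
        # Leaf: predict the majority class instead of splitting further.
        return Counter(labels).most_common(1)[0][0]
    left  = [s for s in samples if s[0] == 0]
    right = [s for s in samples if s[0] == 1]
    if not left or not right:        # split does not separate anything
        return Counter(labels).most_common(1)[0][0]
    return {"feature==0": build_tree(left,  depth + 1, max_depth),
            "feature==1": build_tree(right, depth + 1, max_depth)}

data = [(0, "NO"), (0, "NO"), (1, "YES")]
print(build_tree(data, 0, 2))  # a split on the feature
print(build_tree(data, 0, 0))  # depth 0: a single majority-class leaf
```

With `max_depth=0` the model collapses to the majority class, while a larger depth lets it fit the data more closely; tuning this trade-off is exactly the constraint-setting mentioned above.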
Examples of decision trees can include:
Manufacturing – Chemical material evaluation for manufacturing.
Production – Process optimization in electrochemical machining.
Biomedical Engineering – Identifying features to be used in implantable devices.
Planning – Scheduling of printed circuit board assembly lines.
Medicine – Analysis of Sudden Infant Death Syndrome (SIDS).
One more general example of a decision tree is shown in the figure below.
[Figure: decision tree example]
As a result, the decision tree is one of the more popular classification algorithms, used not only in machine learning but in day-to-day life as well. Because of their simplicity, tree diagrams have been used in a broad range of industries and disciplines, including civil planning, energy, finance, engineering, healthcare, pharmaceuticals, education, law, and business.
I hope you enjoyed reading this article and that you now understand the Decision Tree algorithm.
For more such blogs/courses on data science, machine learning, artificial intelligence, and emerging new technologies, do visit us at InsideAIML.