
Kajal Pawar

a year ago

Activation functions are a very important component of neural networks in deep learning. They help determine the output of a deep learning model, its accuracy, and the computational efficiency of training the model. They also have a major effect on whether a neural network converges and on its convergence speed; in some cases, a poorly chosen activation function can even prevent the network from converging. So, let's understand **activation functions**, the **types of activation functions**, and their importance and limitations in detail.

Activation functions help us determine the output of a neural network. These functions are attached to each neuron in the network and determine whether it should be activated or not, based on whether that neuron's input is relevant to the model's prediction. An activation function also helps normalize the output of each neuron to a range, typically between **0** and **1** or between **-1** and **1**. Since a neural network is often trained on millions of data points, the activation function must be efficient enough to reduce computation time and improve performance.
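As a minimal sketch of this normalizing behavior, the two classic "squashing" activations can be written in plain Python: sigmoid maps any real input into (0, 1), and tanh maps it into (-1, 1). (These particular functions are standard examples, not ones named by the answer above.)

```python
import math

def sigmoid(x):
    # Squashes any real-valued input into the open range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    # Squashes any real-valued input into the open range (-1, 1).
    return math.tanh(x)

print(sigmoid(0.0))   # 0.5 — the midpoint of the (0, 1) range
print(tanh(0.0))      # 0.0 — the midpoint of the (-1, 1) range
```

However large or small the raw input, the activated value stays inside the bounded range, which is exactly the normalization described above.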

In a neural network, inputs are fed into the neurons of the input layer. Each input has a weight, and multiplying each input by its weight and summing the results gives the neuron's output, which is then transferred to the next layer, and this process continues. The output can be represented as:

**Y = ∑ (weights \* input) + bias**

Note: The range of Y can be anywhere between -infinity and +infinity. So, to bring the output into our desired prediction range or a generalized result, we have to pass this value through an activation function. The activation function is a type of mathematical "gate" between the input feeding the current neuron and its output going to the next layer. It can be as simple as a step function that turns the neuron's output **on** or **off**, depending on a rule or threshold that is provided. The final output can be represented as:

**Y = Activation function(∑ (weights \* input) + bias)**
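The two formulas above can be sketched as a single artificial neuron in Python. The helper names (`neuron_output`, `step`) are illustrative, not from any particular library; the step function here plays the role of the on/off "gate" described above.

```python
def step(y, threshold=0.0):
    # Simple step "gate": output is on (1) if y exceeds the
    # threshold, otherwise off (0).
    return 1 if y > threshold else 0

def neuron_output(inputs, weights, bias, activation):
    # Y = activation( sum(weights * input) + bias )
    y = sum(w * x for w, x in zip(weights, inputs)) + bias
    return activation(y)

# Weighted sum: 2.0*0.5 + 1.0*(-1.0) + 0.3 = 0.3 > 0, so the neuron fires.
out = neuron_output([0.5, -1.0], [2.0, 1.0], 0.3, step)
print(out)  # 1
```

Swapping `step` for a smooth function such as a sigmoid gives the same structure with a graded output instead of a hard on/off decision.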
