#### Machine Learning with Python & Statistics


# Python Deep Basic Machine Learning

Neha Kumawat

a year ago

Artificial Intelligence (AI) is any code, algorithm or technique that enables a computer to mimic human cognitive behaviour or intelligence. Machine Learning (ML) is a subset of AI that uses statistical methods to enable machines to learn and improve with experience. Deep Learning is a subset of Machine Learning that makes training multi-layer neural networks feasible. Machine Learning is often described as shallow learning, while Deep Learning is hierarchical learning that builds layered abstractions.
Machine learning covers a wide range of concepts, including −
- Supervised learning
- Unsupervised learning
- Reinforcement learning
- Linear regression
- Cost functions
- Overfitting
- Underfitting
- Hyper-parameters
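Several of these concepts can be seen together in one tiny example. The sketch below fits a straight line by gradient descent, minimising a mean-squared-error cost function; the learning rate is a hyper-parameter, and the data points are made up for illustration:

```python
import numpy as np

# Hypothetical noise-free data lying on the line y = 2x + 1
X = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * X + 1.0

w, b = 0.0, 0.0
lr = 0.05                            # learning rate: a hyper-parameter
for _ in range(2000):
    err = (w * X + b) - y            # prediction error
    cost = np.mean(err ** 2)         # mean-squared-error cost function
    w -= lr * 2 * np.mean(err * X)   # gradient descent updates
    b -= lr * 2 * np.mean(err)

print(round(w, 2), round(b, 2))      # converges to w ≈ 2, b ≈ 1
```

Because the cost is convex in `w` and `b`, gradient descent with a suitably small learning rate steadily reduces it; too large a learning rate would make the updates diverge, which is why it must be tuned.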
In supervised learning, we learn to predict values from labelled data. One ML technique that helps here is classification, where the target values are discrete; for example, distinguishing cats from dogs. Another helpful technique is regression, where the target values are continuous; for example, stock market data can be analysed using regression.
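As an illustration of classification, the sketch below applies a nearest-centroid rule to made-up measurements for cats and dogs; the features and values are entirely hypothetical:

```python
import numpy as np

# Hypothetical labelled data: two features per animal (weight kg, ear length cm)
cats = np.array([[4.0, 6.5], [3.5, 7.0], [4.5, 6.0]])
dogs = np.array([[20.0, 12.0], [25.0, 11.0], [18.0, 13.0]])

def classify(sample):
    # Nearest-centroid rule: predict the class whose mean point is closest
    d_cat = np.linalg.norm(sample - cats.mean(axis=0))
    d_dog = np.linalg.norm(sample - dogs.mean(axis=0))
    return "cat" if d_cat < d_dog else "dog"

print(classify(np.array([4.2, 6.8])))    # → cat
print(classify(np.array([22.0, 11.5])))  # → dog
```

The discrete output ("cat" or "dog") is what makes this classification; a regression model would instead output a continuous number, such as a predicted price.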
In unsupervised learning, we make inferences from input data that is not labelled or structured. If we have a million medical records and need to make sense of them, find the underlying structure, or detect outliers and anomalies, we use clustering techniques to divide the data into broad clusters.
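A minimal clustering sketch, assuming two well-separated groups of synthetic records, might look like the hand-rolled k-means below (an illustration, not a production implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical unlabelled records: two well-separated synthetic groups
data = np.vstack([rng.normal(0.0, 0.5, (50, 2)),
                  rng.normal(5.0, 0.5, (50, 2))])

# Minimal k-means: start one centre in each group, then alternate between
# assigning points to the nearest centre and recomputing the centres
centres = np.array([data[0], data[-1]])
for _ in range(10):
    dists = np.linalg.norm(data[:, None, :] - centres[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    centres = np.array([data[labels == k].mean(axis=0) for k in range(2)])

print(np.sort(centres[:, 0]).round(1))  # centres land near the two group means
```

No labels are supplied anywhere: the algorithm discovers the two groups purely from the structure of the data, which is the essence of unsupervised learning.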
Data sets are divided into training sets, testing sets, validation sets and so on.
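A common (though by no means universal) convention is a 70/15/15 split, sketched here on a synthetic data set:

```python
import numpy as np

rng = np.random.default_rng(42)
data = np.arange(100)            # hypothetical data set of 100 records
rng.shuffle(data)                # shuffle before splitting to avoid ordering bias

# 70% training, 15% validation, 15% testing
train, val, test = np.split(data, [70, 85])
print(len(train), len(val), len(test))   # → 70 15 15
```

The training set fits the model, the validation set guides choices such as hyper-parameters, and the test set is held back for a final, unbiased performance estimate.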
A breakthrough in 2012 brought Deep Learning into prominence, when a deep neural network successfully classified over a million images into 1,000 categories, trained on two GPUs with large-scale image data.

## Relating Deep Learning and Traditional Machine Learning

One of the major challenges in traditional machine learning models is feature extraction: the programmer must tell the computer explicitly which features to look for, and these features then drive the model's decisions.
Entering raw data into the algorithm rarely works, so feature extraction is a critical part of the traditional machine learning workflow.
This places a huge responsibility on the programmer, and the algorithm's efficiency relies heavily on how inventive the programmer is. For complex problems such as object recognition or handwriting recognition, this is a huge issue.
Deep learning, with its ability to learn multiple layers of representation, is one of the few methods that helps with automatic feature extraction. The lower layers can be thought of as performing automatic feature extraction, requiring little or no guidance from the programmer.
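The idea of layered representations can be illustrated with a two-layer network whose weights are set by hand (rather than learned, for clarity): the first layer extracts OR-like and AND-like features from the inputs, and the second combines them into XOR, a function no single-layer linear model can represent:

```python
import numpy as np

def step(z):
    # Hard threshold activation, used here for a clean illustration
    return (z > 0).astype(float)

# Hand-set weights: in a trained deep network these would be learned
W1 = np.array([[1.0, 1.0],    # hidden unit 1: x1 + x2 (OR-like detector)
               [1.0, 1.0]])   # hidden unit 2: x1 + x2 (AND-like detector)
b1 = np.array([-0.5, -1.5])   # thresholds that turn the sums into OR and AND
W2 = np.array([1.0, -1.0])    # output: OR and not AND, i.e. XOR
b2 = -0.5

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
hidden = step(X @ W1 + b1)    # layer 1: intermediate "features"
out = step(hidden @ W2 + b2)  # layer 2: decision built on those features
print(out)                    # → [0. 1. 1. 0.]
```

The hidden layer plays the role the lower layers play in a deep network: it transforms raw inputs into intermediate features that make the final decision easy, without the programmer specifying those features for the output layer.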