Key Concepts of Machine Learning | Day (2/45) | A2Z ML | Mohd Saqib

Mohd Saqib
13 min read · Jan 16, 2023

If you have not read my previous blog yet, start there — Prev

Day 2

Index:
- Variance vs Bias
- Overfitting and Underfitting
- Regularization
- Parameter vs Hyperparameter

Variance vs Bias

In machine learning, variance and bias are two concepts that describe the errors that can occur in a model.

Bias refers to the error that is introduced by approximating a real-world problem, which may be extremely complex, by a simpler model. A model with high bias pays little attention to the training data and oversimplifies the problem, resulting in underfitting. A high bias model will have a high training error.

Variance, on the other hand, refers to the error that is introduced by the model’s sensitivity to small fluctuations in the training data. A model with high variance pays too much attention to the training data and fits it too closely, resulting in overfitting. A high variance model will have a high test error and a low training error.
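The contrast above can be seen in a minimal sketch (my own illustrative example, not from the article): fitting noisy samples of a sine curve with a degree-1 polynomial (high bias, underfits) versus a degree-12 polynomial (high variance, overfits). All data and degrees here are assumptions chosen for demonstration.

```python
# Illustrative sketch: high-bias vs. high-variance polynomial fits
# on noisy samples of sin(2*pi*x). Data and degrees are assumptions.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.sort(rng.uniform(0, 1, 20))
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, 20)
x_test = np.sort(rng.uniform(0, 1, 20))
y_test = np.sin(2 * np.pi * x_test) + rng.normal(0, 0.2, 20)

def mse(degree, x_eval, y_eval):
    # Fit a polynomial of the given degree on the training set,
    # then return mean squared error on the evaluation set.
    coeffs = np.polyfit(x_train, y_train, degree)
    return np.mean((np.polyval(coeffs, x_eval) - y_eval) ** 2)

for degree in (1, 12):
    print(f"degree={degree:2d}  "
          f"train MSE={mse(degree, x_train, y_train):.3f}  "
          f"test MSE={mse(degree, x_test, y_test):.3f}")
```

The degree-1 model cannot capture the curve at all, so even its training error is high (bias). The degree-12 model bends to chase the noise in the 20 training points, driving training error down while generalizing worse (variance).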


