Machine learning

 

Loss Function:

A loss function in machine learning is a mathematical function that quantifies the discrepancy between a model's predicted output and the actual output.

By comparing predictions to actual values, it evaluates how well a machine learning model is performing.

The objective of training is to minimize the loss function's value; a small loss indicates that the model is predicting the output accurately.

The following are some key points about loss functions in machine learning:

  1. Loss functions are used to train machine learning models by reducing the discrepancy between predicted and actual results.

  2. Regression and classification problems each call for a different kind of loss function.

  3. Common loss functions include mean squared error, binary cross-entropy, and categorical cross-entropy.

  4. The choice of loss function depends on the task at hand and the type of data being used.

  5. The optimization method that trains a machine learning model uses the loss function to update the model's parameters (see the sketch after this list).

  6. The choice of optimization technique, such as stochastic gradient descent or Adam, also affects the effectiveness of the model.
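
To make points 5 and 6 concrete, the sketch below runs plain gradient descent on an MSE loss for a one-parameter linear model; the data, learning rate, and step count are illustrative choices, not prescriptions.

    import numpy as np

    # Toy data generated from y = 2x, so the optimal weight is w = 2.
    x = np.array([1.0, 2.0, 3.0, 4.0])
    y = np.array([2.0, 4.0, 6.0, 8.0])

    w = 0.0    # single model parameter, initialized arbitrarily
    lr = 0.01  # learning rate

    for step in range(200):
        y_pred = w * x                           # model prediction
        loss = np.mean((y - y_pred) ** 2)        # MSE loss
        grad = -2.0 * np.mean((y - y_pred) * x)  # derivative of the loss w.r.t. w
        w -= lr * grad                           # parameter update

    print(f"w = {w:.4f}, loss = {loss:.6f}")  # w approaches 2.0, loss approaches 0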

A demonstration is shown below:
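
The snippet below is a minimal NumPy sketch with illustrative values: it computes by hand the three losses discussed in the rest of this section, MSE, MAE, and binary cross-entropy.

    import numpy as np

    # Regression example: actual targets and model predictions (illustrative values).
    y_true = np.array([3.0, -0.5, 2.0, 7.0])
    y_pred = np.array([2.5, 0.0, 2.0, 8.0])

    # Mean Squared Error: the average of the squared differences.
    mse = np.mean((y_true - y_pred) ** 2)
    print(f"MSE: {mse:.4f}")  # 0.3750

    # Mean Absolute Error: the average of the absolute differences.
    mae = np.mean(np.abs(y_true - y_pred))
    print(f"MAE: {mae:.4f}")  # 0.5000

    # Binary classification example: labels in {0, 1} and predicted probabilities.
    labels = np.array([1, 0, 1, 1])
    probs = np.array([0.9, 0.1, 0.8, 0.6])

    # Binary cross-entropy: log loss, penalizing confident mistakes heavily.
    bce = -np.mean(labels * np.log(probs) + (1 - labels) * np.log(1 - probs))
    print(f"Binary cross-entropy: {bce:.4f}")  # 0.2362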

 

 

Codeblock E.1. Loss functions demonstration.

 

Mean Squared Error (MSE) Loss Function: Used in regression problems to quantify the discrepancy between predicted and actual outputs. The loss is the mean of the squared differences between the predicted and actual values, so large errors contribute disproportionately. This makes it sensitive to outliers.

The Mean Absolute Error (MAE) Loss Function is also used in regression problems to quantify the discrepancy between predicted and actual outputs. However, it uses the absolute difference between the predicted and actual output rather than the squared difference, which makes it less sensitive to outliers than MSE.

The binary cross-entropy loss function is applied in binary classification problems where the label is either 0 or 1. It uses the logarithmic loss to measure the discrepancy between the predicted probability and the actual label, and it heavily penalizes predictions that are confidently wrong.

The categorical cross-entropy loss function is used in multi-class classification problems, where the output can belong to one of several classes. An extension of binary cross-entropy, it uses the logarithmic loss to measure the difference between the predicted class probabilities and the actual class.
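
As a sketch, categorical cross-entropy can be computed from one-hot labels and predicted class probabilities; the arrays below are illustrative.

    import numpy as np

    # One-hot true labels for 3 samples over 3 classes (illustrative values).
    y_true = np.array([[1, 0, 0],
                       [0, 1, 0],
                       [0, 0, 1]])

    # Predicted class probabilities (each row sums to 1).
    y_pred = np.array([[0.7, 0.2, 0.1],
                       [0.1, 0.8, 0.1],
                       [0.2, 0.2, 0.6]])

    # Categorical cross-entropy: average negative log-probability of the true class.
    cce = -np.mean(np.sum(y_true * np.log(y_pred), axis=1))
    print(f"Categorical cross-entropy: {cce:.4f}")  # 0.3635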

The hinge loss function is employed in binary classification problems where the labels are encoded as -1 or 1. It penalizes incorrect predictions as well as correct predictions made without a sufficient margin, and it is frequently used with support vector machines (SVMs).
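
A minimal sketch of the hinge loss, with labels in {-1, 1} and illustrative raw model scores (not probabilities):

    import numpy as np

    # True labels in {-1, 1} and raw model scores (illustrative values).
    y_true = np.array([1, -1, 1, -1])
    scores = np.array([0.8, -0.5, -0.2, 2.0])

    # Hinge loss: zero for correct predictions with margin >= 1,
    # growing linearly for small margins and for wrong predictions.
    hinge = np.mean(np.maximum(0.0, 1.0 - y_true * scores))
    print(f"Hinge loss: {hinge:.4f}")  # 1.2250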

The Kullback-Leibler (KL) divergence loss function measures the difference between two probability distributions. It is frequently used in unsupervised learning to measure the discrepancy between a predicted probability distribution and the actual one.
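
A minimal sketch of KL divergence between two small discrete distributions; the probabilities are illustrative and each distribution sums to 1.

    import numpy as np

    p = np.array([0.4, 0.4, 0.2])  # target distribution
    q = np.array([0.3, 0.5, 0.2])  # predicted / approximating distribution

    # KL divergence D(p || q): the expected log-ratio under p.
    # It is asymmetric and equals zero only when p and q are identical.
    kl = np.sum(p * np.log(p / q))
    print(f"KL divergence: {kl:.4f}")  # 0.0258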

You can download the Loss functions.ipynb file used here.

 


 

 

---- Summary ----

Some frequently employed loss functions in machine learning are listed below:

  • Mean Squared Error (MSE) measures the average of the squared differences between the predicted and actual values. MSE is frequently used for regression problems.

  • Binary cross-entropy measures the discrepancy between predicted and actual values for binary classification problems.

  • Categorical cross-entropy measures the discrepancy between predicted and actual values for multi-class classification problems.

  • Hinge loss is used when training classifiers for binary classification problems.

  • Huber loss combines MSE and MAE and is less sensitive to outliers than MSE.

  • Kullback-Leibler (KL) divergence, frequently employed in generative models, measures the difference between two probability distributions.

  • L1 loss (MAE) is a commonly used measure for regression problems that evaluates the absolute differences between the predicted and actual values.

  • Log loss (also known as binary cross-entropy) measures the performance of a classification model and is frequently applied to logistic regression models.

  • Softmax loss measures the discrepancy between the predicted and actual probabilities in multi-class classification tasks.


________________________________________________________________________________________________________________________________

Copyright © 2022-2023. Anoop Johny. All Rights Reserved.