27-09-2023 11:17 AM

The goal of a machine learning regression problem is to predict a single numeric value. For example, you might want to predict the price of a house based on its square footage, number of bedrooms, local tax rate and so on.

There are roughly a dozen major regression techniques, and each technique has several variations. Among the most common techniques are linear regression, linear ridge regression, k-nearest neighbors regression, kernel ridge regression, Gaussian process regression, decision tree regression and neural network regression.

This article explains how to implement linear ridge regression from scratch, using the C# language. Linear ridge regression (LRR) is a relatively simple variation of standard linear regression. However, LRR and standard linear regression are usually considered distinct techniques, in part because they are often applied to different types of problems. LRR is typically used for machine learning scenarios where there is training and test data.

A good way to see where this article is headed is to take a look at the screenshot of a demo program in Figure 1. The demo program uses synthetic data that was generated by a 3-10-1 neural network with random weights. There are three predictor variables that have values between -1.0 and +1.0, and a target variable to predict, which is a value between 0.0 and 1.0. There are 40 training items and 10 test items.

The demo program creates a linear ridge regression model using the training data. The model uses a parameter named alpha, which is set to 0.05. The alpha value is the "ridge" part of "linear ridge regression." The alpha parameter is used behind the scenes to apply L2 regularization, which is a standard machine learning technique to prevent model overfitting. The value of alpha must be determined by trial and error.
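To make the role of alpha concrete, here is a minimal, hypothetical C# sketch of ridge regression in the simplest possible setting: one predictor variable and no intercept term, where the closed-form ridge solution reduces to a scalar, w = sum(x*y) / (sum(x*x) + alpha). The data values below are made up for illustration and are not the demo program's synthetic data.

```csharp
using System;

class RidgeSketch
{
  static void Main()
  {
    // hypothetical single-predictor data (not the demo data)
    double[] x = { 1.0, 2.0, 3.0, 4.0 };
    double[] y = { 2.1, 3.9, 6.2, 8.0 };
    double alpha = 0.05;  // ridge (L2 regularization) strength

    double sumXY = 0.0, sumXX = 0.0;
    for (int i = 0; i < x.Length; ++i)
    {
      sumXY += x[i] * y[i];
      sumXX += x[i] * x[i];
    }

    double wOls = sumXY / sumXX;             // ordinary least squares weight
    double wRidge = sumXY / (sumXX + alpha); // ridge weight: alpha shrinks w

    Console.WriteLine($"OLS w   = {wOls:F4}");
    Console.WriteLine($"ridge w = {wRidge:F4}");
  }
}
```

Notice that setting alpha = 0 recovers the ordinary least squares weight, and that increasing alpha shrinks the weight toward zero. That shrinkage is the L2 regularization effect, and it is why alpha must be tuned by trial and error: too small and the model can overfit, too large and the model underfits.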