Building a multiple linear regression model and assumptions of linear regression: A-Z

Let us take a break from model building here and first understand a few things that will help us judge how good the model we have built really is.

After building our multiple regression model, let us move on to a very crucial step before making any predictions using our model. There are some assumptions that need to be taken care of when implementing a regression model, and all of them must hold true before you start relying on your linear regression model.

  • Assumption 1: The relationship between your independent and dependent variables should always be linear, i.e. you can depict the relationship between two variables with the help of a straight line. Let us check whether this assumption holds true for our case.

Looking at the second row and second column of the scatterplot matrix, we can say that our independent variables possess a linear relationship with our dependent variable (cab price in our case); observe how the scatterplots more or less take the shape of a line.
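If you want to reproduce such a scatterplot matrix yourself, pairs() is a quick way to do it. This is only a sketch: the column names below are taken from the regression formula used later in this article and may need to be adjusted to match your own data set.

# Scatterplot matrix to eyeball linearity between cab price and each predictor
pairs(~ Cab.Price + Demand + Fuel.Price + Vehicle.Price + Profit.by.driver,
      data = training_set,
      main = "Cab price vs predictors")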

  • Assumption 2: The mean of the residuals should be zero, or as close to zero as possible. This check tells us whether our line is actually the line of “best fit”.

We want the mean of these residuals to be as close to zero as possible, as that makes sure our predicted cab price is as close to the actual cab price as possible. In our case, the mean of the residuals is indeed very close to 0, hence the second assumption also holds true.

> mean(regressor$residuals)
[1] 6.245005e-17
  • Assumption 3: There should be homoscedasticity, or equal variance, in our regression model. This assumption means that the variance around the regression line is the same for all values of the predictor variable (X). A plot violating this assumption would show the points all very near the regression line for the lower values on the X-axis, but much more variability around the regression line for the higher values on the X-axis.

To check homoscedasticity, we plot the residual values on the y-axis against the predicted (fitted) values on the x-axis. If the points fan out into a funnel or cone shape, the model is heteroscedastic: the variability of the residuals is unequal across the range of values of the variable that predicts them. If the points form a roughly constant band with no pattern, the assumption of homoscedasticity holds.

par(mfrow = c(2, 2)) # set a 2 x 2 plot layout
plot(regressor)      # residuals vs fitted, normal Q-Q, scale-location, residuals vs leverage

From the first plot (top-left), as the fitted values along the x-axis increase, the residuals remain more or less constant. This pattern is indicated by the red line, which should be approximately flat if the disturbances are homoscedastic. The plot on the bottom left also checks this, and is more convenient since the disturbance term on the y-axis is standardised. The points appear random and the line looks pretty flat in the top-left graph, with no increasing or decreasing trend, so the condition of homoscedasticity can be accepted.
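If you prefer a formal test over eyeballing the plots, the Breusch-Pagan test from the lmtest package (an extra package, not used elsewhere in this article) checks the same assumption numerically:

library(lmtest)    # install.packages("lmtest") if needed
bptest(regressor)  # null hypothesis: residuals have constant variance
# A p-value above 0.05 means we cannot reject homoscedasticity.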

  • Assumption 4: The independent variables and the residuals should be uncorrelated.
cor.test(training_set$Months, regressor$residuals)

Here, our null hypothesis is that there is no relationship between our independent variable Months and the residuals, while the alternate hypothesis is that there is a relationship between Months and the residuals. Since the p-value is very high, we cannot reject the null hypothesis, hence our assumption holds true for this variable in this model. We need to repeat this step for the other independent variables as well.
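To avoid running the test by hand for every predictor, we can loop over them. The column names below mirror the variables used in this article's model and data set, so adjust them to your own data:

# Correlation test between each independent variable and the model residuals
predictors <- c("Months", "Demand", "Fuel.Price", "Vehicle.Price", "Profit.by.driver")
for (p in predictors) {
  test <- cor.test(training_set[[p]], regressor$residuals)
  cat(p, ": correlation =", round(test$estimate, 4),
      ", p-value =", round(test$p.value, 4), "\n")
}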

  • Assumption 5: The number of observations should be greater than the number of independent variables. We can check this by simply looking at the dimensions of our data set.
  • Assumption 6: There should be no perfect multicollinearity in your model. Multicollinearity generally occurs when there are high correlations between two or more independent variables. In other words, one independent variable can be used to predict another. This creates redundant information, skewing the results of a regression model. We can check multicollinearity using the VIF (variance inflation factor). The higher the VIF for an independent variable, the greater the chance that the variable is already explained by the other independent variables (a quick VIF check is sketched just after this list).
  • Assumption 7: Residuals should be normally distributed. This can be checked by visualizing the normal Q-Q plot (the top-right panel produced by plot(regressor) above). If the points lie exactly on the line, the distribution is perfectly normal. Some deviation is to be expected, particularly near the ends, but the deviations should be small (a formal normality test is also included in the sketch after this list).
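A minimal sketch of both checks in R, assuming the car package is installed (it is not used elsewhere in this article); shapiro.test() is part of base R:

library(car)                       # install.packages("car") if needed
vif(regressor)                     # one VIF value per predictor; values above roughly 5-10 flag multicollinearity

shapiro.test(regressor$residuals)  # Shapiro-Wilk normality test; p > 0.05 supports normally distributed residuals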

Checking assumptions automatically

We can use the gvlma library to evaluate the basic assumptions of linear regression for us automatically.

library(gvlma)
regressor = lm(formula = Cab.Price ~ Demand + Fuel.Price + Vehicle.Price + Profit.by.driver,
               data = training_set)
gvlma(regressor)

Confused after observing the output of gvlma? Let us understand this output in detail.

1. Global Stat: It measures the linear relationship between our independent variables and the dependent variable which we are trying to predict. The null hypothesis is that there is a linear relationship between our independent and dependent variables. Since our p-value is > 0.05, we cannot reject the null hypothesis and can conclude that there is indeed a linear relationship between our independent and dependent variables.

2. Skewness: Data can be “skewed”, meaning it tends to have a long tail on one side or the other.

  • Negative Skew: The long tail is on the “negative” (left) side of the peak. Generally, mean < median < mode in this case.
  • No Skew: There is no long tail on either side of the peak; the distribution is symmetrical. Mean, median and mode are at the centre of the peak, i.e. mean = median = mode.
  • Positive Skew: The long tail is on the “positive” (right) side of the peak. Generally, mode < median < mean in this case.

We want our data to be normally distributed. The second statistic tests for skewness in our data. The null hypothesis states that our data is normally distributed (no significant skew). In our case, since the p-value for this test is > 0.05, we can safely conclude that the null hypothesis holds, and hence our data is not significantly skewed.

3. Kurtosis: The kurtosis parameter is a measure of the combined weight of the tails relative to the rest of the distribution, i.e. it measures the tail-heaviness of the distribution. Although not strictly correct, you can indirectly relate kurtosis to the shape of the peak of a distribution, i.e. whether it has a sharp peak or a shallow peak.

  • Mesokurtic: This is generally the ideal scenario where your data has a normal distribution.
  • Platykurtic (negative kurtosis score): A flatter, shallower peak than the normal distribution is observed, because less of the data sits in the tails of the distribution, i.e. the tails are thinner than those of the normal distribution and there are fewer chances of extreme outcomes compared to a normal distribution.
  • Leptokurtic (positive kurtosis score): A sharper peak than the normal distribution is observed, because more of the data sits in the tails of the distribution, i.e. the tails are fatter than those of the normal distribution and there are more chances of extreme outcomes compared to a normal distribution.

We want our data to be normally distributed. The third statistic looks at the amount of data present in the tails of the distribution. The null hypothesis states that our data is normally distributed (tails consistent with a normal distribution). In our case, since the p-value for this test is > 0.05, we can safely conclude that the null hypothesis holds, and hence our data shows no problematic kurtosis.
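If you want to see the raw numbers behind these two tests, the moments package (an extra package, not part of the gvlma output) reports sample skewness and kurtosis of the residuals directly:

library(moments)               # install.packages("moments") if needed
skewness(regressor$residuals)  # close to 0 for a symmetric distribution
kurtosis(regressor$residuals)  # close to 3 for a normal distribution (moments reports Pearson kurtosis, not excess kurtosis)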

4. Link function: It tells us whether our dependent variable is numeric or categorical. As I have already mentioned before, for linear regression your dependent variable should be numeric and not categorical. The null hypothesis states that our dependent variable is numeric. Since the p-value in this case is again > 0.05, the null hypothesis holds and we can conclude that our dependent variable is numeric.

5. Heteroscedasticity: Is the variance of your model residuals constant across the range of X (the assumption of homoscedasticity discussed above)? Rejection of the null hypothesis (p < 0.05) indicates that your residuals are heteroscedastic, i.e. non-constant across the range of X, and that your model is better or worse at predicting for certain ranges of your X scales.

As we can observe, gvlma has automatically tested our model against five basic assumptions of linear regression and, woohoo, our model has passed all of them. It is therefore a qualified model for making predictions and for understanding the influence of the independent variables (predictor variables) on your dependent variable.

Understanding R squared and Adjusted R squared

R-squared is a statistical measure of how close the data are to the fitted regression line. It is also known as the coefficient of determination, or the coefficient of multiple determination for multiple regression. It can also be defined as the percentage of the response variable variation that is explained by a linear model.

R-squared = Explained variation / Total variation
R-squared is always between 0 and 100%:

  • 0% indicates that the model explains none of the variability of the response data around its mean.
  • 100% indicates that the model explains all the variability of the response data around its mean.

In general, the higher the R-squared, the better the model fits your data.
But R-squared cannot determine whether the coefficient estimates and predictions are biased, which is why we must assess the residual plots. R-squared also has additional problems that the adjusted R-squared and predicted R-squared are designed to address.
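In R, both values are reported by summary() on the fitted model, so we can read them off directly:

model_summary <- summary(regressor)
model_summary$r.squared       # R-squared
model_summary$adj.r.squared   # adjusted R-squared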

Problem 1: Whenever we add a new independent variable (predictor) to our model, the R-squared value always increases, regardless of whether the new predictor holds much significance for the prediction. We may be tempted to push R-squared as high as possible by adding new predictors, without realizing that we end up adding a lot of complexity to our model, which makes it difficult to interpret.

Problem 2: If a model has too many predictors and higher-order polynomial terms, it begins to model the random noise in the data. This condition is known as over-fitting, and it produces misleadingly high R-squared values and a lessened ability to make predictions.

How adjusted R squared comes to our rescue

We will now have a look at how adjusted R-squared deals with the shortcomings of R-squared. To begin with, adjusted R-squared is a modified version of R-squared that has been adjusted for the number of predictors in the model. The adjusted R-squared increases only if a new term improves the model more than would be expected by chance. It decreases when a predictor improves the model by less than expected by chance. The adjusted R-squared can be negative, but it usually is not, and it is always lower than the R-squared.

Let us understand adjusted R-squared in more detail by going through its mathematical formula:

Adjusted R² = 1 - [(1 - R²)(n - 1) / (n - k - 1)]

where n = number of points in our data set
k = number of independent variables (predictors) used to develop the model

From the formula, we can observe that if we keep adding non-significant predictors, k increases while R-squared barely increases. Hence n - k - 1 decreases while the numerator stays almost the same as before, so the entire fraction increases (since the denominator has decreased with increased k). A bigger value then gets subtracted from 1, and we end up with a smaller adjusted R-squared. In this way, adjusted R-squared compensates by penalising us for extra variables that do not hold much significance in predicting our target variable.
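A quick illustration with made-up numbers (purely hypothetical, not taken from the cab data set): suppose a model with k = 4 predictors on n = 50 observations reaches R-squared = 0.900, and adding a fifth, useless predictor nudges R-squared up only to 0.902.

adj_r2 <- function(r2, n, k) 1 - (1 - r2) * (n - 1) / (n - k - 1)
adj_r2(0.900, n = 50, k = 4)  # approximately 0.8911
adj_r2(0.902, n = 50, k = 5)  # approximately 0.8909 -- adjusted R-squared drops despite the higher R-squared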
