# Root-Mean-Square Error (RMSE) | Machine Learning

Root-Mean-Square Error (RMSE): In this article, we are going to learn one of the methods to determine the accuracy of our model in predicting the target values.
Submitted by Raunak Goswami, on August 16, 2018

Hello learners, welcome to yet another article on machine learning. Today we will look at one of the methods to determine the accuracy of our model in predicting target values. You have probably heard of the term RMS, i.e., Root Mean Square, and you may have used RMS values in statistics as well. In machine learning, when we want to judge the accuracy of our model, we take the root mean square of the error between the test values and the predicted values. Mathematically:

For a single value:

```
Let a = (predicted value - actual value)^2
Let b = mean of a = a (for a single value)
Then RMSE = square root of b
```

For a set of n values, RMSE is defined as RMSE = sqrt((1/n) * Σ (predicted value − actual value)²). Graphically: in the scatter plot, the red dots are the actual values and the blue line is the set of predicted values drawn by our model. The vertical distance between an actual value and the predicted line represents the error for that point, and similarly we can draw straight lines from each red dot to the blue line. Squaring all those distances, taking their mean, and finally taking the square root gives us the RMSE of our model.
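The definition above can be sketched in a few lines of NumPy. The sample arrays below are made-up values purely for illustration:

```python
import numpy as np

# hypothetical actual and predicted values, for illustration only
actual = np.array([3.0, 5.0, 2.5, 7.0])
predicted = np.array([2.5, 5.0, 3.0, 8.0])

errors = predicted - actual        # residual for each point
mse = np.mean(errors ** 2)         # mean of the squared residuals
rmse = np.sqrt(mse)                # root of that mean
print("RMSE:", rmse)
```

Note the order of operations: square each error first, then average, then take the root. Averaging the raw errors first would let positive and negative errors cancel out.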

Let us write Python code to find the RMSE value of our model. We will predict the brain weight of users, using linear regression to train the model. The dataset used in the code can be downloaded from here: headbrain6

Python code:

```python
# -*- coding: utf-8 -*-
"""
Created on Sun Jul 29 22:21:12 2018

@author: Raunak Goswami
"""
import time
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

"""
here the directory of my code and the headbrain6.csv file
is same make sure both the files are stored in same folder or directory
"""
x=data.iloc[:,2:3].values
y=data.iloc[:,3:4].values

#splitting the data into training and test
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test=train_test_split(x,y,test_size=1/4,random_state=0)

#fitting simple linear regression to the training set
from sklearn.linear_model import LinearRegression
regressor=LinearRegression()
regressor.fit(x_train,y_train)

#predict the test result
y_pred=regressor.predict(x_test)

#to see the relationship between the training data values
plt.scatter(x_train,y_train,c='red')
plt.show()

#to see the relationship between the predicted
#brain weight values using a scatter graph
plt.plot(x_test,y_pred)
plt.scatter(x_test,y_test,c='red')
plt.ylabel('brain weight')
plt.show()

#error in each value
for i in range(len(y_test)):
    print("Error in value number", i, (y_test[i] - y_pred[i]))
time.sleep(1)

#combined rmse value
mse = np.mean((y_test - y_pred) ** 2)
print("Final rmse value is =", np.sqrt(mse))
```
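The same RMSE can also be obtained with scikit-learn's `mean_squared_error` helper from `sklearn.metrics`, which avoids writing the formula by hand. The arrays below are hypothetical stand-ins for `y_test` and `y_pred`:

```python
import numpy as np
from sklearn.metrics import mean_squared_error

# hypothetical actual and predicted brain weights (grams)
y_true = np.array([1200.0, 1350.0, 1100.0, 1450.0])
y_hat = np.array([1250.0, 1300.0, 1150.0, 1400.0])

# mean_squared_error returns the MSE; take its square root for RMSE
rmse = np.sqrt(mean_squared_error(y_true, y_hat))
print("RMSE:", rmse)
```

Here every prediction is off by exactly 50 grams, so the RMSE is 50.0.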

Outputs: The RMSE value of our model comes out to be approximately 73, which is not bad. A good model should have an RMSE value less than 180. If you have a higher RMSE value, it probably means you need to change your features or tweak your hyperparameters. In case you want to know how the model predicted the values, have a look at my previous article on linear regression.