"Train" and "Test" sets in Data Science

0 votes
I'm new to Data Science and trying to understand why the entire dataset is divided into "Train" and "Test" sets while building a machine learning algorithm on top of it. Why can't I just build my algorithm on top of the entire dataset?
Mar 26, 2018 in Data Analytics by kurt_cobain
• 9,280 points

2 answers to this question.

0 votes
Whenever we build a supervised learning algorithm on top of a dataset, it is important to divide the entire dataset into "Train" and "Test" sets.

The algorithm is built on top of the "Train" set, so that it learns the patterns associated with the training data.

Once the algorithm has learned the patterns in the "Train" set, we check its accuracy on the "Test" set.

If we build a supervised learning algorithm, such as a classification model or a regression model, on top of the entire dataset, we have no held-out data left to measure its performance on. A model that merely memorizes the training data ("over-fitting") will look perfect during training but fail when it is given new data, and without a test set we cannot detect this.

Thus, to check that the model generalizes to unseen data, the entire dataset is divided into "Train" and "Test" sets.
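As a minimal sketch of the idea above, assuming scikit-learn is available: split the data, fit only on the training portion, and score on the held-out portion (the dataset and model here are just illustrative choices).

```python
# Sketch: train/test split and held-out evaluation with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# Hold out 30% of the rows as the "Test" set.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)             # learn only from the "Train" set

train_acc = model.score(X_train, y_train)
test_acc = model.score(X_test, y_test)  # honest estimate on unseen data
print(f"train accuracy: {train_acc:.2f}, test accuracy: {test_acc:.2f}")
```

If the training accuracy is much higher than the test accuracy, the model is over-fitting, which is exactly what scoring on the full dataset would have hidden.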
answered Mar 26, 2018 by Bharani
• 4,560 points
0 votes

Normally to perform supervised learning you need two types of data sets:

  1. In one dataset (your "gold standard") you have the input data together with the correct/expected output. This dataset is usually duly prepared either by humans or by collecting data in a semi-automated way. It is important that you have the expected output for every data row here, because you need this for supervised learning.

  2. The data you are going to apply your model to. In many cases this is the data for which you are interested in your model's output, so you don't have any "expected" output here yet.

While performing machine learning you do the following:

  1. Training phase: you present your data from your "gold standard" and train your model, by pairing the input with expected output.
  2. Validation/Test phase: you estimate how well your model has been trained (this depends on the size of your data, the value you would like to predict, the input, etc.) and you measure model properties (mean error for numeric predictors, classification error for classifiers, recall and precision for IR models, etc.).
  3. Application phase: now you apply your freshly-developed model to the real-world data and get the results. Since you normally don't have any reference value in this type of data (otherwise, why would you need your model?), you can only speculate about the quality of your model output using the results of your validation phase.

The validation phase is often split into two parts:

  1. In the first part you just look at your models and select the best performing approach using the validation data (=validation)
  2. Then you estimate the accuracy of the selected approach (=test).

Hence the common 50/25/25 separation.
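The 50/25/25 separation described above can be sketched with two successive splits, assuming scikit-learn is available (the dataset here is just an illustrative choice):

```python
# Sketch: a 50/25/25 train/validation/test split via two splits.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# First carve off 50% of the rows for training.
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.5, random_state=0)

# Split the remainder evenly: ~25% validation, ~25% test.
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=0.5, random_state=0)

# The validation set selects the best approach; the test set then
# gives an unbiased accuracy estimate for the selected approach.
print(len(X_train), len(X_val), len(X_test))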

If you don't need to choose an appropriate model from several rival approaches, you can re-partition your data so that you have only a training set and a test set, without the validation step. I personally partition them 70/30 in that case.

answered Aug 2, 2018 by Anmol
• 3,620 points
