Random Forest Classifier

Random forests are an ensemble learning method for classification (and regression) that operates by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes predicted by the individual trees.

Developed by Leo Breiman and Adele Cutler, the method combines Breiman's bagging idea with the random selection of features introduced by Tin Kam Ho, who first proposed random decision forests in 1995.

What are Ensemble Models?

Ensemble models combine the results from several different models. The combined result is usually better than the result from any one of the individual models.
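
For intuition, here is a minimal sketch of the ensemble idea in Python using scikit-learn; the choice of models and the Iris dataset are purely illustrative. Three different classifiers each make a prediction, and the majority vote wins:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Three different models vote; the majority class is the ensemble's answer
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("dt", DecisionTreeClassifier()),
        ("nb", GaussianNB()),
    ],
    voting="hard",  # "hard" = simple majority vote over predicted classes
)
ensemble.fit(X_train, y_train)
print("Ensemble accuracy:", ensemble.score(X_test, y_test))
```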

Some of the features of Random Forests are as follows:

  • It is unexcelled in accuracy among current algorithms.
  • It runs efficiently on large databases.
  • It can handle thousands of input variables without variable deletion.
  • It gives estimates of which variables are important in the classification.
  • It generates an internal unbiased estimate of the generalization error as the forest is built (see the sketch after this list).
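
The last point refers to the out-of-bag (OOB) estimate: because each tree is trained on a bootstrap sample, the cases left out of that sample act as a built-in test set. Here is a minimal sketch, assuming scikit-learn; the dataset is illustrative:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)

forest = RandomForestClassifier(n_estimators=100, oob_score=True, random_state=0)
forest.fit(X, y)

# Accuracy estimated on the cases each tree never saw during training
print("OOB accuracy estimate:", forest.oob_score_)

# Estimates of which variables matter most for the classification
print("Feature importances:", forest.feature_importances_)
```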

How a Random Forest Works

Each tree is grown as follows: if the number of cases in the training set is N, sample N cases at random, with replacement, from the original data. This sample is the training set for growing the tree. If there are M input variables, a number m << M is specified such that at each node, m variables are selected at random out of the M and the best split on these m is used to split the node. The value of m is held constant while the forest is grown, and each tree is grown to the largest extent possible, without pruning.
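
To make this concrete, here is a hedged sketch that grows such a forest by hand out of scikit-learn decision trees: each tree gets a bootstrap sample of the N cases, and max_features=m makes the tree consider only m randomly chosen variables at each node. The dataset, the number of trees, and the common choice m = sqrt(M) are all illustrative assumptions:

```python
import numpy as np
from sklearn.datasets import load_wine
from sklearn.tree import DecisionTreeClassifier

X, y = load_wine(return_X_y=True)
N, M = X.shape           # N training cases, M input variables
m = int(np.sqrt(M))      # a common choice for m << M (illustrative)

rng = np.random.default_rng(0)
trees = []
for _ in range(25):      # number of trees is illustrative
    # Sample N cases at random, with replacement (the bagging step)
    idx = rng.integers(0, N, size=N)
    # max_features=m: m variables selected at random at each node;
    # no pruning, so each tree grows to the largest extent possible
    tree = DecisionTreeClassifier(max_features=m)
    tree.fit(X[idx], y[idx])
    trees.append(tree)

# Classify by majority vote over the individual trees
votes = np.stack([t.predict(X) for t in trees])
predictions = np.apply_along_axis(
    lambda col: np.bincount(col).argmax(), axis=0, arr=votes
)
print("Training accuracy of the hand-built forest:", (predictions == y).mean())
```

In practice, scikit-learn's RandomForestClassifier performs exactly this bootstrap-plus-random-feature procedure internally, with a more careful implementation.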

Got a question for us? Please mention it in the comments section and we will get back to you.
