Ensemble Learning

What is Ensemble Learning?

Ensemble machine learning methods use multiple learning algorithms to obtain better predictive performance than could be obtained from any of the constituent algorithms alone. Many popular modern machine learning algorithms are actually ensembles; for example, Random Forest and the Gradient Boosting Machine (GBM) are both ensemble learners. Bagging (e.g., Random Forest) and boosting (e.g., GBM) are ensembling methods that take a collection of weak learners (e.g., decision trees) and combine them into a single, strong learner.
The general principle of an ensemble method is to combine the predictions of several models built with a given learning algorithm in order to improve robustness over a single model. Bagging is an ensemble method for stabilizing unstable estimation or classification schemes, chiefly by averaging models trained on resampled data, while boosting builds models sequentially to reduce the bias of the combined model. Both bagging and boosting can reduce errors by reducing the variance term [1].
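As a minimal sketch of the two approaches, the example below trains a Random Forest (bagging) and a GBM (boosting) with H2O-3's Python API. The synthetic dataset, column names, and parameter values are illustrative assumptions, not part of the original article.

```python
import h2o
import numpy as np
import pandas as pd
from h2o.estimators import H2ORandomForestEstimator, H2OGradientBoostingEstimator

h2o.init()

# Toy binary-classification data (an assumption for illustration);
# any H2OFrame with a categorical response would work the same way.
rng = np.random.default_rng(42)
df = pd.DataFrame(rng.normal(size=(1000, 5)), columns=[f"x{i}" for i in range(5)])
df["y"] = (df["x0"] + df["x1"] > 0).astype(str)
train = h2o.H2OFrame(df)
train["y"] = train["y"].asfactor()  # mark the response as categorical

x = [f"x{i}" for i in range(5)]
y = "y"

# Bagging: Random Forest averages many trees, each built on a resampled
# copy of the training data with random feature selection.
rf = H2ORandomForestEstimator(ntrees=50, seed=1)
rf.train(x=x, y=y, training_frame=train)

# Boosting: GBM builds trees sequentially, each one fit to the errors
# of the ensemble so far.
gbm = H2OGradientBoostingEstimator(ntrees=50, learn_rate=0.1, seed=1)
gbm.train(x=x, y=y, training_frame=train)

print(rf.auc(train=True), gbm.auc(train=True))
```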

H2O's Stacked Ensemble method is a supervised ensemble machine learning algorithm that finds the optimal combination of a collection of prediction algorithms using a process called stacking. Like all supervised models in H2O, Stacked Ensemble supports regression, binary classification, and multiclass classification. For more details, see the Stacked Ensemble section of the H2O-3 documentation.
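The sketch below stacks the two model families from the previous example, reusing the assumed synthetic `train` frame, `x`, and `y`. In H2O, base models for a Stacked Ensemble must be cross-validated with the same fold assignment and must keep their cross-validation predictions, since the metalearner is trained on those predictions.

```python
from h2o.estimators import H2OStackedEnsembleEstimator

# Base models: identical folds ("Modulo" assignment) and retained
# cross-validation predictions are required for stacking.
rf_cv = H2ORandomForestEstimator(
    ntrees=50, nfolds=5, fold_assignment="Modulo",
    keep_cross_validation_predictions=True, seed=1)
rf_cv.train(x=x, y=y, training_frame=train)

gbm_cv = H2OGradientBoostingEstimator(
    ntrees=50, nfolds=5, fold_assignment="Modulo",
    keep_cross_validation_predictions=True, seed=1)
gbm_cv.train(x=x, y=y, training_frame=train)

# The metalearner (a GLM by default) learns the optimal combination
# of the base models' cross-validated predictions.
ensemble = H2OStackedEnsembleEstimator(base_models=[rf_cv, gbm_cv])
ensemble.train(x=x, y=y, training_frame=train)

print(ensemble.auc(train=True))
```

Using the same folds for every base model ensures the metalearner never sees a prediction made on data the base model was trained on, which is what keeps the stacked combination honest.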
