Gradient boosting (short for gradient boosting machines) is a popular supervised machine learning technique for regression and classification problems that aggregates an ensemble of weak individual models into a more accurate final model.
Gradient boosting is distinctive among ensemble methods: rather than training its weak models independently, it builds the ensemble sequentially, with each new model fit to correct the shortcomings of the ensemble so far. Those shortcomings are measured by a loss function, and each new model is fit to the negative gradient of that loss, so training the ensemble amounts to gradient descent in function space. Decision trees are typically the weak learners in gradient boosting, and consequently the technique is sometimes referred to as gradient tree boosting.
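To make the sequential-correction idea concrete, here is a minimal from-scratch sketch for regression with squared-error loss, where the negative gradient of the loss reduces to the ordinary residuals. The toy data, tree depth, learning rate, and number of rounds are illustrative choices, not prescriptions.

```python
# Minimal gradient boosting sketch: squared-error loss, shallow trees
# as weak learners. Each round fits a tree to the current residuals
# (the negative gradient of squared error) and adds a shrunken copy
# of its predictions to the ensemble.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))          # toy data (illustrative)
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=200)

n_rounds, learning_rate = 50, 0.1
prediction = np.full_like(y, y.mean())          # initial model: the mean
trees = []

for _ in range(n_rounds):
    residuals = y - prediction                  # negative gradient of squared error
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residuals)  # weak learner
    prediction += learning_rate * tree.predict(X)                # sequential correction
    trees.append(tree)

def predict(X_new):
    """Sum the initial guess and every tree's shrunken contribution."""
    out = np.full(len(X_new), y.mean())
    for tree in trees:
        out += learning_rate * tree.predict(X_new)
    return out
```

The learning rate shrinks each tree's contribution, which slows training but usually generalizes better than adding each tree's full correction.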
XGBoost is a very popular gradient boosting framework that is fast, adds regularization and other algorithmic refinements that often improve accuracy, and parallelizes tree construction.
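For comparison with the from-scratch sketch above, here is a minimal example using XGBoost's scikit-learn-compatible wrapper. It assumes the xgboost package is installed (pip install xgboost), and the hyperparameter values are illustrative defaults rather than tuned recommendations.

```python
# Minimal XGBoost regression sketch via its scikit-learn wrapper.
# Assumes: pip install xgboost. Hyperparameters are illustrative.
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))          # toy data (illustrative)
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=500)

model = XGBRegressor(
    n_estimators=100,    # number of boosting rounds
    max_depth=3,         # depth of each weak tree
    learning_rate=0.1,   # shrinkage applied to each tree's contribution
    n_jobs=4,            # parallel tree construction
)
model.fit(X, y)
print(model.predict(X[:5]))
```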