Going Beyond A Single Tree
If one highly-sensitive Decision Tree is prone to wild, unpredictable changes, how do we fix the problem?
Ironically, the solution is to introduce even more randomness.
Instead of relying on the prediction of one massive, overfitted tree, what if we trained a large collection of Decision Trees, each on a slightly different, randomized subset of the data? (The individual trees can even be left deep and unpruned; the ensemble will handle the variance for us.) When we need to make a prediction, we just ask all the trees to vote on the answer!
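Here is a minimal sketch of that idea, often called bagging (bootstrap aggregating). It uses scikit-learn's `DecisionTreeClassifier` on a synthetic dataset; the dataset, the number of trees, and the random seeds are all illustrative choices, not prescriptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Toy binary-classification data (a stand-in for any tabular dataset)
X, y = make_classification(n_samples=300, n_features=8, random_state=0)

rng = np.random.default_rng(0)
n_trees = 25
trees = []
for _ in range(n_trees):
    # Bootstrap sample: draw rows with replacement, so each tree
    # sees a slightly different version of the training set
    idx = rng.integers(0, len(X), size=len(X))
    tree = DecisionTreeClassifier(random_state=0)  # deep, unpruned by default
    tree.fit(X[idx], y[idx])
    trees.append(tree)

# Majority vote: stack every tree's predictions and take the most common label
all_preds = np.stack([t.predict(X) for t in trees])   # shape (n_trees, n_samples)
votes = np.round(all_preds.mean(axis=0)).astype(int)  # majority for 0/1 labels
accuracy = (votes == y).mean()
print(f"ensemble training accuracy: {accuracy:.2f}")
```

No single tree's quirks dominate the final answer: a tree that overfits to noise in its bootstrap sample is simply outvoted by the rest.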
Because the errors of individual trees tend to cancel out when the group votes, the ensemble no longer suffers from the high variance or instability of any single tree. This brilliant approach opens the door to arguably the most famous and successful algorithm in classical Machine Learning: