
As you can see from the figure above, increasing the complexity of the model reduces the error due to bias up to the optimum point. Beyond that optimum point, however, further increasing the complexity of the machine learning model increases the variance.
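The behaviour described above can be reproduced with a minimal sketch (assuming NumPy and scikit-learn are available; the sine data and the polynomial degrees are purely illustrative): as the polynomial degree grows, training error keeps falling while validation error first falls and then rises again past the optimum complexity.

```python
# Illustrative sketch: training vs. validation error as model complexity increases.
# Low-degree polynomials underfit (high bias); very high degrees overfit (high variance).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.RandomState(0)
X = np.sort(rng.uniform(0, 1, 200)).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel() + rng.normal(scale=0.3, size=200)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

for degree in [1, 3, 9, 15]:  # increasing model complexity
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    train_err = mean_squared_error(y_train, model.predict(X_train))
    val_err = mean_squared_error(y_val, model.predict(X_val))
    print(f"degree={degree:2d}  train MSE={train_err:.3f}  val MSE={val_err:.3f}")
```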
Trade-off Of Bias And Variance: Since bias and variance are both sources of error in machine learning models, it is vital that a machine learning model has low variance as well as low bias so that it can achieve good performance.
Let us look at some examples. The K-Nearest Neighbors algorithm is a good example of an algorithm with low bias and high variance. This trade-off can easily be shifted by increasing the value of K, which increases the number of neighbours considered and, in turn, increases the bias and reduces the variance. Another example is the Support Vector Machine algorithm, which also has high variance and, correspondingly, low bias. We can shift its trade-off by tuning the C parameter, which controls how many margin violations are tolerated: a softer margin (stronger regularization, i.e. a smaller C in most implementations) increases the bias and decreases the variance. So the trade-off is simple: if we increase the bias, the variance will decrease, and vice versa.
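A minimal sketch of these two examples follows (assuming scikit-learn; the synthetic dataset, the specific K values, and the C values are illustrative only). A small K or a large C tends to fit the training data almost perfectly but generalise worse (low bias, high variance), while a large K or a small C smooths the decision boundary (more bias, less variance).

```python
# Illustrative sketch: shifting the bias-variance trade-off in KNN (via K) and SVM (via C).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for k in [1, 5, 25, 101]:  # larger K -> smoother boundary (more bias, less variance)
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    print(f"KNN  k={k:3d}  train acc={knn.score(X_train, y_train):.2f}  "
          f"test acc={knn.score(X_test, y_test):.2f}")

for C in [0.01, 1, 100]:  # smaller C -> softer margin (more bias, less variance)
    svm = SVC(C=C, kernel="rbf").fit(X_train, y_train)
    print(f"SVM  C={C:6.2f}  train acc={svm.score(X_train, y_train):.2f}  "
          f"test acc={svm.score(X_test, y_test):.2f}")
```

Comparing the gap between training and test accuracy across these settings is a quick practical way to see which side of the trade-off a model currently sits on.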