Top Data Science Interview Questions and Answers (2022) – InterviewBit

Let us first understand the meaning of bias and variance in detail:

Bias: Bias is a kind of error in a machine learning model that arises when the ML algorithm is oversimplified. When such a model is trained, it makes simplifying assumptions so that it can learn the target function more easily. Decision Trees, SVM, and similar algorithms have low bias, whereas linear regression and logistic regression are algorithms with high bias.

Variance: Variance is also a kind of error. It is introduced into an ML model when the ML algorithm is made highly complex. Such a model also learns the noise in the training dataset and therefore performs badly on the test dataset. This can lead to overfitting as well as high sensitivity to the training data.

When the complexity of a model is increased, the error decreases at first. This is caused by the lower bias of the more complex model. But this only continues up to a particular point, called the optimum point. If we keep increasing the complexity of the model beyond this point, it becomes overfitted and suffers from the problem of high variance. We can represent this situation with the help of a graph, as shown below:

As you can see from the figure above, before the optimum point, increasing the complexity of the model reduces the error (bias). However, after the optimum point, increasing the complexity of the machine learning model increases the variance.
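To make this curve concrete, here is a minimal sketch in plain NumPy that estimates bias² and variance empirically for models of increasing complexity. The sine-wave target, the polynomial model family, and all names and constants are illustrative assumptions, not part of the original answer:

import numpy as np

rng = np.random.default_rng(0)

def true_fn(x):
    # The (normally unknown) target function the models try to learn.
    return np.sin(2 * np.pi * x)

x_eval = np.linspace(0.05, 0.95, 50)    # fixed points where error is measured
n_trials, n_train, noise_sd = 200, 30, 0.3

for degree in (1, 3, 5, 9):             # polynomial degree = model complexity
    preds = np.empty((n_trials, x_eval.size))
    for t in range(n_trials):
        # Draw a fresh noisy training set each trial.
        x = rng.uniform(0.0, 1.0, n_train)
        y = true_fn(x) + rng.normal(0.0, noise_sd, n_train)
        coeffs = np.polyfit(x, y, degree)
        preds[t] = np.polyval(coeffs, x_eval)
    mean_pred = preds.mean(axis=0)
    bias_sq = np.mean((mean_pred - true_fn(x_eval)) ** 2)   # squared bias
    variance = np.mean(preds.var(axis=0))                   # variance across trials
    print(f"degree={degree}  bias^2={bias_sq:.4f}  variance={variance:.4f}")

Typical output shows bias² falling and variance rising as the degree grows, so the total error is smallest at an intermediate degree, which is exactly the optimum point described above.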

Trade-off of Bias and Variance: Therefore, since bias and variance are both errors in machine learning models, it is essential that any machine learning model has low variance as well as low bias, so that it can achieve good performance.
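This intuition is captured by the standard bias-variance decomposition of the expected squared error (stated here as textbook background; it is not spelled out in the original answer). For a model f̂ trained to predict y = f(x) + ε, where ε is noise with variance σ²:

Expected squared error at x = Bias[f̂(x)]² + Var[f̂(x)] + σ²

Because the noise term σ² is irreducible, the only way to lower the expected error is to keep both the bias and the variance small.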

Let us see some examples. The K-Nearest Neighbour algorithm is a good example of an algorithm with low bias and high variance. This tradeoff can easily be shifted by increasing the k value, which in turn increases the number of neighbours considered. This results in increasing the bias and reducing the variance. Another example is the Support Vector Machine algorithm. This algorithm also has high variance and, obviously, low bias, and we can reverse the tradeoff by decreasing the value of the parameter C: a smaller C strengthens the regularization, which increases the bias and decreases the variance. So, the tradeoff is simple: if we increase the bias, the variance will decrease, and vice versa.
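Here is a minimal sketch of both examples, assuming scikit-learn is available. The breast-cancer dataset, the parameter grids, and the use of the train/test score gap as a rough proxy for variance are all illustrative choices, not from the original answer:

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_validate
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# KNN: a larger k averages over more neighbours, which smooths the decision
# boundary -- bias goes up, variance (the train/test gap) comes down.
for k in (1, 5, 25, 101):
    cv = cross_validate(KNeighborsClassifier(n_neighbors=k), X, y,
                        cv=5, return_train_score=True)
    print(f"KNN k={k:3d}  train={cv['train_score'].mean():.3f}  "
          f"test={cv['test_score'].mean():.3f}")

# SVM: a smaller C means stronger regularization -- bias goes up, variance
# comes down. Features are standardized first, as SVMs expect scaled inputs.
for C in (0.01, 1.0, 100.0):
    model = make_pipeline(StandardScaler(), SVC(C=C, kernel="rbf"))
    cv = cross_validate(model, X, y, cv=5, return_train_score=True)
    print(f"SVM C={C:6.2f}  train={cv['train_score'].mean():.3f}  "
          f"test={cv['test_score'].mean():.3f}")

Typically, with k = 1 or C = 100 the training score is near perfect but the gap to the test score is widest (low bias, high variance), while with a very large k or a very small C both scores drop together (high bias, low variance).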
