When you train a logistic regression model, it learns the prior probability of the target class from the ratio of positive to negative examples in the training data. If the real-world prior differs from the one in your training data, this can lead to unexpected predictions. Read this post to learn how to correct for this even after the model has been trained!
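One common way to apply this kind of correction is to shift the model's log odds by the difference between the true and training log-odds priors. Here is a minimal sketch of that idea; the function name `correct_prior` and its signature are illustrative, not from the post:

```python
import math

def correct_prior(p_model, train_prior, true_prior):
    """Adjust a model's predicted probability for a shifted class prior.

    Shifts the prediction's log odds by the difference between the
    true prior's log odds and the training prior's log odds, then
    maps back to a probability. (Hypothetical helper for illustration.)
    """
    logit = math.log(p_model / (1 - p_model))
    adjustment = (math.log(true_prior / (1 - true_prior))
                  - math.log(train_prior / (1 - train_prior)))
    return 1 / (1 + math.exp(-(logit + adjustment)))
```

For example, a prediction of 0.5 from a model trained on balanced classes becomes 0.1 once we account for a true prior of 10% positives.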
In this post we’ll explore how we can derive logistic regression from Bayes’ Theorem. Starting with Bayes’ Theorem, we’ll work our way to computing the log odds of our problem and then arrive at the inverse logit function. After reading this post you’ll have a much stronger intuition for how logistic regression works!
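The two functions at the heart of that derivation can be sketched in a few lines: the log odds transform a probability into an unbounded score, and the inverse logit (the logistic or sigmoid function) maps it back.

```python
import math

def log_odds(p):
    # Log odds of a probability p in (0, 1).
    return math.log(p / (1 - p))

def inverse_logit(z):
    # The logistic (sigmoid) function: maps log odds back to a probability.
    return 1 / (1 + math.exp(-z))
```

The two are exact inverses of each other, so `inverse_logit(log_odds(p))` recovers `p`, and log odds of zero correspond to a probability of 0.5.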
Kullback–Leibler divergence is a very useful way to measure the difference between two probability distributions. In this post we'll go over a simple example to help you better grasp this interesting tool from information theory.
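For discrete distributions, KL divergence is a short sum; a minimal sketch (representing each distribution as a list of probabilities over the same outcomes):

```python
import math

def kl_divergence(p, q):
    """D(P || Q) for discrete distributions given as aligned lists.

    Terms where p_i == 0 contribute nothing, so they are skipped.
    """
    return sum(p_i * math.log(p_i / q_i)
               for p_i, q_i in zip(p, q) if p_i > 0)
```

Note that it is zero only when the distributions match, and it is not symmetric: `kl_divergence(p, q)` generally differs from `kl_divergence(q, p)`.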
Your friends probably don't have a food allergy, but how sure are you?
How likely is it that your friends really have food allergies? More importantly, should you believe them? In this post we look at using Bayes' theorem to model this everyday question.
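The shape of that calculation is ordinary Bayes' theorem. A minimal sketch, with entirely hypothetical numbers (the post's actual figures may differ): suppose 5% of people have a true food allergy, 90% of those with one report it, and 10% of those without one also claim one.

```python
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Bayes' theorem: P(H | E) from a prior and the two likelihoods."""
    numerator = p_evidence_given_h * prior
    return numerator / (numerator + p_evidence_given_not_h * (1 - prior))

# Hypothetical numbers for illustration only.
belief = posterior(prior=0.05,
                   p_evidence_given_h=0.9,
                   p_evidence_given_not_h=0.1)
```

Under those made-up numbers the posterior is roughly 0.32: a claimed allergy should raise your belief well above the base rate, but not to certainty.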