Matthew Francis, Ars Technica, "Bayes' theorem: Its triumphs and discontents," here.
Bayes’ theorem in essence states that the probability of a given hypothesis depends both on the current data and prior knowledge. In the case of the 2012 United States election, Silver used successive polls from various sources as priors to refine his probability estimates. (In other words, saying he “predicted” the outcome of the election is slightly misleading: he calculated which candidate was most likely to win in each state based on the polling data.) In other cases, priors could be the outcome of earlier experiments or even educated assumptions drawn from experience. The wise statistician or scientist constructs priors that are informative, but that isn’t always easy to do.
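The update described above can be sketched in a few lines. This is a minimal illustration of Bayes' theorem with made-up numbers, not Silver's actual model; the prior, the likelihoods, and the poll scenario are all assumptions for the sake of the example.

```python
def posterior(prior, p_data_given_h, p_data_given_not_h):
    """Bayes' theorem: P(H|D) = P(D|H) * P(H) / P(D)."""
    # P(D) expanded by the law of total probability over H and not-H.
    evidence = p_data_given_h * prior + p_data_given_not_h * (1 - prior)
    return p_data_given_h * prior / evidence

# Hypothetical numbers: prior belief that candidate A carries a state is 0.5;
# a poll showing A ahead appears with probability 0.8 if A will in fact win,
# and 0.3 if A will lose.
p = posterior(0.5, 0.8, 0.3)
print(round(p, 3))  # 0.727
```

Seeing the favorable poll moves the estimate from 0.5 to roughly 0.73; each successive poll would be fed in the same way, with the previous posterior serving as the new prior.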
Noah Smith, Noahpinion, "What is derp? The answer is technical," here.
Bayesian probability basically says that “probability” is, to some degree, subjective. It’s your best guess for how likely something is. But to be Bayesian, your “best guess” must take the observable evidence into account. Updating your beliefs by looking at the outside world is called “Bayesian inference.” Your initial guess about the probability is called your “prior belief,” or just your “prior” for short. Your final guess, after you look at the evidence, is called your “posterior.” The observable evidence is what changes your prior into your posterior.
How much does the evidence change your belief? That depends on three things. It depends on A) how different the evidence is from your prior, B) how strong the evidence is, and C) how strong your prior is.
What does it mean for a prior to be “strong”? It means you really, really believe something to be true. If you start off with a very strong prior, even solid evidence to the contrary won’t change your mind. In other words, your posterior will come directly from your prior. (And where do priors come from? On this, Bayesian theory is silent. Let’s assume they come directly from your…um…posterior.)
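The interplay of prior strength and evidence strength can be made concrete with a Beta-Binomial model (a standard conjugate setup, chosen here for illustration; the post itself names no particular model). A Beta(a, b) prior updated with k successes in n trials gives a Beta(a + k, b + n − k) posterior, so a "strong" prior is just one with large a and b.

```python
def update(a, b, k, n):
    """Conjugate update: Beta(a, b) prior + k successes in n trials."""
    return a + k, b + (n - k)

def mean(a, b):
    """Posterior mean of a Beta(a, b) distribution."""
    return a / (a + b)

evidence = (9, 10)  # 9 heads in 10 flips: evidence the coin favors heads

# Weak prior: Beta(1, 1), i.e. uniform. The posterior follows the data.
wa, wb = update(1, 1, *evidence)
print(round(mean(wa, wb), 3))  # 0.833

# Strong prior: Beta(100, 100), i.e. you really, really believe the coin
# is fair. The same evidence barely budges the posterior.
sa, sb = update(100, 100, *evidence)
print(round(mean(sa, sb), 3))  # 0.519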