P(purple/giraffe)

There is a disconnect when we think about probability and thought. The claim that we not only calculate the probabilities of events, but do so many times a day subconsciously, can seem quite suspect. Still, the idea makes sense. If we see that the ground is wet, we immediately infer that something made it wet. We then use information from our environment to guess the cause. If the sun is shining, there is a strong chance that it did not rain. If we see a sprinkler close by, there is a strong chance that it got the ground wet. Even though we do not formally think P(sprinkler/wet) = (P(sprinkler)*P(wet/sprinkler))/P(wet), this formula yields similar results. It seems that we do something close enough that this model accurately reflects (or at least describes) some of our thought process. This is evident in today’s expert systems, some machine learning algorithms, and computer vision. What interests me more, however, is how we arrive at these prior probabilities, and how often we change them.
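The sprinkler inference above can be sketched numerically. All of the probabilities below are hypothetical numbers chosen only for illustration; the point is the shape of the calculation, not the values.

```python
# A minimal sketch of the sprinkler inference using Bayes' rule.
# Every prior here is an invented, illustrative number.

p_sprinkler = 0.3            # P(sprinkler ran), before seeing anything
p_wet_given_sprinkler = 0.9  # P(ground is wet | sprinkler ran)
p_wet = 0.4                  # P(ground is wet), overall

# Bayes' rule: P(sprinkler / wet) = P(sprinkler) * P(wet / sprinkler) / P(wet)
p_sprinkler_given_wet = p_sprinkler * p_wet_given_sprinkler / p_wet

print(round(p_sprinkler_given_wet, 3))  # 0.675
```

Seeing the wet ground more than doubles our belief that the sprinkler ran, from 0.3 to 0.675, without us ever consciously writing down the formula.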

The calculation of some probabilities seems straightforward. "If it is raining, things will get wet" would imply that P(wet/rain) = 1. Yet if something is blocking the rain (like a roof or a tarp) this is not true. The condition of wetness remains the same, but either P(wet) is dynamically recalculated, or we do not calculate P(rain/wet) but instead P(rain/wet, tarp), conditioning on the extra evidence. It seems to me that something other than just Bayes' theorem is being calculated. Do we calculate heuristics? Do we repeatedly nest Bayes' rule? And how often must we recalculate P(some event)? If we see a purple giraffe after seeing 100 yellow ones, do we change P(purple/giraffe) to 1/2? Or do we treat this as a rare event and still conclude that most (99%) of giraffes are yellow? How many anomalous events would we have to see to change this? How do we decide how much to change this number by? If it directly affects our well-being, do we change our probabilities more?
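One standard answer to "how much should one anomaly move the estimate?" is Beta-Binomial updating, where every sighting nudges the estimate rather than replacing it. This is only one possible model of the updating process, and the uniform Beta(1, 1) prior is an assumption; a stronger prior would move even less.

```python
# A sketch of Beta-Binomial updating for P(purple / giraffe).
# Assumption: we start from a uniform Beta(1, 1) prior, i.e. no
# opinion at all about giraffe colors before the first sighting.

alpha, beta = 1, 1       # Beta(1, 1): uniform prior over P(purple / giraffe)
yellow, purple = 100, 1  # observed sightings: 100 yellow, then 1 purple

alpha += purple          # each purple sighting increments alpha
beta += yellow           # each yellow sighting increments beta

# Posterior mean of P(purple / giraffe) under this model
p_purple = alpha / (alpha + beta)
print(round(p_purple, 4))  # 0.0194
```

Under this model a single purple giraffe moves the estimate to about 2%, nowhere near 1/2, which matches the intuition that one anomaly should not overturn a hundred consistent observations.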