Bayesian Data Analysis

General

Typical Statistical Modelling Questions

  • What is the average difference between treatment groups?
  • How strong is the association between treatment and outcome?
  • Does the effect of a treatment depend on a covariate?
  • How much variation is there between groups?

Compare vs. Frequentist

  • Naive Bayes youtube vid
  • Pros:
    • Easy and fast to predict the class of a test dataset
    • When the independence assumption holds, a naive Bayes classifier performs well compared to other models
    • Performs well with categorical input variables compared to numerical variables
  • Cons
    • Zero frequency (solved by smoothing techniques like Laplace estimation, i.e., adding 1 to each count so that no probability is exactly zero; see the sketch after this list)
    • Bad estimator - its probability estimates should not be taken too seriously
    • Assumption of independent predictors, which is almost never the case.
  • Applications
    • Credit scoring
    • Medical
    • Real time prediction
    • Multi-class predictions
    • Text classification, spam filtering, sentiment analysis
    • recommendation filtering
  • Gaussian naive bayes: assume continuous data has Gaussian distribution
  • The multinomial naive Bayes classifier becomes a linear classifier when expressed in log-space
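
A minimal sketch of the points above, assuming a tiny made-up spam/ham corpus (the documents, labels, and word counts are invented for illustration): it uses add-one (Laplace) smoothing for the zero-frequency problem and scores each class in log-space, where the multinomial model is linear in the word counts.

    # Toy multinomial naive Bayes with Laplace smoothing (hypothetical data)
    import math
    from collections import Counter

    docs = [("spam", "win money now"), ("spam", "win prize money"),
            ("ham", "meeting at noon"), ("ham", "lunch meeting now")]

    vocab = sorted({w for _, text in docs for w in text.split()})
    classes = sorted({label for label, _ in docs})

    priors = {c: sum(1 for label, _ in docs if label == c) / len(docs) for c in classes}
    counts = {c: Counter(w for label, text in docs if label == c for w in text.split())
              for c in classes}

    def log_likelihoods(c):
        total = sum(counts[c].values())
        # add-one (Laplace) smoothing: no word ever gets probability zero
        return {w: math.log((counts[c][w] + 1) / (total + len(vocab))) for w in vocab}

    loglik = {c: log_likelihoods(c) for c in classes}

    def predict(text):
        x = Counter(w for w in text.split() if w in vocab)
        # score: log P[C] plus, for each word, count * log P[word | C]; linear in the counts
        scores = {c: math.log(priors[c]) + sum(n * loglik[c][w] for w, n in x.items())
                  for c in classes}
        return max(scores, key=scores.get)  # MAP decision rule

    print(predict("win money"))      # -> spam
    print(predict("lunch at noon"))  # -> ham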

Motivation

  • Reasoning under uncertainty
  • Bayesian model makes the best use of the information in the data, assuming the small world is an accurate description of the real world.
  • Model is always an incomplete representation of the real world.
  • The small world of the model itself versus the large world in which we want the model to operate.
  • Small world - self contained and logical. No pure surprises.
  • Performance of model in large world has to be demonstrated rather than logically deduced.
    • simulating new data from the model is a useful part of model criticism.
  • In contrast, animals use heuristics that take adaptive shortcuts and may outperform rigorous Bayesian analysis once the costs of information gathering and processing are taken into account. Once you already know what information is useful, being fully Bayesian is a waste.

Description

  • Bayesian data analysis - producing a story for how the data (observations) came to be.
  • Bayesian inference = counting and comparing the ways things can happen/possibilities.
  • In order to make good inference on what actually happened, it helps to consider everything that could have happened.
  • A quantitative ranking of hypotheses. Counting paths is a measure of relative plausibility.
  • Prior information: instead of building up a possibility tree from scratch given a new observation, it is mathematically equivalent to multiplying the prior counts by the new count for each conjecture IF the new observation is logically independent of the previous observations (see the counting sketch after this list).
    • Multiplication is just a shortcut to enumerating and counting up all the paths through the garden of possibilities
    • A.k.A., joint probability distribution
  • Principle of indifference - when there's no reason to say that one conjecture is more reasonable than another
  • The probability of rain and cold both happening on a given day is equal to (probability of rain when it's cold) times (probability that it's cold): P[rain, cold] = P[rain | cold] * P[cold]
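
A minimal counting sketch of the ideas above, assuming a toy bag of four marbles that are either blue or white (the bag, the draws, and the counts are invented for illustration): each conjecture about the bag's composition is ranked by the number of paths through the garden of possibilities that could produce the observed draws, and updating with a new independent observation is just multiplying the prior counts by the new counts.

    # Count the ways each conjecture could produce the observed data (toy example)
    observed = ["blue", "white", "blue"]  # draws with replacement
    conjectures = {blue: {"blue": blue, "white": 4 - blue} for blue in range(5)}

    def ways(conjecture, data):
        # multiplication is a shortcut for enumerating every path
        n = 1
        for draw in data:
            n *= conjecture[draw]
        return n

    counts = {blue: ways(c, observed) for blue, c in conjectures.items()}
    total = sum(counts.values())
    for blue, n in counts.items():
        print(f"{blue} blue marbles: {n:2d} ways, relative plausibility {n / total:.2f}")

    # New independent observation (another blue draw): multiply prior counts by new counts
    updated = {blue: counts[blue] * conjectures[blue]["blue"] for blue in counts}
    print(updated)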

Definitions

  • Parameter - Represents different conjecture. A way of indexing the possible explanations of the data. A Bayesian machine's job is to describe what the data tells us about an unknown parameter.
  • Likelihood - the relative number of ways that a parameter of a given value can produce the observed data.
  • Prior probability - prior plausibility. Engineering assumptions chosen to help the machine learn.
    • regularizing prior, weakly informative prior: Flat prior is common but hardly the best prior. Priors that gently nudge the machine usually improve inference.
    • Penalized likelihood - constrain parameters to reasonable ranges. Values of p=0 and p=1 are highly implausible.
    • Subjective Bayesian - used in philosophy and economics, rarely used in the natural and social sciences.
    • Alter the prior to see how sensitive inference is to that assumption of the prior.
  • Posterior probability - updated plausibility
  • Posterior distribution - the relative plausibility of different parameter estimates, conditional on the data (see the grid-approximation sketch after this list)
  • Randomization - processing something so we know almost nothing about its arrangement. A truly randomized deck of cards will have an ordering that has high information entropy.
  • A story for how your observed data came to be may be descriptive or causal. Sufficient for specifying an algorithm for simulating new data.
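
A grid-approximation sketch of the definitions above (the data, 6 successes in 9 trials, and the binomial likelihood are assumed for illustration): the posterior is the likelihood times the prior, standardized by the average likelihood so that it sums to 1 over the grid of candidate parameter values.

    # Grid approximation: posterior = likelihood * prior / average likelihood (toy data)
    from math import comb

    successes, trials = 6, 9
    grid = [i / 100 for i in range(101)]  # candidate values of the parameter p

    def likelihood(p):
        # relative number of ways a parameter value p can produce the observed data
        return comb(trials, successes) * p**successes * (1 - p)**(trials - successes)

    prior = [1.0 for _ in grid]  # flat prior (principle of indifference)
    # a regularizing prior that down-weights p near 0 and 1 would usually improve inference

    unstandardized = [likelihood(p) * pr for p, pr in zip(grid, prior)]
    average_likelihood = sum(unstandardized)  # standardizes the posterior over the grid
    posterior = [u / average_likelihood for u in unstandardized]

    mode = max(zip(grid, posterior), key=lambda t: t[1])[0]
    print(f"posterior mode near p = {mode:.2f}")  # about 0.67 for 6 successes in 9 trials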

Math

  • Average likelihood of the data - averaged over the prior. Its job is to standardize the posterior so that it sums (integrates) to 1: posterior = (likelihood * prior) / (average likelihood).
  • In practice there is only interest in the numerator of the fraction P[C | x1, ..., xn] = P[C] * P[x1, ..., xn | C] / P[x1, ..., xn], because the denominator does not depend on C and the feature values are given, so the denominator is effectively constant.
  • The numerator is equivalent to the joint probability model P[C, x1, ..., xn]
  • If we assume each feature is conditionally independent of every other feature given the class, then the joint model can be expressed as P[C, x1, ..., xn] = P[C] * P[x1 | C] * P[x2 | C] * ... * P[xn | C]

  • Classifier combines the probability model with a decision rule, e.g., maximum a posteriori (MAP): pick the class C that maximizes the numerator above
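
A tiny numeric sketch of the last two points (the prior and likelihood values are invented): because the denominator, the average likelihood of the data, is the same for every class, the maximum-a-posteriori decision can be made from the numerator P[C] * P[x | C] alone.

    # MAP decision from the numerator only (assumed numbers)
    priors = {"spam": 0.4, "ham": 0.6}          # P[C]
    likelihoods = {"spam": 0.05, "ham": 0.002}  # P[x | C] for one observed x

    numerators = {c: priors[c] * likelihoods[c] for c in priors}
    evidence = sum(numerators.values())         # P[x], constant across classes
    posteriors = {c: numerators[c] / evidence for c in priors}

    print(max(numerators, key=numerators.get))  # MAP class from the numerators alone
    print(max(posteriors, key=posteriors.get))  # same class after normalizing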

Conditional probability

  • What is the probability that a given observation D belongs to a given class C?
  • "The probability of A under the condition B"
  • There need not be a causal relationship
  • Compare with UNconditional probability
  • If P[A | B] = P[A], then the events are independent: knowledge about either event gives no information about the other. Otherwise, the events are dependent and P[A | B] ≠ P[A].
  • Don't falsely equate P[A | B] and P[B | A]
  • Defined as the quotient of the joint probability of events A and B and the probability of B: P[A | B] = P[A, B] / P[B], where the numerator is the probability that both events A and B occur (see the numeric check after this list).
  • Joint probability: P[A, B] = P[A | B] * P[B]
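
A small numeric check of the definitions above, assuming an invented joint distribution for the rain/cold example: conditioning divides the joint probability by the probability of the conditioning event, and P[A | B] is generally not equal to P[B | A].

    # Conditional probability from a toy joint distribution (assumed numbers)
    joint = {("rain", "cold"): 0.20, ("rain", "warm"): 0.10,
             ("dry", "cold"): 0.15, ("dry", "warm"): 0.55}

    p_cold = sum(p for (w, t), p in joint.items() if t == "cold")  # P[cold] = 0.35
    p_rain = sum(p for (w, t), p in joint.items() if w == "rain")  # P[rain] = 0.30

    p_rain_given_cold = joint[("rain", "cold")] / p_cold  # P[rain | cold] = P[rain, cold] / P[cold]
    p_cold_given_rain = joint[("rain", "cold")] / p_rain  # P[cold | rain] = P[rain, cold] / P[rain]

    print(round(p_rain_given_cold, 3), round(p_cold_given_rain, 3))  # 0.571 vs 0.667, not equal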

Bayesian Network

  • A Bayesian network is a way to reduce the size of the representation, a "succinct way" of representing a distribution
  • The naive alternative is to store the probability distribution explicitly in a table
  • Example: x1, ..., x10 are booleans
  • The size of the table for the joint distribution P[x1, ..., x10] is 2^n entries (here 2^10 = 1024)
  • The joint pdf can be rewritten with the chain rule: P[x1, x2, ..., xn] = P[x1 | x2, ..., xn] * P[x2, ..., xn]
  • = P[x1 | x2, ..., xn] * P[x2 | x3, ..., xn] * ... * P[xn-1 | xn] * P[xn]
  • P[xi | xi+1, ..., xn] = P[xi] if xi is totally independent of the others
  • Sometimes a variable is conditionally independent, i.e., dependent only on a subset of the other variables
  • The variable on which P[xi] depends "subsumes" the other variables
  • Belief network - the order of the variables matters when setting up the dependencies in a belief network.
  • Count the parents of each node to figure out the size of its conditional probability table (see the sketch after this list)
  • If an improper ordering is used, the result is still a valid representation of the joint probability function, but it may require conditional probability tables that aren't natural and are difficult to obtain experimentally; it can also inflate the conditional tables, so the size of the table representation is large compared to other orderings.
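
A sketch comparing representation sizes, assuming the classic burglary/earthquake alarm structure as an example network (the variable names and edges are illustrative, not from these notes): the full joint table over n boolean variables has 2^n entries, while each node in the network only needs a conditional probability table with one row per joint assignment of its parents.

    # Size of full joint table vs. per-node conditional probability tables (assumed structure)
    parents = {
        "Burglary": [], "Earthquake": [],
        "Alarm": ["Burglary", "Earthquake"],
        "JohnCalls": ["Alarm"], "MaryCalls": ["Alarm"],
    }

    n = len(parents)
    full_joint_entries = 2 ** n                                      # 32 for 5 booleans
    cpt_rows = {node: 2 ** len(ps) for node, ps in parents.items()}  # rows per node's CPT

    print("full joint table entries:", full_joint_entries)
    print("CPT rows per node:", cpt_rows)             # Alarm has 2 parents -> 4 rows
    print("total CPT rows:", sum(cpt_rows.values()))  # 10 rows in total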

Incremental Network Construction

  1. Choose the relevant set of variables X that describe the domain
  2. Choose an ordering for the variables (very important step)
  3. While there are variables left:
    1. Dequeue a variable X off the queue and add a node for it
    2. Set Parents(X) to some minimal set of existing nodes such that the conditional independence property is satisfied
    3. Define the conditional probability table for X (see the sketch after this list)
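
A skeleton of the loop above. The variable names, the ordering, and the minimal_parents helper are hypothetical: in practice the minimal parent set comes from domain knowledge or conditional-independence judgments, which this stub only fakes with a lookup table.

    # Incremental network construction sketch (hypothetical variables and oracle)
    from collections import deque

    def minimal_parents(x, existing):
        # stand-in for step 3.2: the smallest subset of existing nodes such that
        # x is conditionally independent of the rest given that subset
        known = {"Alarm": {"Burglary", "Earthquake"},
                 "JohnCalls": {"Alarm"}, "MaryCalls": {"Alarm"}}
        return known.get(x, set()) & set(existing)

    ordering = ["Burglary", "Earthquake", "Alarm", "JohnCalls", "MaryCalls"]  # step 2 (order matters)
    network = {}                 # node -> set of parents
    queue = deque(ordering)
    while queue:                 # step 3
        x = queue.popleft()      # 3.1: dequeue variable and add its node
        network[x] = minimal_parents(x, network.keys())  # 3.2: minimal parent set
        # 3.3: define the conditional probability table for x here,
        # one distribution over x per joint assignment of its parents

    print(network)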

Inferences using belief networks

  • diagnostic inferences (from effects to causes, e.g., given symptoms, what is the probability of the disease?)
  • causal inferences (from causes to effects, e.g., given the disease, what is the probability of a symptom?) (see the sketch after this list)
  • intercausal inferences
  • mixed inferences
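
A toy sketch of the first two inference types, assuming a two-node network disease -> symptom with invented probabilities: the causal direction P[symptom | disease] is stored directly in the conditional probability table, while the diagnostic direction P[disease | symptom] is recovered with Bayes' rule.

    # Diagnostic vs. causal inference in a two-node belief network (assumed numbers)
    p_disease = 0.01                  # prior P[disease]
    p_symptom_given_disease = 0.90    # causal direction, read straight from the CPT
    p_symptom_given_healthy = 0.05

    # diagnostic direction: from the observed effect (symptom) back to the cause (disease)
    p_symptom = (p_symptom_given_disease * p_disease
                 + p_symptom_given_healthy * (1 - p_disease))
    p_disease_given_symptom = p_symptom_given_disease * p_disease / p_symptom

    print(round(p_disease_given_symptom, 3))  # about 0.154 despite the 0.90 causal link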