Qualitative and quantitative priors
Expert domain knowledge is usually qualitative, not quantitative, so eliciting probability numbers from experts is very difficult. One of the core goals of the KI-Learn project is to make use of qualitative knowledge, and we attempt this with a language whose qualitative statements:
- are easy and natural to write by domain experts
- have well-defined semantics for probability distributions which correspond to experts' intuitions
So here is my question: is it ever possible to specify purely qualitative domain knowledge? I suppose the answer is: only if you assume the knowledge is true with probability 1 (which, of course, simply makes the quantitative part implicit). This is nasty, though. Nobody wants to state that something is true with probability 1, but nobody wants to specify probabilities either. Is there any alternative to picking the lesser of these two evils? It seems the answer is no...?
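To make the two evils concrete, here is a toy sketch (the statement "p > 0.5" is made up, standing in for some piece of qualitative knowledge): the probability-1 reading is just conditioning, while the quantitative reading forces you to pick an explicit trust number.

```python
import numpy as np

rng = np.random.default_rng(0)

# A parameter we want a prior over: a probability p, starting flat.
p = rng.uniform(0, 1, size=100_000)

# Qualitative statement from the expert: "p is large" (say, p > 0.5).

# Evil #1 -- treat it as true with probability 1: just condition on it.
hard = p[p > 0.5]

# Evil #2 -- treat it as evidence with an explicit trust number (made up).
trust = 0.9
weights = np.where(p > 0.5, trust, 1 - trust)
weights /= weights.sum()
soft = rng.choice(p, size=len(hard), p=weights)

# The soft posterior keeps some mass below 0.5; the hard one keeps none.
print(hard.mean(), soft.mean())
```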
2 Comments:
A lesser evil could be to view experts' true prior distributions as the unknown, and their qualitative statements as noisy observations of those priors.
You'd need to define 1) a likelihood, i.e., encode things like "the expert believes in distribution X; what's the probability they'll call it monotonic?", and 2) a prior over possible expert priors. Then you can use a Bayesian approach to convert qualitative knowledge into a quantitative prior.
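A minimal sketch of that recipe, with a few hand-picked Beta distributions standing in for candidate expert priors and made-up likelihood numbers:

```python
import numpy as np
from scipy.stats import beta

# A few hand-picked candidate expert priors over a parameter theta in [0, 1].
candidates = [beta(2, 8), beta(5, 5), beta(8, 2)]
prior_over_priors = np.array([1 / 3, 1 / 3, 1 / 3])

# Likelihood of the qualitative statement "theta is probably above 0.5"
# given that the expert actually holds a particular candidate prior.
# The 0.1 / 0.8 constants are made up: the expert is noisy, never certain.
def p_statement(dist):
    mass_above_half = 1 - dist.cdf(0.5)
    return 0.1 + 0.8 * mass_above_half

likelihood = np.array([p_statement(d) for d in candidates])

# Bayes rule: a quantitative posterior over which prior the expert holds.
posterior = prior_over_priors * likelihood
posterior /= posterior.sum()
print(posterior)  # most weight on the candidate skewed toward high theta
```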
Oakley and O'Hagan do something like that in "Uncertainty in Prior Elicitations: a Nonparametric Approach". For the prior over possible expert priors they use a Gaussian process, which encodes their belief that the expert's prior is reasonably smooth. For the likelihood function they just use normals (i.e., experts provide numerical estimates of probability, which are normally distributed around their true beliefs). Obviously this likelihood function only works for statements of the form "I believe p1 = 0.6".
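A rough illustration of that likelihood (substituting a flat grid prior for their Gaussian process, purely to keep the sketch short; the 0.05 noise scale is made up):

```python
import numpy as np
from scipy.stats import norm

# Grid over the expert's true belief about p1 (a flat grid prior here,
# standing in for their Gaussian-process prior just to keep this short).
grid = np.linspace(0.01, 0.99, 99)
prior = np.ones_like(grid) / len(grid)

# The expert states "I believe p1 = 0.6"; model that as a noisy normal
# observation of their true belief.
stated, noise = 0.6, 0.05
likelihood = norm.pdf(stated, loc=grid, scale=noise)

posterior = prior * likelihood
posterior /= posterior.sum()
print(grid[np.argmax(posterior)])  # concentrates near the stated 0.6
```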
I think an important question is how to evaluate these techniques objectively. Real-life testing on experts would be the most relevant, but might be too expensive. Could one idealize the situation somehow and come up with reasonable artificial datasets on which to test this kind of prior elicitation?
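One possible idealization, continuing the toy grid model above (the "procedure under test" here is a stand-in, not any actual published method):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Synthetic expert: a known "true" prior (here Beta(3, 7)), whose mean
# the expert reports several times with normal noise.
true_a, true_b = 3.0, 7.0
true_mean = true_a / (true_a + true_b)
reports = true_mean + rng.normal(0, 0.05, size=10)

# Procedure under test: the grid posterior from the previous sketch
# (a real evaluation would plug in the actual elicitation method).
grid = np.linspace(0.01, 0.99, 99)
post = norm.pdf(reports[:, None], loc=grid, scale=0.05).prod(axis=0)
post /= post.sum()
recovered_mean = (grid * post).sum()

# Error metric: how far the recovered belief is from the truth.
print(abs(recovered_mean - true_mean))
```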
By Yaroslav Bulatov, at 2:03 AM
That is a separate issue, isn't it? The same argument applies to the analyst's prior: it must either be quantitatively constrained, or qualitatively constrained by some statement that holds with probability 1.
On the other hand, it's the same issue.
Oakley and O'Hagan argue that the model is less sensitive to the analyst's prior. What I was suggesting is this: since avoiding quantitative priors forces us to state our qualitative priors with probability 1, we want to find families of priors we feel less uncomfortable about asserting with certainty. Maybe that means specifying analysts' priors instead of experts' priors...
On the subject of elicitation: my guess is it's a mess. Humans are so contradictory and illogical that I think you'd have to do some very serious empirical and analytical studies to get any sort of reasonable model of how experts estimate and mis-estimate parameters.
By E-Rock, at 8:54 PM