Constraint Satisfaction with Uncertainty


Question:

I'm trying to solve a problem in which the satisfaction of constraints cannot always be verified: each constraint holds only with some probability. I can find lots of papers on flexible constraint satisfaction, but that's not quite what I want. Here's an example:

P(Jim likes Cheese) = 0.8
P(Joe likes Cheese) = 0.5
P(Sam likes Cheese) = 0.2
P(Jim and Sam are friends) = 0.9
P(Jim and Joe are friends) = 0.5
P(Joe and Sam are friends) = 0.7

Charlie is talking about two cheese-liking friends. Who is he most likely talking about?
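
For an instance this small, one quick sanity check is to assume (perhaps too strongly) that all six events are independent and score each pair by the product of its three probabilities. A minimal Python sketch:

from itertools import combinations

# Probabilities from the example above.
likes = {"Jim": 0.8, "Joe": 0.5, "Sam": 0.2}
friends = {frozenset(["Jim", "Sam"]): 0.9,
           frozenset(["Jim", "Joe"]): 0.5,
           frozenset(["Joe", "Sam"]): 0.7}

# Score each unordered pair by P(both like cheese) * P(they are friends),
# assuming independence of all the events.
scores = {}
for a, b in combinations(likes, 2):
    scores[(a, b)] = likes[a] * likes[b] * friends[frozenset((a, b))]

for pair, p in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(pair, round(p, 3))
# ('Jim', 'Joe') 0.2
# ('Jim', 'Sam') 0.144
# ('Joe', 'Sam') 0.07

Under that assumption Jim and Joe come out on top, but I'd like a principled framework rather than ad hoc multiplication.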

I'm currently viewing this as a constraint satisfaction problem:

[likes cheese]   [likes cheese]
 |                           |
 | /-------[alldiff]-------\ |
 |/                         \|
[X]--------[friends]--------[Y]

  ?            ?             ?
  |            |             |
(Sam)        (Joe)         (Jim)

Are there existing ways for dealing with this type of CSP?

Is a CSP even the right way to frame the problem?

Answer 1:

For a propositional model (where each variable has a distinct name), you should have a look at probabilistic graphical models, in particular Markov networks. They are very closely related to SAT and CSP, since they are essentially a generalization of them, and inference still falls into the same counting complexity class, #P.
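
As a minimal sketch of that correspondence (plain Python, treating the stated numbers as independent factor values, which is an assumption): hard CSP constraints such as alldiff become 0/1 potentials, the probabilistic statements become graded potentials, and the query is the normalized product of factors.

from itertools import product

likes = {"Jim": 0.8, "Joe": 0.5, "Sam": 0.2}
friends = {frozenset(["Jim", "Sam"]): 0.9,
           frozenset(["Jim", "Joe"]): 0.5,
           frozenset(["Joe", "Sam"]): 0.7}

def potential(x, y):
    if x == y:                 # the alldiff constraint as a hard 0/1 factor
        return 0.0
    return likes[x] * likes[y] * friends[frozenset((x, y))]

# Unnormalized product of factors over all assignments to (X, Y).
weights = {(x, y): potential(x, y) for x, y in product(likes, repeat=2)}
z = sum(weights.values())      # partition function
posterior = {xy: w / z for xy, w in weights.items()}

best = max(posterior, key=posterior.get)
print(best, round(posterior[best], 3))   # ('Jim', 'Joe'), up to ordering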

If you are interested in a concise, first-order representation of these models, you should look into statistical relational learning, also known as first-order probabilistic models. Here the model is expressed in a "lifted" form, e.g. as (possibly probabilistic) constraints of the following shape, with variables ranging over some domain of objects:

on(?x,?y) => largerThan(?y,?x)

Inference with these models that does not rely on generating the ground model is studied in the field of lifted probabilistic inference.
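
As a loose illustration (a hypothetical representation, not any particular SRL system's syntax), grounding that lifted rule over a small domain looks as follows; lifted inference is precisely about answering queries without materializing this blow-up.

from itertools import permutations

domain = ["a", "b", "c"]   # hypothetical object domain

# One ground formula per binding of the rule's variables.
ground_formulas = [f"on({x},{y}) => largerThan({y},{x})"
                   for x, y in permutations(domain, 2)]

for formula in ground_formulas:
    print(formula)
# 6 ground formulas for 3 objects, n*(n-1) in general.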



Answer 2:

This looks much more like statistical relational learning than constraint satisfaction. See in particular Probabilistic logic networks.



Answer 3:

If you are interested in the unification aspects of probabilistic inference, then you need to look, as pointed out by Special Touch, at statistical relational models. The most prominent formalism in that area is Markov logic networks (http://en.wikipedia.org/wiki/Markov_logic_network). The original paper even has a "friends and smokers" example very close to yours.
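
To make the mechanics concrete, here is a minimal ground-MLN sketch in plain Python (not Alchemy syntax). The modeling choice is an assumption: each probabilistic fact becomes an independent weighted unit clause with weight log(p / (1 - p)), which reproduces the stated marginals; real MLNs become interesting once formulas couple several atoms.

from itertools import product, combinations
from math import log, exp

people = ["Jim", "Joe", "Sam"]
prob = {("Likes", "Jim"): 0.8, ("Likes", "Joe"): 0.5, ("Likes", "Sam"): 0.2,
        ("Friends", frozenset(["Jim", "Sam"])): 0.9,
        ("Friends", frozenset(["Jim", "Joe"])): 0.5,
        ("Friends", frozenset(["Joe", "Sam"])): 0.7}
atoms = list(prob)
weight = {a: log(p / (1 - p)) for a, p in prob.items()}

# Enumerate all 2^6 possible worlds; a world's weight is
# exp(sum of the weights of the formulas it satisfies).
worlds = []
for bits in product([False, True], repeat=len(atoms)):
    world = dict(zip(atoms, bits))
    worlds.append((world, exp(sum(weight[a] for a in atoms if world[a]))))
z = sum(w for _, w in worlds)  # partition function

def query(a, b):
    """P(Likes(a) and Likes(b) and Friends(a, b)) under the MLN."""
    need = [("Likes", a), ("Likes", b), ("Friends", frozenset((a, b)))]
    return sum(w for world, w in worlds if all(world[q] for q in need)) / z

for a, b in combinations(people, 2):
    print(a, b, round(query(a, b), 3))
# Jim Joe 0.2   Jim Sam 0.144   Joe Sam 0.07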

One way of solving MLNs and other probabilistic relational models is lifted probabilistic inference, which explicitly involves issues such as unification. Here is the link to a tutorial: https://www.biostat.wisc.edu/~natarasr/tutorials/lifted.htm. However, this is a relatively recent line of research and unlikely to be easily applicable in practice.

Another, even more recent line of probabilistic relational models is probabilistic programming (there is an ongoing DARPA program on the subject right now). You might want to check out the languages Church, BLOG (Bayesian Logic), and Figaro, but again, these are recent research topics and not yet easy to use.
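
To give a flavor of the probabilistic-programming style, here is a toy version of the original puzzle as a rejection sampler in plain Python (not Church, BLOG, or Figaro). Two added modeling assumptions: Charlie's statement is conditioned on at least one pair of cheese-liking friends existing, and he refers to a qualifying pair uniformly at random.

import random

likes_p = {"Jim": 0.8, "Joe": 0.5, "Sam": 0.2}
friends_p = {frozenset(["Jim", "Sam"]): 0.9,
             frozenset(["Jim", "Joe"]): 0.5,
             frozenset(["Joe", "Sam"]): 0.7}

def sample_referent(rng):
    # Sample a world: who likes cheese, who is friends with whom.
    likes = {p: rng.random() < q for p, q in likes_p.items()}
    friends = {pair: rng.random() < q for pair, q in friends_p.items()}
    # Condition on the observation: some cheese-liking friend pair exists.
    options = [pair for pair, are in friends.items()
               if are and all(likes[p] for p in pair)]
    return rng.choice(options) if options else None   # None = reject

rng = random.Random(0)
counts, n = {}, 0
while n < 100_000:
    pair = sample_referent(rng)
    if pair is None:
        continue                                      # rejected world
    counts[pair] = counts.get(pair, 0) + 1
    n += 1

for pair, c in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(sorted(pair), round(c / n, 3))
# ['Jim', 'Joe'] comes out most probable under these assumptions.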