Specifically, anything more complex than a simple "tuple" (RDF or OWL, etc.) needs to be matched against a graph or partial graph.
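To make that distinction concrete, here is a minimal sketch (not tied to any particular triple store; the triples and predicate names are made up) showing that a single RDF-style tuple can be matched by direct lookup, while anything spanning several triples has to be matched as a partial graph pattern, i.e., a join across triples:

```python
# Hypothetical toy triple store: a set of (subject, predicate, object) tuples.
triples = {
    ("Alice", "knows", "Bob"),
    ("Bob", "worksAt", "Acme"),
    ("Alice", "worksAt", "Acme"),
}

# Tuple-level match: a single (s, p, o) lookup.
print(("Alice", "knows", "Bob") in triples)  # True

# Graph-level match: find every x who knows someone at the same employer.
# This requires joining across triples, i.e., matching a partial graph.
def knows_coworker(triples):
    matches = []
    for (s1, p1, o1) in triples:
        if p1 != "knows":
            continue
        for (s2, p2, o2) in triples:
            if p2 == "worksAt" and s2 == o1:        # where the acquaintance works
                if (s1, "worksAt", o2) in triples:  # same employer?
                    matches.append((s1, o1, o2))
    return matches

print(knows_coworker(triples))  # [('Alice', 'Bob', 'Acme')]
```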
One good "integrative" paper is Understanding Belief Propagation and its Generalizations by J.S. Yedidia, W.T. Freeman, and Y. Weiss (2002). One of their points [made right after Eqn. 18, w/r/t Belief Propagation and Graph Theory] is:
"In a practical computation .. one starts with the nodes at the edge of aaph ...There is more -- connecting with Ising, Bethe, and Kikuchi free energy models.
in general, it is easy to see that each message need only be computed once for
singly connected graphs [my note: not all will be singly-connected]. That means
that the whole computation takes a time proportional to the number of links in
thegraph, which is dramatically less than the exponentially large time that
would be required to compute marginal probabilities naively."
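The quoted point is easy to see in code. Below is a minimal sketch (mine, not the paper's) of sum-product belief propagation on a tiny singly-connected pairwise model, a three-node chain with made-up unary and pairwise potentials. Every directed message is computed exactly once, so the total work is proportional to twice the number of links:

```python
import numpy as np

# Unary potentials phi[i] and pairwise potentials psi[(i, j)] per undirected
# link, for the hypothetical binary chain 0 - 1 - 2.
phi = {0: np.array([0.7, 0.3]),
       1: np.array([0.5, 0.5]),
       2: np.array([0.2, 0.8])}
psi = {(0, 1): np.array([[1.0, 0.5],
                         [0.5, 1.0]]),
       (1, 2): np.array([[1.0, 0.2],
                         [0.2, 1.0]])}

neighbors = {i: [] for i in phi}
for a, b in psi:
    neighbors[a].append(b)
    neighbors[b].append(a)

def pairwise(i, j):
    """psi as a matrix indexed [x_i, x_j], regardless of storage order."""
    return psi[(i, j)] if (i, j) in psi else psi[(j, i)].T

messages = {}
def send(i, j):
    """Message m_{i->j}(x_j) = sum_{x_i} phi_i prod_{k!=j} m_{k->i} psi_{ij}."""
    incoming = np.ones_like(phi[i])
    for k in neighbors[i]:
        if k != j:
            incoming = incoming * messages[(k, i)]
    m = pairwise(i, j).T @ (phi[i] * incoming)   # sum over x_i
    messages[(i, j)] = m / m.sum()               # normalize for stability

# Two-pass schedule (leaves inward, then back out): each of the 2 * |links|
# directed messages is sent exactly once.
for i, j in [(0, 1), (2, 1), (1, 0), (1, 2)]:
    send(i, j)

# Beliefs = approximate marginals (exact here, because the graph is a tree).
for i in phi:
    b = phi[i].copy()
    for k in neighbors[i]:
        b = b * messages[(k, i)]
    print(i, b / b.sum())
```

On a graph with loops the same update rule still runs, but messages must be iterated rather than sent once, and the resulting beliefs are only approximate, which is where the free-energy view comes in.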
There is more -- connecting with the Ising, Bethe, and Kikuchi free energy models. I particularly enjoy the connection with the Kikuchi Cluster Variation Method (CVM), which began in the 1950s as a relatively obscure approach to more accurately approximating the free energy.