Possible Duplicate:
[python]: path between two nodes
Can anyone point me to some resources on how to do this? I'm using networkx
as my python library.
Thanks!
This one actually works with networkx, and it's non-recursive, which may be nice for large graphs.
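The answer's code isn't reproduced above, so here is a minimal sketch of a non-recursive version, assuming only that indexing a networkx graph by a node (`graph[node]`) yields its neighbors; the function name and the small example graph are illustrative, not the original answer's code.

```python
import networkx as nx

def find_all_paths_iterative(graph, start, end):
    """Collect all simple paths from start to end using an explicit stack."""
    paths = []
    stack = [(start, [start])]          # (current node, path taken so far)
    while stack:
        node, path = stack.pop()
        if node == end:
            paths.append(path)
            continue
        for neighbor in graph[node]:    # graph[node] iterates over neighbors
            if neighbor not in path:    # keep paths simple (no revisits)
                stack.append((neighbor, path + [neighbor]))
    return paths

g = nx.Graph()
g.add_edges_from([(1, 2), (2, 3), (1, 3), (3, 4)])
print(find_all_paths_iterative(g, 1, 4))  # e.g. [[1, 3, 4], [1, 2, 3, 4]]
```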
I'm not sure if there are special optimizations available -- before looking for any of them, I'd do a simple recursive solution, something like the following (using, of networkx, only the feature that indexing a graph by a node gives an iterable yielding its neighbor nodes [a dict, in networkx's case, but I'm not making use of that in particular]):
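The original answer's code is not shown here, so what follows is a hedged reconstruction of that kind of simple recursive solution; the function name and signature are mine.

```python
def find_all_paths(graph, start, end, path=None):
    # Recursively collect every simple path from start to end.
    path = (path or []) + [start]
    if start == end:
        return [path]
    paths = []
    for node in graph[start]:     # graph[start] yields start's neighbors
        if node not in path:      # skip nodes already on the path (no cycles)
            paths.extend(find_all_paths(graph, node, end, path))
    return paths
```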
This should be provably correct (but I'm not going to do the proof because it's very late and I'm tired and fuzzy-headed;-) and usable to verify any further optimizations;-).
First optimization I'd try would be some kind of simple memoizing: if I've already computed the set of paths from some node N to any goal node (whatever the prefix leading to N was when I did that computation), I can stash that away in a dict under key N and avoid further recomputations if and when I get to N again by a different route;-).
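A hedged sketch of that memoization idea: caching by node alone is only safe when the cached results cannot clash with the current prefix, so the version below assumes a directed acyclic graph, where the "already on the path" check is unnecessary.

```python
def find_all_paths_memo(graph, start, end, memo=None):
    # Assumes a DAG, so paths cached under a node are valid for any prefix.
    if memo is None:
        memo = {}
    if start == end:
        return [[end]]
    if start in memo:
        return memo[start]        # reuse paths already computed from this node
    paths = []
    for node in graph[start]:
        for rest in find_all_paths_memo(graph, node, end, memo):
            paths.append([start] + rest)
    memo[start] = paths
    return paths
```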
This is based on Alex Martelli's answer, but it should work. It depends on the expression `source_node.children` yielding an iterable that will iterate over all the children of `source_node`. It also relies on there being a working way for the `==` operator to compare two nodes to see if they are the same; using `is` may be a better choice. Apparently, in the library you're using, the syntax for getting an iterable over all the children is `graph[source_node]`, so you will need to adjust the code accordingly.

My main concern is that because this does a depth-first search, it will waste effort when there are several paths from the source to a node that is a grandchild, great-grandchild, etc. of the source but not necessarily an ancestor of the sink. If it memoized the answer for a given source and sink node, it would be possible to avoid the extra effort.
Here is an example of how that would work:
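The example itself isn't reproduced above, so this is a hedged sketch of how that memoization could look, keyed on (source, sink) pairs; like the previous sketch it assumes an acyclic graph, and the names are illustrative.

```python
import networkx as nx

def find_all_paths(graph, source, sink, memo=None):
    # memo maps (source, sink) -> list of paths; pass the same dict back in
    # on later calls to reuse earlier work (acyclic graph assumed).
    if memo is None:
        memo = {}
    if (source, sink) in memo:
        return memo[(source, sink)]
    if source == sink:
        return [[sink]]
    paths = []
    for child in graph[source]:
        for rest in find_all_paths(graph, child, sink, memo):
            paths.append([source] + rest)
    memo[(source, sink)] = paths
    return paths

g = nx.DiGraph([(1, 2), (2, 3), (1, 3), (3, 4)])
shared_memo = {}
paths_1 = find_all_paths(g, 1, 4, shared_memo)  # e.g. [[1, 2, 3, 4], [1, 3, 4]]
paths_2 = find_all_paths(g, 2, 4, shared_memo)  # reuses cached sub-results
```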
This also allows you to save the memoization dictionary between invocations, so if you need to compute the answer for multiple source and sink nodes, you can avoid a lot of extra effort.