Graph algorithms on the GPU

Posted 2019-01-30 04:23

Question:

Current GPU threads are limited in several ways (memory limits, restrictions on data structures, no recursion, ...).

Do you think it would be feasible to implement a graph theory problem on the GPU, for example vertex cover, dominating set, independent set, or max clique?

Is it also feasible to run branch-and-bound algorithms on GPUs? Recursive backtracking?

Answer 1:

You will be interested in:

  1. Exploring the Limits of GPUs With Parallel Graph Algorithms

  2. Accelerating large graph algorithms on the GPU using CUDA.



Answer 2:

This is tangentially related to your question, but I've implemented a "recursive" backtracking algorithm for enumerating self-avoiding walks on a lattice. (N.B.: the stack was simulated within the CUDA kernel, to avoid the overhead of creating local variables for a whole bunch of function calls.) It's possible to do this efficiently, so I'm sure it can be adapted to a graph-theoretic context. Here's a link to a seminar on the topic where I gave some general discussion of backtracking within the Single Instruction Multiple Data (SIMD) paradigm; it's a PDF of about 1 MB: http://bit.ly/9ForGS
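To make the explicit-stack idea concrete, here is a minimal sketch in the spirit of that answer. It is not the author's actual kernel: the kernel name `countWalks`, the walk length `MAXLEN`, and the strategy of splitting the search tree by assigning each of four threads a different first step are all illustrative assumptions. It counts self-avoiding walks of a fixed length on the square lattice using iterative backtracking, with the recursion replaced by flat per-depth arrays.

```cuda
// Sketch: explicit-stack backtracking inside a CUDA kernel (illustrative,
// not the answerer's code). Counts self-avoiding walks of length MAXLEN
// on the square lattice; each of 4 threads fixes a different first step.
#include <cstdio>
#include <cuda_runtime.h>

#define MAXLEN 12                   // walk length (arbitrary demo choice)
#define GRID (2 * MAXLEN + 1)       // board large enough for any such walk

__global__ void countWalks(unsigned long long *counts)
{
    const int dx[4] = { 1, -1, 0, 0 };
    const int dy[4] = { 0, 0, 1, -1 };
    int first = threadIdx.x;        // this thread's fixed first direction

    bool visited[GRID][GRID] = {};  // lattice sites used by the current walk
    int dirAt[MAXLEN];              // explicit stack: direction taken at each depth
    int next[MAXLEN];               // next direction to try at each depth
    int x = MAXLEN, y = MAXLEN;     // start at the centre of the board
    visited[x][y] = true;

    int depth = 0;
    next[0] = first;                // depth 0 tries only the assigned direction
    unsigned long long count = 0;

    for (;;) {
        bool advanced = false;
        if (depth < MAXLEN) {
            int limit = (depth == 0) ? first + 1 : 4;
            while (next[depth] < limit) {
                int d = next[depth]++;
                int nx = x + dx[d], ny = y + dy[d];
                if (!visited[nx][ny]) {          // extend the walk: "push"
                    dirAt[depth] = d;
                    visited[nx][ny] = true;
                    x = nx; y = ny;
                    if (++depth < MAXLEN) next[depth] = 0;
                    advanced = true;
                    break;
                }
            }
        } else {
            ++count;                // depth == MAXLEN: a complete walk
        }
        if (!advanced) {            // dead end or finished walk: "pop"
            if (depth == 0) break;  // this thread's subtree is exhausted
            int d = dirAt[--depth];
            visited[x][y] = false;  // unmark the head and step back
            x -= dx[d]; y -= dy[d];
        }
    }
    counts[first] = count;
}

int main()
{
    unsigned long long h[4], *d;
    cudaMalloc(&d, sizeof h);
    countWalks<<<1, 4>>>(d);        // one thread per first direction
    cudaMemcpy(h, d, sizeof h, cudaMemcpyDeviceToHost);
    printf("self-avoiding walks of length %d: %llu\n",
           MAXLEN, h[0] + h[1] + h[2] + h[3]);
    cudaFree(d);
    return 0;
}
```

The key design choice is that the per-depth `next` array records where to resume the loop over directions after a pop, so all search state lives in fixed-size local arrays rather than a call stack, which is exactly what sidesteps the "no recursion" constraint raised in the question.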

I don't claim to know about the wider literature on graph theoretical algorithms on GPUs, but hope the above helps a little.

(@TheMachineCharmer, thanks for the links.)