What are practical uses for STL's 'partial_sum'?

Posted 2019-03-18 01:55

What/where are the practical uses of the partial_sum algorithm in STL?

What are some other interesting/non-trivial examples or use-cases?

11 answers

姐就是有狂的资本
#2 · 2019-03-18 02:44

You can use it to generate a monotonically increasing sequence of numbers. For example, the following generates a vector containing the numbers 1 through 42:

std::vector<int> v(42, 1);
std::partial_sum(v.begin(), v.end(), v.begin());

Is this an everyday use case? Probably not, though I've found it useful on several occasions.

You can also use std::partial_sum to generate a list of factorials. (This is even less useful, though, since the number of factorials that can be represented by a typical integer data type is quite limited. It is fun, though :-D)

std::vector<int> v(10, 1);
std::partial_sum(v.begin(), v.end(), v.begin());
std::partial_sum(v.begin(), v.end(), v.begin(), std::multiplies<int>());
#3 · 2019-03-18 02:44

Personal Use Case: Roulette-Wheel-Selection

I'm using partial_sum in a roulette-wheel-selection algorithm. This algorithm randomly chooses elements from a container with a probability proportional to a value given beforehand.

Because the elements to choose from carry a not-necessarily-normalized value, I use the partial_sum algorithm to construct something like a "roulette wheel" by summing up all the elements. Then I draw a random variable in this range (the last partial sum is the total) and use std::lower_bound to search "the wheel" for where my random draw landed. The element returned by lower_bound is the chosen one.

Besides the advantage of clear and expressive code, using partial_sum also gained me some speed when experimenting with the GCC parallel mode, which provides parallelized versions of some algorithms, partial_sum among them.

Another use I know of: one of the most important algorithmic primitives in parallel processing (though maybe a little bit away from the STL)

If you're interested in heavily optimized algorithms that use partial_sum (in that context more often found under the synonyms "scan" or "prefix sum"), then look to the parallel-algorithms community. They need it all the time. You won't find a parallel sorting algorithm based on quicksort or mergesort that doesn't use it. This operation is one of the most important parallel primitives in use. I think it is most commonly used for calculating offsets in dynamic algorithms. Think of the partition step in quicksort, which is split and fed to parallel threads: you don't know the number of elements in each slot of the partition before calculating it, so you need offsets for all the threads for later access.

Maybe you will find more information in the now-hot topic of GPU processing. A short article on Nvidia's CUDA and the scan primitive, with a few application examples, can be found in Chapter 39, Parallel Prefix Sum (Scan) with CUDA.

ら.Afraid
#4 · 2019-03-18 02:44

Partial sums are often useful in parallel algorithms. Consider the code

for (int i = 0; i < N; ++i) {
  sum += x[i];
  do_something(sum);
}

If you want to parallelise this code, you need to know the partial sums. I am using GNU's parallel version of partial_sum for something very similar.

Explosion°爆炸
#5 · 2019-03-18 02:49

In nonparametric Bayesian methods there is a Metropolis-Hastings step (per observation) that decides whether to sample a new or an existing cluster. If an existing cluster is to be sampled, this needs to be done with different weights. These weighted likelihoods are simulated in the following example code.

#include <random>
#include <iostream>
#include <vector>
#include <numeric>    // std::partial_sum
#include <algorithm>  // std::lower_bound

int main() {                                                                                                            

    std::default_random_engine generator(std::random_device{}());                                                       
    std::uniform_real_distribution<double> distribution(0.0,1.0);                                                       

    int K = 8;                                                                                                          

    std::vector<double> weighted_likelihood(K);                                                                         
    for (int i = 0; i < K; ++i) {                                                                                       
        weighted_likelihood[i] = i*10;                                                                                  
    }                                                                                                                   
    std::cout << "Weighted likelihood: ";                                                                               
    for (auto i: weighted_likelihood) std::cout << i << ' ';                                                            
    std::cout << std::endl;                                                                                             

    std::vector<double> cumsum_likelihood(K);                                                                           
    std::partial_sum(weighted_likelihood.begin(), weighted_likelihood.end(), cumsum_likelihood.begin());                

    std::cout << "Cumulative sum of weighted likelihood: ";                                                             
    for (auto i: cumsum_likelihood) std::cout << i << ' ';                                                              
    std::cout << std::endl;                                                                                             

    std::vector<int> frequency(K);                                                                                      

    int N = 280000;                                                                                                     
    for (int i = 0; i < N; ++i) {                                                                                       
        double pick = distribution(generator) * cumsum_likelihood.back();                                               

        auto lower = std::lower_bound(cumsum_likelihood.begin(), cumsum_likelihood.end(), pick);                        
        int index = std::distance(cumsum_likelihood.begin(), lower);                                                    
        frequency[index]++;                                                                                             
    }                                                                                                                   

    std::cout << "Frequencies: ";                                                                                       
    for (auto i: frequency) std::cout << i << ' ';                                                                      
    std::cout << std::endl;                                                                                             

}

Note that this is not different from the answer by https://stackoverflow.com/users/13005/steve-jessop. It's added to give a bit more context about a particular situation (nonparametric Bayesian methods, e.g. Neal's algorithms using the Dirichlet process as prior) and actual code that uses partial_sum in combination with lower_bound.

何必那么认真
#6 · 2019-03-18 02:51

I often use partial_sum not to sum but to calculate the current value in a sequence from the previous one.

For example, if you integrate a function, each new value is built from the previous one: vt += dvdt, or vt = integrate_step(dvdt, t_prev, t_prev + dt);.
