Dataflow streaming job not scaling past 1 worker

Published 2019-07-22 15:08

Question:

My streaming Dataflow job (2017-09-08_03_55_43-9675407418829265662), using the Apache Beam SDK for Java 2.1.0, will not scale past 1 worker even with a growing Pub/Sub queue (now 100k undelivered messages). Do you have any ideas why?

It's currently running with autoscalingAlgorithm=THROUGHPUT_BASED and maxNumWorkers=10.

Answer 1:

Dataflow engineer here. I looked up the job in our backend, and I can see that it is not scaling up because CPU utilization is low, meaning something else is limiting the pipeline's performance, such as external throttling. Scaling up rarely helps in these cases.

I also see that some bundles are taking hours to process. I recommend investigating your pipeline logic to see whether there are parts that can be optimized.



Answer 2:

This is what I ended up with:

import org.apache.beam.sdk.transforms.*;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;

import java.util.concurrent.ThreadLocalRandom;

/**
 * Breaks fusion and redistributes elements across workers by assigning
 * each element a random key in [0, size), reshuffling on that key, and
 * then dropping the keys again.
 */
public class ReshuffleWithRandomKey<T>
        extends PTransform<PCollection<T>, PCollection<T>> {

    // Number of distinct keys, which bounds the parallelism of the shuffle.
    private final int size;

    public ReshuffleWithRandomKey(int size) {
        this.size = size;
    }

    @Override
    public PCollection<T> expand(PCollection<T> input) {
        return input
                .apply("Random key", ParDo.of(new AssignRandomKeyFn<T>(size)))
                .apply("Reshuffle", Reshuffle.<Integer, T>of())
                .apply("Values", Values.<T>create());
    }

    // Pairs every element with a uniformly random key in [0, size).
    private static class AssignRandomKeyFn<T> extends DoFn<T, KV<Integer, T>> {

        private final int size;

        AssignRandomKeyFn(int size) {
            this.size = size;
        }

        @ProcessElement
        public void process(ProcessContext c) {
            c.output(KV.of(ThreadLocalRandom.current().nextInt(0, size), c.element()));
        }
    }
}
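For intuition, the key-assignment step just draws a uniform key in [0, size) for every element, so the reshuffle can spread work across at most size groups. A minimal plain-Java sketch of that bucketing logic (outside of Beam; `RandomKeyDemo` and `bucketCounts` are hypothetical names for illustration):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ThreadLocalRandom;

public class RandomKeyDemo {

    // Mimics AssignRandomKeyFn: draw a uniform key in [0, size) for each of
    // n elements and count how many land in each bucket.
    static Map<Integer, Integer> bucketCounts(int n, int size) {
        Map<Integer, Integer> counts = new HashMap<>();
        for (int i = 0; i < n; i++) {
            int key = ThreadLocalRandom.current().nextInt(0, size);
            counts.merge(key, 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        Map<Integer, Integer> counts = bucketCounts(1000, 4);
        // At most 4 distinct keys, so at most 4-way parallelism downstream.
        System.out.println("distinct keys: " + counts.size());
        System.out.println("total elements: "
                + counts.values().stream().mapToInt(Integer::intValue).sum());
    }
}
```

In the pipeline itself the transform would be applied like any other composite, e.g. `input.apply(new ReshuffleWithRandomKey<>(10))`, with size chosen to match the desired parallelism (for instance maxNumWorkers).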

What do you think, @raghu-angadi and @scott-wegner?