Is there an elegant way to process a stream in chunks?

Posted 2020-02-02 10:08

My exact scenario is inserting data to database in batches, so I want to accumulate DOM objects then every 1000, flush them.

I implemented it by putting code in the accumulator to detect fullness and then flush, but that seems wrong - the flush control should come from the caller.

I could convert the stream to a List then use subList in an iterative fashion, but that too seems clunky.

Is there a neat way to take action every n elements and then continue with the stream, while only processing the stream once?

7 Answers
不美不萌又怎样
#2 · 2020-02-02 10:52

Most of the answers above do not preserve the stream's benefits, such as low memory use. You can use an iterator to solve the problem:

// Lazily wraps the stream's iterator, buffering only one chunk at a time.
static <T> Stream<List<T>> chunk(Stream<T> stream, int size) {
  Iterator<T> iterator = stream.iterator();
  Iterator<List<T>> listIterator = new Iterator<>() {

    @Override
    public boolean hasNext() {
      return iterator.hasNext();
    }

    @Override
    public List<T> next() {
      List<T> result = new ArrayList<>(size);
      for (int i = 0; i < size && iterator.hasNext(); i++) {
        result.add(iterator.next());
      }
      return result;
    }
  };
  return StreamSupport.stream(
      ((Iterable<List<T>>) () -> listIterator).spliterator(), false);
}
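A quick usage sketch of the method above (the method is repeated inside a hypothetical `Chunks` helper class so the example is self-contained and runnable):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;
import java.util.stream.Stream;
import java.util.stream.StreamSupport;

class Chunks {
    // Same technique as above: a lazy iterator-backed chunking stream.
    static <T> Stream<List<T>> chunk(Stream<T> stream, int size) {
        Iterator<T> iterator = stream.iterator();
        Iterator<List<T>> listIterator = new Iterator<>() {
            public boolean hasNext() { return iterator.hasNext(); }
            public List<T> next() {
                List<T> result = new ArrayList<>(size);
                for (int i = 0; i < size && iterator.hasNext(); i++) {
                    result.add(iterator.next());
                }
                return result;
            }
        };
        return StreamSupport.stream(
                ((Iterable<List<T>>) () -> listIterator).spliterator(), false);
    }

    public static void main(String[] args) {
        // 0..9 in chunks of 4 -> [0,1,2,3], [4,5,6,7], [8,9]
        List<List<Integer>> chunks =
                chunk(IntStream.range(0, 10).boxed(), 4).collect(Collectors.toList());
        System.out.println(chunks);
    }
}
```

In the original scenario, the `forEach` over the resulting `Stream<List<T>>` would call the database flush instead of collecting.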
倾城 Initia
#3 · 2020-02-02 10:53

If you have a Guava dependency in your project, you could do this:

StreamSupport.stream(Iterables.partition(simpleList, 1000).spliterator(), false).forEach(...);

See https://google.github.io/guava/releases/23.0/api/docs/com/google/common/collect/Lists.html#partition-java.util.List-int- (the code above uses the equivalent Iterables.partition).
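Without Guava, what Lists.partition does can be sketched with subList views over the source list (the helper name `partition` here is hypothetical, not a library API):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

class PartitionSketch {
    // Returns consecutive subList views of the source; the last one may be shorter.
    static <T> List<List<T>> partition(List<T> list, int size) {
        return IntStream.range(0, (list.size() + size - 1) / size)
                .mapToObj(i -> list.subList(i * size, Math.min((i + 1) * size, list.size())))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(partition(List.of(1, 2, 3, 4, 5), 2));
        // prints [[1, 2], [3, 4], [5]]
    }
}
```

Like Guava's version, this creates views rather than copies, so it adds almost no memory overhead on top of the source list.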

我欲成王,谁敢阻挡
#4 · 2020-02-02 10:56

Looks like no, because creating chunks means reducing the stream, and reduce means termination. If you need to keep the stream's lazy nature and process chunks without collecting all the data first, here is my code (it does not work for parallel streams):

private static <T> BinaryOperator<List<T>> processChunks(Consumer<List<T>> consumer, int chunkSize) {
    return (data, element) -> {
        if (data.size() < chunkSize) {
            data.addAll(element);
            return data;
        } else {
            consumer.accept(data);
            return element; // in fact it's new data list
        }
    };
}

private static <T> Function<T, List<T>> createList(int chunkSize) {
    AtomicInteger limiter = new AtomicInteger(0);
    return element -> {
        int count = limiter.incrementAndGet();
        if (count == 1) {
            // first element of each chunk gets a mutable list that reduce will fill
            List<T> list = new ArrayList<>(chunkSize);
            list.add(element);
            return list;
        } else if (count == chunkSize) {
            limiter.set(0);
        }
        return Collections.singletonList(element);
    };
}

And here is how to use it:

Consumer<List<Integer>> chunkProcessor = list -> list.forEach(System.out::println);

int chunkSize = 3;

Stream.generate(StrTokenizer::getInt).limit(13)
        .map(createList(chunkSize))
        .reduce(processChunks(chunkProcessor, chunkSize))
        .ifPresent(chunkProcessor);

static Integer i = 0;

static Integer getInt()
{
    System.out.println("next");
    return i++;
}

It will print (each value on its own line):

next
next
next
next
0
1
2
next
next
next
3
4
5
next
next
next
6
7
8
next
next
next
9
10
11
12

The idea is for the map operation to create lists following the 'pattern'

[1,,],[2],[3],[4,,]...

and to merge (and process) them with reduce:

[1,2,3],[4,5,6],...

and don't forget to process the last 'trimmed' chunk with

.ifPresent(chunkProcessor);
看我几分像从前
#5 · 2020-02-02 10:59

Using the StreamEx library, the solution would look like:

Stream<Integer> stream = IntStream.iterate(0, i -> i + 1).boxed().limit(15);
AtomicInteger counter = new AtomicInteger(0);
int chunkSize = 4;

StreamEx.of(stream)
        .groupRuns((prev, next) -> counter.incrementAndGet() % chunkSize != 0)
        .forEach(chunk -> System.out.println(chunk));

Output:

[0, 1, 2, 3]
[4, 5, 6, 7]
[8, 9, 10, 11]
[12, 13, 14]

groupRuns accepts a predicate that decides whether two adjacent elements should be in the same group.

It produces a group as soon as it finds the first element that does not belong to it.

女痞
#6 · 2020-02-02 11:03

Elegance is in the eye of the beholder. If you don't mind using a stateful function in groupingBy, you can do this:

AtomicInteger counter = new AtomicInteger();

stream.collect(groupingBy(x->counter.getAndIncrement()/chunkSize))
    .values()
    .forEach(database::flushChunk);
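One caveat (an addition to the answer above): groupingBy makes no guarantee about the iteration order of the resulting map, so if chunks must be flushed in encounter order, a map supplier such as LinkedHashMap can be passed. A sketch, with the `chunks` helper name being illustrative:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

class OrderedChunks {
    static Map<Integer, List<Integer>> chunks(int n, int chunkSize) {
        AtomicInteger counter = new AtomicInteger();
        return IntStream.range(0, n).boxed()
                .collect(Collectors.groupingBy(
                        x -> counter.getAndIncrement() / chunkSize,
                        LinkedHashMap::new,   // preserves chunk (insertion) order
                        Collectors.toList()));
    }

    public static void main(String[] args) {
        // chunks iterate in order [0,1,2], [3,4,5], [6]
        chunks(7, 3).values().forEach(System.out::println);
    }
}
```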

This doesn't win any performance or memory usage points over your original solution because it will still materialize the entire stream before doing anything.

If you want to avoid materializing the list, the stream API will not help you. You will have to get the stream's iterator or spliterator and do something like this:

Spliterator<Integer> split = stream.spliterator();
int chunkSize = 1000;

while (true) {
    List<Integer> chunk = new ArrayList<>(chunkSize);
    for (int i = 0; i < chunkSize && split.tryAdvance(chunk::add); i++);
    if (chunk.isEmpty()) break;
    database.flushChunk(chunk);
}
神经病院院长
#7 · 2020-02-02 11:04

As Misha rightfully said, elegance is in the eye of the beholder. I personally think an elegant solution would be to let the class that inserts into the database do this task, similar to a BufferedWriter. This way it does not depend on your original data structure and can even be used with multiple streams, one after another.

I am not sure if this is exactly what you meant by having the code in the accumulator, which you thought was wrong. I don't think it is wrong, since existing classes like BufferedWriter work this way. You still have some flush control from the caller, because you can call flush() on the writer at any point.

Something like the following code.

class BufferedDatabaseWriter implements Flushable {
    private final List<DomObject> buffer = new LinkedList<>();
    public void write(DomObject o) {
        buffer.add(o);
        if (buffer.size() >= 1000)
            flush();
    }
    public void flush() {
        // write buffer to database and clear it
    }
}

Now your stream gets processed like this:

BufferedDatabaseWriter writer = new BufferedDatabaseWriter();
stream.forEach(o -> writer.write(o));
//if you have more streams stream2.forEach(o -> writer.write(o));
writer.flush();

If you want to work multithreaded, you could run the flush asynchronously. Consuming the stream can't go in parallel, but I don't think there is a way to count 1000 elements from a stream in parallel anyway.

You can also extend the writer to allow setting the buffer size in the constructor, or make it implement AutoCloseable so you can run it in a try-with-resources, and more - all the niceties you get from a BufferedWriter.
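A rough sketch of the asynchronous-flush variant mentioned above, using a single-threaded ExecutorService so the consuming thread keeps reading the stream while a previous batch is being written. Class and field names are illustrative, and the actual database insert is simulated by a counter:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.IntStream;

class AsyncBufferedWriter implements AutoCloseable {
    private final int batchSize;
    private List<Integer> buffer = new ArrayList<>();
    private final ExecutorService flusher = Executors.newSingleThreadExecutor();
    final AtomicInteger flushedBatches = new AtomicInteger();

    AsyncBufferedWriter(int batchSize) { this.batchSize = batchSize; }

    void write(Integer o) {
        buffer.add(o);
        if (buffer.size() >= batchSize) flush();
    }

    void flush() {
        if (buffer.isEmpty()) return;
        List<Integer> batch = buffer;   // hand the full batch to the flusher thread
        buffer = new ArrayList<>();
        flusher.submit(() -> {
            // stand-in for the real database insert of 'batch'
            if (!batch.isEmpty()) flushedBatches.incrementAndGet();
        });
    }

    @Override
    public void close() throws InterruptedException {
        flush();                                   // flush the trailing partial batch
        flusher.shutdown();
        flusher.awaitTermination(10, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        AsyncBufferedWriter w = new AsyncBufferedWriter(1000);
        IntStream.range(0, 2500).boxed().forEach(w::write);
        w.close();
        System.out.println(w.flushedBatches.get()); // 3 batches: 1000, 1000, 500
    }
}
```

close() both flushes the trailing partial batch and waits for pending writes, which is what makes the try-with-resources usage safe.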
