Type Parameters:
R - type of the stream

public class LocalCacheStream<R>
extends AbstractLocalCacheStream<R,Stream<R>>
implements CacheStream<R>
Nested classes/interfaces inherited: AbstractLocalCacheStream.StreamSupplier<R>, CacheStream.SegmentCompletionListener, Stream.Builder<T>

Fields inherited from class AbstractLocalCacheStream: intermediateOperations, keysToFilter, log, onCloseRunnables, parallel, registry, segmentsToFilter, streamSupplier
| Constructor and Description |
|---|
| LocalCacheStream(AbstractLocalCacheStream.StreamSupplier<R> streamSupplier, boolean parallel, ComponentRegistry registry) |
| LocalCacheStream(AbstractLocalCacheStream<?,?> other) |
| Modifier and Type | Method and Description |
|---|---|
| boolean | allMatch(Predicate<? super R> predicate) |
| boolean | anyMatch(Predicate<? super R> predicate) |
| <R1,A> R1 | collect(Collector<? super R,A,R1> collector) |
| <R1> R1 | collect(Supplier<R1> supplier, BiConsumer<R1,? super R> accumulator, BiConsumer<R1,R1> combiner) |
| long | count() |
| CacheStream<R> | disableRehashAware(): Disables tracking of rehash events that could occur to the underlying cache. |
| Stream<R> | distinct() |
| CacheStream<R> | distributedBatchSize(int batchSize): Controls how many keys are returned from a remote node when using a stream terminal operation with a distributed cache backing this stream. |
| Stream<R> | filter(Predicate<? super R> predicate) |
| CacheStream<R> | filterKeys(Set<?> keys): Filters which entries are returned by only returning ones that map to the given keys. |
| CacheStream<R> | filterKeySegments(Set<Integer> segments): Filters which entries are returned based on the segment they are present in. |
| Optional<R> | findAny() |
| Optional<R> | findFirst() |
| <R1> Stream<R1> | flatMap(Function<? super R,? extends Stream<? extends R1>> mapper) |
| DoubleStream | flatMapToDouble(Function<? super R,? extends DoubleStream> mapper) |
| IntStream | flatMapToInt(Function<? super R,? extends IntStream> mapper) |
| LongStream | flatMapToLong(Function<? super R,? extends LongStream> mapper) |
| void | forEach(Consumer<? super R> action) |
| void | forEachOrdered(Consumer<? super R> action) |
| CloseableIterator<R> | iterator() |
| Stream<R> | limit(long maxSize) |
| <R1> Stream<R1> | map(Function<? super R,? extends R1> mapper) |
| DoubleStream | mapToDouble(ToDoubleFunction<? super R> mapper) |
| IntStream | mapToInt(ToIntFunction<? super R> mapper) |
| LongStream | mapToLong(ToLongFunction<? super R> mapper) |
| Optional<R> | max(Comparator<? super R> comparator) |
| Optional<R> | min(Comparator<? super R> comparator) |
| boolean | noneMatch(Predicate<? super R> predicate) |
| CacheStream<R> | parallelDistribution(): Enables sending requests to all other remote nodes in parallel when a terminal operation is performed. |
| Stream<R> | peek(Consumer<? super R> action) |
| Optional<R> | reduce(BinaryOperator<R> accumulator) |
| R | reduce(R identity, BinaryOperator<R> accumulator) |
| <U> U | reduce(U identity, BiFunction<U,? super R,U> accumulator, BinaryOperator<U> combiner) |
| CacheStream<R> | segmentCompletionListener(CacheStream.SegmentCompletionListener listener): Allows registration of a segment completion listener that is notified when a segment has completed processing. |
| CacheStream<R> | sequentialDistribution(): Disables sending requests to all other remote nodes in parallel; nodes are instead contacted one at a time. |
| Stream<R> | skip(long n) |
| Stream<R> | sorted() |
| Stream<R> | sorted(Comparator<? super R> comparator) |
| Spliterator<R> | spliterator() |
| CacheStream<R> | timeout(long timeout, TimeUnit unit): Sets how long to wait for a remote operation to respond. |
| Object[] | toArray() |
| <A> A[] | toArray(IntFunction<A[]> generator) |
Methods inherited from class AbstractLocalCacheStream: close, createStream, isParallel, onClose, parallel, sequential, unordered

Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface java.util.stream.Stream: builder, concat, empty, generate, iterate, of, of

Methods inherited from interface java.util.stream.BaseStream: close, isParallel, onClose, parallel, sequential, unordered
public LocalCacheStream(AbstractLocalCacheStream.StreamSupplier<R> streamSupplier, boolean parallel, ComponentRegistry registry)
public LocalCacheStream(AbstractLocalCacheStream<?,?> other)
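These constructors are wired up by Infinispan's own stream machinery (note the internal StreamSupplier and ComponentRegistry arguments); application code normally obtains a cache-backed stream from the cache collections instead. A minimal sketch of that usage, assuming an embedded cache manager and a hypothetical cache named "example":

```java
import org.infinispan.Cache;
import org.infinispan.CacheStream;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;

public class LocalCacheStreamUsage {
   public static void main(String[] args) throws Exception {
      try (DefaultCacheManager manager = new DefaultCacheManager()) {
         manager.defineConfiguration("example", new ConfigurationBuilder().build());
         Cache<String, String> cache = manager.getCache("example");
         cache.put("k1", "v1");
         cache.put("k2", "v2");

         // values() returns a cache collection whose stream() yields a CacheStream;
         // for a non-clustered cache this is backed by a local implementation
         // such as LocalCacheStream.
         try (CacheStream<String> stream = cache.values().stream()) {
            System.out.println("values: " + stream.count());
         }
      }
   }
}
```

The later sketches on this page reuse the cache variable from this example.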
public CacheStream<R> sequentialDistribution()

Description copied from interface: CacheStream
Parallel distribution is enabled by default except for CacheStream.iterator() and CacheStream.spliterator().

Specified by:
sequentialDistribution in interface CacheStream<R>
public CacheStream<R> parallelDistribution()

Description copied from interface: CacheStream
Parallel distribution is enabled by default except for CacheStream.iterator() and CacheStream.spliterator().

Specified by:
parallelDistribution in interface CacheStream<R>
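As a hedged illustration of the two distribution modes, assuming the hypothetical cache variable from the earlier sketch (for a purely local cache both calls are effectively no-ops):

```java
// Contact remote nodes one at a time; this lowers memory pressure on the
// originator at the cost of throughput.
long sequentialCount = cache.values().stream()
      .sequentialDistribution()
      .count();

// Contact all remote nodes in parallel; this is the default for most
// terminal operations other than iterator() and spliterator().
long parallelCount = cache.values().stream()
      .parallelDistribution()
      .count();
```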
public CacheStream<R> filterKeySegments(Set<Integer> segments)

Description copied from interface: CacheStream
Filters which entries are returned based on the segment they are present in. This can be more efficient than a plain Stream.filter(Predicate), as it controls which nodes are asked for data and which entries are read from the underlying CacheStore if one is present.

Specified by:
filterKeySegments in interface CacheStream<R>
Parameters:
segments - The segments to use for this stream operation. Any segments not in this set will be ignored.

public CacheStream<R> filterKeys(Set<?> keys)
Description copied from interface: CacheStream
Filters which entries are returned by only returning ones that map to the given keys. This can be faster than a plain Stream.filter(Predicate) if any keys must be retrieved remotely or if a cache store is in use.

Specified by:
filterKeys in interface CacheStream<R>
Parameters:
keys - The keys that this stream will only operate on.

public CacheStream<R> distributedBatchSize(int batchSize)
Description copied from interface: CacheStream
Controls how many keys are returned from a remote node when using a stream terminal operation with a distributed cache backing this stream. The value mainly affects key-tracking terminal operators such as CacheStream.iterator(), CacheStream.spliterator() and CacheStream.forEach(Consumer); please see those methods for additional information on how this value may affect them.
This value may also be used with a terminal operator that doesn't track keys if an intermediate operation is performed that requires bringing keys locally to do computations. Examples of such intermediate operations are CacheStream.sorted(), CacheStream.sorted(Comparator), CacheStream.distinct(), CacheStream.limit(long) and CacheStream.skip(long).
This value is always ignored when this stream is backed by a cache that is not distributed, as all values are already local.

Specified by:
distributedBatchSize in interface CacheStream<R>
Parameters:
batchSize - The size of each batch. This defaults to the state transfer chunk size.

public CacheStream<R> segmentCompletionListener(CacheStream.SegmentCompletionListener listener)
Description copied from interface: CacheStream
This method is designed solely for use with CacheStream.iterator(), allowing a user to track completion of segments as entries are returned from the iterator. Behavior with other methods is not specified. Please see CacheStream.iterator() for more information.
Multiple listeners may be registered upon multiple invocations of this method. The ordering of notified listeners is not specified.

Specified by:
segmentCompletionListener in interface CacheStream<R>
Parameters:
listener - The listener that will be called back as segments are completed.

public CacheStream<R> disableRehashAware()
Description copied from interface: CacheStream
Most terminal operations will run faster with rehash awareness disabled, even when no rehash occurs. However, if a rehash does occur while this is disabled, be prepared to possibly receive only a subset of values.

Specified by:
disableRehashAware in interface CacheStream<R>
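The configuration methods above (filterKeys, filterKeySegments, distributedBatchSize, disableRehashAware) return the stream itself, so they can be chained before a terminal operation. A hedged sketch, again using the hypothetical cache variable; the key set and batch size are made up for illustration:

```java
import java.util.Set;

Set<String> interestingKeys = Set.of("k1", "k2");

cache.entrySet().stream()
      .filterKeys(interestingKeys)   // only entries mapping to these keys
      .distributedBatchSize(128)     // keys fetched per remote batch for key-tracking operators
      .disableRehashAware()          // faster, but may miss entries if a rehash occurs
      .forEach(e -> System.out.println(e.getKey() + "=" + e.getValue()));
```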
public IntStream mapToInt(ToIntFunction<? super R> mapper)

public LongStream mapToLong(ToLongFunction<? super R> mapper)

public DoubleStream mapToDouble(ToDoubleFunction<? super R> mapper)
Specified by:
mapToDouble in interface Stream<R>

public IntStream flatMapToInt(Function<? super R,? extends IntStream> mapper)
Specified by:
flatMapToInt in interface Stream<R>

public LongStream flatMapToLong(Function<? super R,? extends LongStream> mapper)
Specified by:
flatMapToLong in interface Stream<R>

public DoubleStream flatMapToDouble(Function<? super R,? extends DoubleStream> mapper)
Specified by:
flatMapToDouble in interface Stream<R>
public Stream<R> distinct()

Description copied from interface: CacheStream
This operation will be invoked both remotely and locally when used with a distributed cache backing this stream. It acts as an intermediate iterator operation, requiring data to be brought locally for proper behavior; this is described in more detail in the CacheStream documentation.
This intermediate iterator operation is performed both locally and remotely, possibly requiring a subset of all elements to be held in memory.
Any subsequent intermediate operations and the terminal operation are then performed locally.
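For illustration, a hedged sketch of distinct() on a value stream with the hypothetical cache variable; the de-duplication runs remotely and then again on the aggregated results locally:

```java
long uniqueValues = cache.values().stream()
      .distinct()   // applied remotely and then once more on the local node
      .count();
```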
public Stream<R> sorted()

Description copied from interface: CacheStream
This operation is performed entirely on the local node irrespective of the backing cache. It acts as an intermediate iterator operation, requiring data to be brought locally for proper behavior. Beware that this means all entries of the cache must be held in memory at one time. This is described in more detail in the CacheStream documentation.
Any subsequent intermediate operations and the terminal operation are also performed locally.
public Stream<R> sorted(Comparator<? super R> comparator)

Description copied from interface: CacheStream
This operation is performed entirely on the local node irrespective of the backing cache. It acts as an intermediate iterator operation, requiring data to be brought locally for proper behavior. Beware that this means all entries of the cache must be held in memory at one time. This is described in more detail in the CacheStream documentation.
Any subsequent intermediate operations and the terminal operation are then performed locally.
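A hedged sketch of the comparator variant with the hypothetical cache variable; since sorting happens on the local node, all values are pulled into local memory first:

```java
import java.util.Comparator;

Object[] sortedDescending = cache.values().stream()
      .sorted(Comparator.reverseOrder())   // performed entirely on the local node
      .toArray();
```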
public Stream<R> limit(long maxSize)

Description copied from interface: CacheStream
This intermediate operation is performed both remotely and locally to reduce how many elements are sent back from each node. More specifically, the operation is applied remotely on each node to return at most maxSize elements, and the aggregated results are then limited once again on the local node.
This operation also acts as an intermediate iterator operation, requiring data to be brought locally for proper behavior; this is described in more detail in the CacheStream documentation.
Any subsequent intermediate operations and the terminal operation are then performed locally.
public Stream<R> skip(long n)

Description copied from interface: CacheStream
This operation is performed entirely on the local node irrespective of the backing cache. It acts as an intermediate iterator operation, requiring data to be brought locally for proper behavior; this is described in more detail in the CacheStream documentation.
Depending on the terminal operator, this may or may not require all entries, or the subset remaining after the skip is applied, to be in memory at once.
Any subsequent intermediate operations and the terminal operation are then performed locally.
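A hedged sketch combining limit and skip with the hypothetical cache variable; limit trims remotely and locally, while skip (and everything after it) runs only on the local node:

```java
long sampled = cache.values().stream()
      .limit(100)   // each node returns at most 100 values, then the aggregate is trimmed again
      .skip(10)     // applied on the local node after data has been pulled in
      .count();
```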
public void forEach(Consumer<? super R> action)

Description copied from interface: CacheStream
This operation is performed remotely on the node that is the primary owner of the key tied to each entry in this stream.
NOTE: while this method is rehash aware, it has the lowest consistency of all of the operators. The operation will be performed on every entry at least once in the cluster, as long as the originator doesn't go down while it is being performed. This is due to how the distributed action is performed. Essentially the CacheStream.distributedBatchSize(int) value controls how many elements are processed per node at a time when rehash is enabled. After those complete, the keys are sent to the originator to confirm that they were processed. If that node goes down during or before the response, those keys will be processed a second time.
It is possible to have the cache local to each node injected into this instance if the provided Consumer also implements the CacheAware interface. That injection method will be invoked before the consumer's accept() method is invoked.
This method runs distributed by default with a distributed backing cache. However, if you wish for this operation to run locally, you can use stream().iterator().forEachRemaining(action) for a single-threaded variant. If you wish to have a parallel variant, you can use StreamSupport.stream(Spliterator, boolean), passing in the spliterator from the stream. In either case, remember you must close the stream after you are done processing the iterator or spliterator.
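A hedged sketch of forEach with the hypothetical cache variable. With a purely local cache a plain lambda is fine; with a distributed cache the consumer has to be marshallable, and it may implement CacheAware to have the node-local cache injected before accept() is called:

```java
cache.entrySet().stream()
      .filter(e -> e.getValue() != null)   // intermediate operation runs where the data lives
      .forEach(e -> System.out.println(e.getKey() + " -> " + e.getValue()));
```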
public void forEachOrdered(Consumer<? super R> action)
Specified by:
forEachOrdered in interface Stream<R>
public <A> A[] toArray(IntFunction<A[]> generator)
public R reduce(R identity, BinaryOperator<R> accumulator)
public Optional<R> reduce(BinaryOperator<R> accumulator)
public <U> U reduce(U identity, BiFunction<U,? super R,U> accumulator, BinaryOperator<U> combiner)
public <R1> R1 collect(Supplier<R1> supplier, BiConsumer<R1,? super R> accumulator, BiConsumer<R1,R1> combiner)
public <R1,A> R1 collect(Collector<? super R,A,R1> collector)
Description copied from interface: CacheStream
Note that when using a distributed backing cache for this stream, the collector must be marshallable. This prevents direct usage of the Collectors class. However, you can use the CacheCollectors static factory methods to create a serializable wrapper, which then creates the actual collector lazily after being deserialized. This lets you use any method from the Collectors class as you normally would.

Specified by:
collect in interface Stream<R>
collect in interface CacheStream<R>
Type Parameters:
R1 - collected type
A - intermediate collected type if applicable
See Also:
CacheCollectors
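A hedged sketch of the CacheCollectors wrapper mentioned above, using the hypothetical cache variable; the supplier lambda is serialized instead of the collector and re-creates the collector lazily after deserialization:

```java
import java.util.List;
import java.util.stream.Collectors;
import org.infinispan.stream.CacheCollectors;

List<String> allValues = cache.values().stream()
      .collect(CacheCollectors.serializableCollector(() -> Collectors.toList()));
```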
public Optional<R> min(Comparator<? super R> comparator)
public Optional<R> max(Comparator<? super R> comparator)
public CloseableIterator<R> iterator()

Description copied from interface: CacheStream
Usage of this operator requires closing the stream after you are done with the iterator. The preferred usage is a try-with-resources block on the stream.
This method has special usage with CacheStream.SegmentCompletionListener in that segments are completed as entries are retrieved from the next method.
This method obeys CacheStream.distributedBatchSize(int). Note that when using methods such as Stream.flatMap(Function) you may have more than one element mapped to a given key, so this doesn't guarantee that exactly that many entries are returned per batch.
Note that the Iterator.remove() method is only supported if no intermediate operations have been applied to the stream and this is not a stream created from a Cache.values() collection.

Specified by:
iterator in interface BaseStream<R,Stream<R>>
iterator in interface CacheStream<R>
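A hedged sketch of iterating with the hypothetical cache variable; the stream owns the underlying resources, so it is closed with try-with-resources once the iterator has been consumed. A SegmentCompletionListener registered via segmentCompletionListener(...) would be notified as the iterator finishes each segment:

```java
import java.util.Iterator;
import org.infinispan.CacheStream;

try (CacheStream<String> stream = cache.values().stream().distributedBatchSize(64)) {
   Iterator<String> it = stream.iterator();
   while (it.hasNext()) {
      System.out.println(it.next());
   }
}
```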
public Spliterator<R> spliterator()

Description copied from interface: CacheStream
Usage of this operator requires closing the stream after you are done with the spliterator. The preferred usage is a try-with-resources block on the stream.

Specified by:
spliterator in interface BaseStream<R,Stream<R>>
spliterator in interface CacheStream<R>
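A hedged sketch of consuming the spliterator through StreamSupport (the parallel-friendly alternative mentioned in the forEach notes above), again with the hypothetical cache variable; the cache stream must still be closed afterwards:

```java
import java.util.Spliterator;
import java.util.stream.StreamSupport;
import org.infinispan.CacheStream;

try (CacheStream<String> stream = cache.values().stream()) {
   Spliterator<String> split = stream.spliterator();
   long nonEmpty = StreamSupport.stream(split, false)   // pass true for a parallel local stream
         .filter(v -> !v.isEmpty())
         .count();
   System.out.println(nonEmpty);
}
```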
public CacheStream<R> timeout(long timeout, TimeUnit unit)

Description copied from interface: CacheStream
If a timeout does occur, a TimeoutException is thrown from the thread invoking the terminal operation, or on the next call to the Iterator or Spliterator.
Note that if a rehash occurs and rehash awareness is enabled, this timeout value is reset for the subsequent retry.

Specified by:
timeout in interface CacheStream<R>
Parameters:
timeout - the maximum time to wait
unit - the time unit of the timeout argument
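A hedged sketch of setting a timeout with the hypothetical cache variable; the value only matters when remote nodes are involved, and the terminal operation surfaces a TimeoutException if a response does not arrive in time:

```java
import java.util.concurrent.TimeUnit;

long total = cache.values().stream()
      .timeout(30, TimeUnit.SECONDS)   // maximum wait for each remote response
      .count();
```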