So I have some code using Java 8 streams, and it works. It does exactly what I need it to do, and it's legible (a rarity for functional programming). Towards the end of a subroutine, the code runs over a List of a custom pair type:
// All names Hungarian-Notation-ized for SO reading
class AFooAndABarWalkIntoABar
{
    public int foo_int;
    public BarClass bar_object;
    ....
}
List<AFooAndABarWalkIntoABar> results = ....;
The data here must be passed into other parts of the program as arrays, so they get copied out:
// extract either a foo or a bar from each "foo-and-bar" (fab)
int[] foo_array = results.stream()
    .mapToInt (fab -> fab.foo_int)
    .toArray();

BarClass[] bar_array = results.stream()
    .map (fab -> fab.bar_object)
    .toArray(BarClass[]::new);
And done. Now each array can go do its thing.
Except... looping over the List twice bothers me in my soul. And if we ever need to track more information, we're likely going to add a third field, and then have to make a third pass to turn the 3-tuple into three arrays, etc. So I'm fooling around with trying to do it in a single pass.
Allocating the data structures is trivial, but maintaining an index for use by the Consumer seems hideous:
int[] foo_array = new int[results.size()];
BarClass[] bar_array = new BarClass[results.size()];

// the trick is providing a stateful iterator across the array:
// - can't just use 'int', it's not effectively final
// - an actual 'final int' would be hilariously wrong
// - "all problems can be solved with a level of indirection"
class Indirection { int iterating = 0; }
final Indirection sigh = new Indirection();
// equivalent possibility is
//     final int[] disgusting = new int[]{ 0 };
// and then access disgusting[0] inside the lambda
// wash your hands after typing that code

results.stream().forEach (fab -> {
    foo_array[sigh.iterating] = fab.foo_int;
    bar_array[sigh.iterating] = fab.bar_object;
    sigh.iterating++;
});
This produces arrays identical to the existing solution using multiple stream loops. And it does so in about half the time, go figure. But the iterator-indirection tricks seem so unspeakably ugly, and of course they preclude any possibility of populating the arrays in parallel.
Using a pair of ArrayList instances, created with appropriate capacity, would let the Consumer code simply call add for each instance, and no external iterator needed. But ArrayList's toArray(T[]) has to perform a copy of the storage array again, and in the int case there's boxing/unboxing on top of that.
(edit: The answers to the "possible duplicate" question all talk about only maintaining the indices in a stream, and using direct array indexing to get to the actual data during filter/map calls, along with a note that it doesn't really work if the data isn't accessible by direct index. While this question has a List and is "directly indexable" only from a viewpoint of "well, List#get exists, technically". If the results collection above is a LinkedList, for example, then calling an O(n) get N times with nonconsecutive index would be... bad.)
Are there other, better, possibilities that I'm missing? I thought a custom Collector might do it, but I can't figure out how to maintain the state there either and never even got as far as scratch code.
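(edit 2: for the curious, here is roughly what a sequential-only custom Collector could look like, using java.util.stream.Collector.of. This is a hedged sketch only; the FooBarArrays holder is invented for illustration, and the combiner simply rejects parallel use:)

// Hypothetical container pairing the two output arrays with a fill index.
class FooBarArrays {
    final int[] foos;
    final BarClass[] bars;
    int i = 0;
    FooBarArrays(int n) { foos = new int[n]; bars = new BarClass[n]; }
    void add(AFooAndABarWalkIntoABar fab) {
        foos[i] = fab.foo_int;
        bars[i] = fab.bar_object;
        i++;
    }
}

FooBarArrays filled = results.stream().collect(Collector.of(
    () -> new FooBarArrays(results.size()),   // supplier: pre-sized arrays
    FooBarArrays::add,                        // accumulator: one pass, no boxing
    (a, b) -> { throw new UnsupportedOperationException("sequential only"); }));
int[] foo_array = filled.foos;
BarClass[] bar_array = filled.bars;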
As the size of the stream is known, there is no reason to reinvent the wheel. The simplest solution is usually the best one. The second approach you have shown is nearly there - just use an AtomicInteger as the array index and you will achieve your goal: a single pass over the data, and possible parallel stream execution (thanks to AtomicInteger).
So:
AtomicInteger index = new AtomicInteger();
results.parallelStream().forEach (fab -> {
    int idx = index.getAndIncrement();
    foo_array[idx] = fab.foo_int;
    bar_array[idx] = fab.bar_object;
});
Thread-safe for parallel execution, and one iteration over the whole collection.
If your prerequisites are that both, iterating the list and accessing the list via an index, are expensive operations, there is no chance of getting a benefit from the parallel Stream processing. You can try to go with this answer, if you don’t need the result values in the original list order.
Otherwise, you can’t benefit from the parallel Stream processing as it requires the source to be able to efficiently split its contents into two halves, which implies either, random access or fast iteration. If the source has no customized spliterator, the default implementation will try to enable parallel processing via buffering elements into an array, which already implies iterating before the parallel processing even starts and having additional array storage costs where your sole operation is an array storage operation anyway.
When you accept that there is no benefit from parallel processing, you can stay with your sequential solution, but solve the ugliness of the counter by moving it into the Consumer. Since lambda expressions don't allow holding mutable state, you can turn to the good old anonymous inner class:
int[] foo_array = new int[results.size()];
BarClass[] bar_array = new BarClass[results.size()];
results.forEach(new Consumer<AFooAndABarWalkIntoABar>() {
    int index = 0;
    public void accept(AFooAndABarWalkIntoABar t) {
        foo_array[index] = t.foo_int;
        bar_array[index] = t.bar_object;
        index++;
    }
});
Of course, there’s also the often-overlooked alternative of the good old for-loop:
int[] foo_array = new int[results.size()];
BarClass[] bar_array = new BarClass[results.size()];
int index = 0;
for(AFooAndABarWalkIntoABar t: results) {
    foo_array[index] = t.foo_int;
    bar_array[index] = t.bar_object;
    index++;
}
I wouldn’t be surprised, if this beats all other alternatives performance-wise for your scenario…
A way to reuse an index in a stream is to wrap your lambda in an IntStream that is in charge of incrementing the index:
IntStream.range(0, results.size()).forEach(i -> {
    foo_array[i] = results.get(i).foo_int;
    bar_array[i] = results.get(i).bar_object;
});
With regards to Antoniossss's answer, using an IntStream seems like a slightly preferable option to using AtomicInteger:
It also works with parallel();
Two fewer local variables;
Leaves the Stream API in charge of parallel processing;
Two fewer lines of code.
EDIT: as Mikhail Prokhorov pointed out, calling the get method twice on implementations such as LinkedList will be slower than other solutions, given the O(n) complexity of their implementations of get. This can be fixed with:
AFooAndABarWalkIntoABar temp = results.get(i);
foo_array[i] = temp.foo_int;
bar_array[i] = temp.bar_object;
Java 12 adds a teeing collector, which provides a way to do this in one pass. Here is some example code using the Apache Commons Pair class.
import org.apache.commons.lang3.tuple.Pair;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;
class Scratch {
    public static void main(String[] args) {
        final Stream<Pair<String, String>> pairs = Stream.of(
                Pair.of("foo1", "bar1"),
                Pair.of("foo2", "bar2"),
                Pair.of("foo3", "bar3")
        );

        final Pair<List<String>, List<String>> zipped = pairs
                .collect(Collectors.teeing(
                        Collectors.mapping(Pair::getLeft, Collectors.toList()),
                        Collectors.mapping(Pair::getRight, Collectors.toList()),
                        (lefts, rights) -> Pair.of(lefts, rights)
                ));

        // Then get the arrays out
        String[] lefts = zipped.getLeft().toArray(String[]::new);
        String[] rights = zipped.getRight().toArray(String[]::new);

        System.out.println(Arrays.toString(lefts));
        System.out.println(Arrays.toString(rights));
    }
}
The output will be
[foo1, foo2, foo3]
[bar1, bar2, bar3]
It does not require the stream size to be known ahead of time.
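Applied to the question's original types, a teeing version might look like the following sketch; note that the int branch boxes each value to Integer before unboxing it back into the int[]:

// Sketch only: collects both lists in one pass, then converts to arrays.
Map.Entry<int[], BarClass[]> arrays = results.stream().collect(Collectors.teeing(
        Collectors.mapping(fab -> fab.foo_int, Collectors.toList()),
        Collectors.mapping(fab -> fab.bar_object, Collectors.toList()),
        (foos, bars) -> Map.entry(
                foos.stream().mapToInt(Integer::intValue).toArray(),
                bars.toArray(new BarClass[0]))));
int[] foo_array = arrays.getKey();
BarClass[] bar_array = arrays.getValue();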
Related
This is a bit of a fundamental question with an example project, as I'm still learning the best practices of Java 8 features.
Say I have an Order object that contains a List of OrderDetail. At the same time, OrderDetail contains a source and a destiny, along with quantity and product. OrderDetail also has an order (fk).
For this example, I will be moving Products from source to destiny, both of which are ProductWarehouse objects with an availableStock property, which will be affected by the result of the Order.
Now, I need to update the availableStock for all the sources and destinies. The stock should increase for destinies and decrease for sources by the quantity of the OrderDetail. Since destiny and source are of the same type, I can update them at the same time. The problem is that, of course, they are different properties.
My first thought was as follows:
// Map keeps record of the sum of products moved from the source. Key is the id of the Product
HashMap<Integer, Double> quantities;
ArrayList<OrderDetail> orderDetails;

/**
 * This could've been a single stream, but I divided it in two for readability here.
 * If not divided, the forEach() method from ArrayList<> might have been enough, though.
 */
public void updateStock() {
    List<ProductWarehouse> updated = new ArrayList<>();
    orderDetails.stream()
        .map(OrderDetail::getSource)
        .forEach(src -> {
            src.setAvailableStock(src.getAvailableStock() - quantities.get(src.getProductId()));
            updated.add(src);
        });
    orderDetails.stream()
        .map(OrderDetail::getDestiny)
        .forEach(dst -> {
            dst.setAvailableStock(dst.getAvailableStock() + quantities.get(dst.getProductId()));
            updated.add(dst);
        });
    productWarehouseRepository.save(updated);
}
While this works, there is a problem: this is like the example in the Javadoc about "unnecessary side effects". It is not strictly the same, but given how similar they are, it makes me think I'm taking avoidable risks.
Another option I thought of is using peek(), which has its own implications, since that method was intended mainly as a debugging tool, according to the Javadoc.
HashMap<Integer, Double> quantities;
ArrayList<OrderDetail> orderDetails;

/**
 * Either make it less readable and do one save, or do one save for source and one for destiny.
 * I could also keep the updated list from before and addAll() the result of each .collect()
 */
public void updateStock() {
    productWarehouseRepository.save(
        orderDetails.stream()
            .map(/*Same as above*/)
            .peek(src -> {/*Same as above*/})
            .collect(toList()) //static import
    );
}
I've also read in a few blog posts that peek should be avoided. This, again, because the doc says what its usage should be (mostly).
Hence the question: if I want to modify the properties of a Collection using the Stream API, what is the best way to do so, following best practices? For an example like this, is it better to use Iterable.forEach() or a simple enhanced for-loop? I honestly don't see where the side effect might arise with operations like these using Stream, but that's probably due to a lack of experience and understanding of the API.
The code in this question is just for example purposes, but if it's easier to understand with models or more info, I can add it.
You want to perform two different operations.
Update each ProductWarehouse object
Collect all ProductWarehouse objects into a list, for the save
Don’t mix the two operations.
public void updateStock() {
    orderDetails.forEach(o -> {
        ProductWarehouse src = o.getSource();
        src.setAvailableStock(src.getAvailableStock() - quantities.get(src.getProductId()));
        ProductWarehouse dst = o.getDestiny();
        dst.setAvailableStock(dst.getAvailableStock() + quantities.get(dst.getProductId()));
    });
    List<ProductWarehouse> updated = orderDetails.stream()
        .flatMap(o -> Stream.of(o.getSource(), o.getDestiny()))
        .collect(Collectors.toList());
    productWarehouseRepository.save(updated);
}
There is no point in trying to perform everything in a single stream operation at all costs. In the rare case where iterating the source twice is expensive or not guaranteed to produce the same elements, you can iterate over the result list, e.g.
public void updateStock() {
    List<ProductWarehouse> updated = orderDetails.stream()
        .flatMap(o -> Stream.of(o.getSource(), o.getDestiny()))
        .collect(Collectors.toList());
    for(int ix = 0; ix < updated.size(); ) {
        ProductWarehouse src = updated.get(ix++);
        src.setAvailableStock(src.getAvailableStock() - quantities.get(src.getProductId()));
        ProductWarehouse dst = updated.get(ix++);
        dst.setAvailableStock(dst.getAvailableStock() + quantities.get(dst.getProductId()));
    }
    productWarehouseRepository.save(updated);
}
This second iteration is definitely cheap.
In both cases, forEach is a terminal action. If you want to avoid side effects in it, call your setter method within a map instead, then collect that to a list, which you add to the outer list:
updated.addAll(orderDetails.stream().map(...).collect(Collectors.toList()));
Related solution - How to add elements of a Java8 stream into an existing List
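A filled-in sketch of that pipeline, using the question's types (sequential; the mutation happens inside the map step):

// Sketch: mutate inside map, collect the results, then add to the outer list.
updated.addAll(orderDetails.stream()
        .map(OrderDetail::getSource)
        .map(src -> {
            src.setAvailableStock(src.getAvailableStock() - quantities.get(src.getProductId()));
            return src;
        })
        .collect(Collectors.toList()));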
I have a List defined as follows:
List<Integer> list1 = new ArrayList<>();
list1.add(1);
list1.add(2);
How can I increment each element of the List by one (i.e. end up with a List [2,3]) using Java 8's Stream API without creating a new List?
When you create a Stream from the List, you are not allowed to modify the source List from the Stream as specified in the “Non-interference” section of the package documentation. Not obeying this constraint can result in a ConcurrentModificationException or, even worse, a corrupted data structure without getting an exception.
The only solution to directly manipulate the list using a Java Stream, is to create a Stream not iterating over the list itself, i.e. a stream iterating over the indices like
IntStream.range(0, list1.size()).forEach(ix -> list1.set(ix, list1.get(ix)+1));
like in Eran’s answer
But it’s not necessary to use a Stream here. The goal can be achieved as simple as
list1.replaceAll(i -> i + 1);
This is a new List method introduced in Java 8, also allowing to smoothly use a lambda expression. Besides that, there are also the probably well-known Iterable.forEach, the nice Collection.removeIf, and the in-place List.sort method, to name other new Collection operations not involving the Stream API. Also, the Map interface got several new methods worth knowing.
See also “New and Enhanced APIs That Take Advantage of Lambda Expressions and Streams in Java SE 8” from the official documentation.
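Quick illustrations of those non-Stream additions (a sketch; the sample data is made up):

List<String> names = new ArrayList<>(Arrays.asList("ann", "bob", "carl"));
names.replaceAll(String::toUpperCase);          // in-place transform (List.replaceAll)
names.removeIf(n -> n.length() < 4);            // conditional removal (Collection.removeIf)
names.sort(Comparator.naturalOrder());          // in-place sort (List.sort)
names.forEach(System.out::println);             // Iterable.forEach

Map<String, Integer> counts = new HashMap<>();
counts.merge("x", 1, Integer::sum);             // one of the new Map default methods
counts.computeIfAbsent("y", k -> 0);            // another one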
Holger's answer is just about perfect. However, if you're concerned with integer overflow, then you can use another utility method that was released in Java 8: Math#incrementExact. This will throw an ArithmeticException if the result overflows an int. A method reference can be used for this as well, as seen below:
list1.replaceAll(Math::incrementExact);
You can iterate over the indices via an IntStream combined with forEach:
IntStream.range(0,list1.size()).forEach(i->list1.set(i,list1.get(i)+1));
However, this is not much different than a normal for loop, and probably less readable.
Reassign the result to list1:
list1 = list1.stream().map(i -> i+1).collect(Collectors.toList());
public static Function<Map<String, LinkedList<Long>>, Map<String, LinkedList<Long>>> applyDiscount =
        objectOfMap -> {
            objectOfMap.values().forEach(listOfLong ->
                IntStream.range(0, listOfLong.size()).forEach(index -> {
                    double discounted = listOfLong.get(index) - (10.0 / 100 * listOfLong.get(index));
                    listOfLong.set(index, (long) discounted);
                }));
            return objectOfMap;
        };
How do you create a stream of Boolean.FALSE of, say, length 100?
What I've struggled with is:
Originally I intended to create an array of Boolean.FALSE, but new Boolean[100] returns an array of nulls. So I reasonably considered using the Stream API as a convenient Iterable and Iterable-manipulation tool;
There is no no-args Boolean constructor, hence I thought I couldn't
use Stream.generate(), since it accepts a Supplier<T>.
What I found is Stream.iterate(Boolean.FALSE, bool -> Boolean.FALSE).limit(100); gives what I want, but it doesn't seem to be a particularly elegant solution, IMHO.
One more option I found is IntStream.range(0, 100).mapToObj(idx -> Boolean.FALSE);, which seems even stranger to me.
Although these options don't violate the pipeline conception of the Stream API, are there any more concise ways to create a stream of Boolean.FALSE?
Even though Boolean has no no-arg constructor, you can still use Stream.generate using a lambda:
Stream.generate(() -> Boolean.FALSE).limit(100)
This also has the advantage (compared to using a constructor) that those will be the same Boolean instances, and not 100 different but equal ones.
You can use Collections's static <T> List<T> nCopies(int n, T o):
Collections.nCopies (100, Boolean.FALSE).stream()...
Note that the List returned by nCopies is tiny (it contains a single reference to the data object), so it doesn't require more storage than the Stream.generate().limit() solution, regardless of the requested size.
Of course, you could create the stream directly:
Stream.Builder<Boolean> builder = Stream.builder();
for( int i = 0; i < 100; i++ )
    builder.add( false );
Stream<Boolean> stream = builder.build();
When would you use collect() vs reduce()? Does anyone have good, concrete examples of when it's definitely better to go one way or the other?
Javadoc mentions that collect() is a mutable reduction.
Given that it's a mutable reduction, I assume it requires synchronization (internally) which, in turn, can be detrimental to performance. Presumably reduce() is more readily parallelizable at the cost of having to create a new data structure for return after every step in the reduce.
The above statements are guesswork however and I'd love an expert to chime in here.
reduce is a "fold" operation, it applies a binary operator to each element in the stream where the first argument to the operator is the return value of the previous application and the second argument is the current stream element.
collect is an aggregation operation where a "collection" is created and each element is "added" to that collection. Collections in different parts of the stream are then added together.
The document you linked gives the reason for having two different approaches:
If we wanted to take a stream of strings and concatenate them into a single long string, we could achieve this with ordinary reduction:

String concatenated = strings.reduce("", String::concat)

We would get the desired result, and it would even work in parallel. However, we might not be happy about the performance! Such an implementation would do a great deal of string copying, and the run time would be O(n^2) in the number of characters. A more performant approach would be to accumulate the results into a StringBuilder, which is a mutable container for accumulating strings. We can use the same technique to parallelize mutable reduction as we do with ordinary reduction.
So the point is that the parallelisation is the same in both cases but in the reduce case we apply the function to the stream elements themselves. In the collect case we apply the function to a mutable container.
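For instance, the StringBuilder approach the quote alludes to can be written with the three-argument collect; a sketch, assuming strings is the same Stream<String> as in the quote:

String concatenated = strings.collect(
        StringBuilder::new,      // supplier: a fresh mutable container
        StringBuilder::append,   // accumulator: add one element to a container
        StringBuilder::append)   // combiner: merge two partial containers
        .toString();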
The reason is simply that:
collect() can only work with mutable result objects.
reduce() is designed to work with immutable result objects.
"reduce() with immutable" example
public class Employee {
    private Integer salary;
    public Employee(String aSalary){
        this.salary = new Integer(aSalary);
    }
    public Integer getSalary(){
        return this.salary;
    }
}
@Test
public void testReduceWithImmutable(){
    List<Employee> list = new LinkedList<>();
    list.add(new Employee("1"));
    list.add(new Employee("2"));
    list.add(new Employee("3"));

    Integer sum = list
        .stream()
        .map(Employee::getSalary)
        .reduce(0, (Integer a, Integer b) -> Integer.sum(a, b));

    assertEquals(Integer.valueOf(6), sum);
}
"collect() with mutable" example
E.g. if you would like to manually calculate a sum using collect(), it cannot work with BigDecimal, but only with, for example, MutableInt from org.apache.commons.lang.mutable. See:
public class Employee {
    private MutableInt salary;
    public Employee(String aSalary){
        this.salary = new MutableInt(aSalary);
    }
    public MutableInt getSalary(){
        return this.salary;
    }
}
@Test
public void testCollectWithMutable(){
    List<Employee> list = new LinkedList<>();
    list.add(new Employee("1"));
    list.add(new Employee("2"));

    MutableInt sum = list.stream().collect(
        MutableInt::new,
        (MutableInt container, Employee employee) ->
            container.add(employee.getSalary().intValue()),
        MutableInt::add);

    assertEquals(new MutableInt(3), sum);
}
This works because the accumulator, container.add(employee.getSalary().intValue()), is not supposed to return a new object with the result, but to change the state of the mutable container of type MutableInt.
If you would like to use BigDecimal instead for the container, you could not use the collect() method, as container.add(employee.getSalary()) would not change the container, because BigDecimal is immutable.
(Apart from this, BigDecimal::new would not work, as BigDecimal has no no-args constructor.)
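Since BigDecimal is immutable, a plain reduce is the natural fit for it instead; a sketch, assuming a hypothetical variant of Employee whose getSalary() returns BigDecimal:

BigDecimal sum = list.stream()
        .map(Employee::getSalary)                  // assumed to return BigDecimal here
        .reduce(BigDecimal.ZERO, BigDecimal::add); // immutable reduction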
The normal reduction is meant to combine two immutable values such as int, double, etc. and produce a new one; it’s an immutable reduction. In contrast, the collect method is designed to mutate a container to accumulate the result it’s supposed to produce.
To illustrate the problem, let's suppose you want to achieve Collectors.toList() using a simple reduction like
List<Integer> numbers = stream.reduce(
        new ArrayList<Integer>(),
        (List<Integer> l, Integer e) -> {
            l.add(e);
            return l;
        },
        (List<Integer> l1, List<Integer> l2) -> {
            l1.addAll(l2);
            return l1;
        });
This is the equivalent of Collectors.toList(). However, in this case you mutate the List<Integer>. As we know, ArrayList is not thread-safe, nor is it safe to add/remove values from it while iterating, so you will get a concurrent exception or an ArrayIndexOutOfBoundsException (especially when run in parallel) when you update the list or when the combiner tries to merge the lists, because you are mutating the list by accumulating (adding) the integers to it. If you want to make this thread-safe, you would need to pass a new list each time, which would impair performance.
In contrast, Collectors.toList() works in a similar fashion. However, it guarantees thread safety when you accumulate the values into the list. From the documentation for the collect method:

Performs a mutable reduction operation on the elements of this stream using a Collector. If the stream is parallel, and the Collector is concurrent, and either the stream is unordered or the collector is unordered, then a concurrent reduction will be performed. When executed in parallel, multiple intermediate results may be instantiated, populated, and merged so as to maintain isolation of mutable data structures. Therefore, even when executed in parallel with non-thread-safe data structures (such as ArrayList), no additional synchronization is needed for a parallel reduction.
So to answer your question:
When would you use collect() vs reduce()?
If you have immutable values such as ints, doubles, or Strings, then normal reduction works just fine. However, if you have to reduce your values into, say, a List (a mutable data structure), then you need to use mutable reduction with the collect method.
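A compact side-by-side sketch of the two cases:

// Immutable values: plain reduction is fine.
int sum = IntStream.rangeClosed(1, 4).reduce(0, Integer::sum);            // 10

// Mutable result container: use collect, which stays safe even in parallel.
List<Integer> collected = IntStream.rangeClosed(1, 4).parallel().boxed()
        .collect(Collectors.toList());                                    // [1, 2, 3, 4]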
Let the stream be a <- b <- c <- d
In reduction,
you will have ((a # b) # c) # d
where # is that interesting operation that you would like to do.
In collection,
your collector will have some kind of collecting structure K.
K consumes a.
K then consumes b.
K then consumes c.
K then consumes d.
At the end, you ask K what the final result is.
K then gives it to you.
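In code, the two shapes might look like this sketch:

// Reduction: ((a # b) # c) # d, with # being addition here.
int folded = Stream.of(1, 2, 3, 4).reduce(0, (acc, x) -> acc + x);   // (((0+1)+2)+3)+4

// Collection: the container K consumes each element, then yields the result.
List<Integer> k = Stream.of(1, 2, 3, 4)
        .collect(ArrayList::new, List::add, List::addAll);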
They are very different in the potential memory footprint during the runtime. While collect() collects and puts all data into the collection, reduce() explicitly asks you to specify how to reduce the data that made it through the stream.
For example, if you want to read some data from a file, process it, and put it into some database, you might end up with java stream code similar to this:
streamDataFromFile(file)
    .map(data -> processData(data))
    .map(result -> database.save(result))
    .collect(Collectors.toList());
In this case, we use collect() to force Java to stream the data through and make it save the result into the database. Without collect(), the data is never read and never stored.
This code happily generates a java.lang.OutOfMemoryError: Java heap space runtime error, if the file size is large enough or the heap size is low enough. The obvious reason is that it tries to stack all the data that made it through the stream (and, in fact, has already been stored in the database) into the resulting collection and this blows up the heap.
However, if you replace collect() with reduce(), it won't be a problem anymore, as the latter will reduce and discard the data that made it through.
In the presented example, just replace collect() with a reduce such as:
.reduce(0L, (aLong, result) -> aLong, (aLong1, aLong2) -> aLong1);
You do not even need to make the calculation depend on the result, as Java is not a pure FP (functional programming) language and cannot optimize away data that is not used at the bottom of the stream, because of possible side effects.
Here is a code example:
List<Integer> list = Arrays.asList(1,2,3,4,5,6,7);
int sum = list.stream().reduce((x,y) -> {
    System.out.println(String.format("x=%d,y=%d",x,y));
    return (x + y);
}).get();
System.out.println(sum);
Here is the execution result:
x=1,y=2
x=3,y=3
x=6,y=4
x=10,y=5
x=15,y=6
x=21,y=7
28
The reduce function handles two parameters: the first parameter is the previous return value in the stream, and the second parameter is the current value in the stream. It sums the first value and the current value, and that sum becomes the first value in the next calculation.
According to the docs
The reducing() collectors are most useful when used in a multi-level reduction, downstream of groupingBy or partitioningBy. To perform a simple reduction on a stream, use Stream.reduce(BinaryOperator) instead.
So basically you'd use reducing() only when forced within a collect.
Here's another example:
For example, given a stream of Person, to calculate the longest last name
of residents in each city:
Comparator<String> byLength = Comparator.comparing(String::length);
Map<String, String> longestLastNameByCity
    = personList.stream().collect(groupingBy(Person::getCity,
        reducing("", Person::getLastName, BinaryOperator.maxBy(byLength))));
According to this tutorial, reduce is sometimes less efficient:
The reduce operation always returns a new value. However, the accumulator function also returns a new value every time it processes an element of a stream. Suppose that you want to reduce the elements of a stream to a more complex object, such as a collection. This might hinder the performance of your application. If your reduce operation involves adding elements to a collection, then every time your accumulator function processes an element, it creates a new collection that includes the element, which is inefficient. It would be more efficient for you to update an existing collection instead. You can do this with the Stream.collect method, which the next section describes...
So the identity is "re-used" in a reduce scenario, making it slightly more efficient to go with .reduce if possible.
There is a very good reason to prefer collect() over the reduce() method: using collect() is much more performant, as explained here:
Java 8 tutorial
A mutable reduction operation (such as Stream.collect()) collects the stream elements in a mutable result container (collection) as it processes them.
Mutable reduction operations provide much improved performance when compared to an immutable reduction operation (such as Stream.reduce()).
This is due to the fact that the collection holding the result at each step of reduction is mutable for a Collector and can be used again in the next step.
The Stream.reduce() operation, on the other hand, uses immutable result containers, and as a result needs to instantiate a new instance of the container at every intermediate step of reduction, which degrades performance.
I have a list of Strings and I want to perform the same operation on all of the Strings in the list.
Is it possible without performing a loop?
Well something's got to loop, somewhere - if you want to abstract that into your own method, you could do so, but I don't believe there's anything built into the framework.
Guava has various methods in Iterables to perform projections etc, but if you want to modify the list on each step, I'm not sure there's any support for that. Again, you could write your own method (extremely simply) should you wish to.
When Java eventually gets closures, this sort of thing will become a lot more reasonable - at the moment, specifying the "something" operation is often more effort than it's worth compared with hard-coding the loop, unfortunately.
You could do it recursively, but I don't see why you'd want to. You may be able to find something similar to Python's map function (which, behind the scenes, would either be a loop or a recursive method)
Also note that strings are immutable - so you'll have to create 'copies' anyway.
No. You must loop through the list.
for (String s : yourlist) {
    dooperation(s);
}
Why do you not want to perform a loop?
If it's computational complexity, then no, it's unavoidable. All methods will essentially boil down to iterating over every item in the list.
If it's because you want something cleaner, then the answer depends on what you think is cleaner. There are various libraries that add some form of functional map, which would end up with something like:
map(list, new Mapper<String, String>() {
    public String map(String input) {
        return doSomethingToString(input);
    }
});
This is obviously more long-winded and complex than a simple loop
for (int i = 0; i < list.size(); i += 1) {
    list.set(i, doSomethingToString(list.get(i)));
}
But it does offer reusability.
map(list, new DoSomethingToStringMapper());
map(otherlist, new DoSomethingToStringMapper());
But probably you don't need this. A simple loop would be the way to go.
You could use Apache Commons utilities.
Sorry, you have to iterate through the list somehow, and the best way is in a loop.
Depending on what you mean by no loop, this may interest you:
a map function for java.
http://www.gubatron.com/blog/2010/08/31/map-function-in-java/
...there's still a loop down inside of it.
In Java you'll need to iterate over the elements in the Collection and apply the method. I know Groovy offers the * syntax to do this. You could create an interface for your functions e.g. with an apply method and write a method which takes your Collection and the interface containing the function to apply if you want to add some general API for doing this. But you'll need the iteration somewhere!
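A sketch of the helper this answer describes, in pre-Java-8 style (all names invented for illustration):

// A tiny "function object" interface plus an applier method.
interface StringOperation {
    String apply(String input);
}

static void applyToAll(List<String> list, StringOperation op) {
    // Strings are immutable, so each element is replaced with the result.
    for (int i = 0; i < list.size(); i++) {
        list.set(i, op.apply(list.get(i)));
    }
}

// Usage:
applyToAll(myList, new StringOperation() {
    public String apply(String input) {
        return input.trim().toLowerCase();
    }
});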
Use divide and conquer with multithreaded traversal. Make sure you return new/immutable transformed collection objects (if you want to avoid concurrency issues), and then you can finally merge (maybe using another thread, which wakes up after all the worker threads have finished their transformer tasks on the divided lists?).
If you lack the memory to create these intermediate collections, then synchronize on your source collection. That's the best you can do.
No you have to use a loop for that.
You have to perform the operation on each reference variable to the Strings in the List, so a loop is required.
If it's at the List level, obviously there are some bulk operations (removeAll, etc.).
The Java API provides special classes to store and manipulate groups of objects. One such class is ArrayList.
Note that the ArrayList class is java.util.ArrayList.
Create an ArrayList as you would any object:
import java.util.ArrayList;
//..
ArrayList ajay = new ArrayList();
Here:
ArrayList -> class
ajay -> object
You can optionally specify a capacity and the type of objects the ArrayList will hold:
ArrayList<String> ajay = new ArrayList<String>(10);
The ArrayList class provides a number of useful methods for manipulating objects.
The add() method adds new objects to the ArrayList, and the remove() method removes objects from the list.
Sample code:
import java.util.ArrayList;

public class MyClass {
    public static void main(String[ ] args) {
        ArrayList<String> ajay = new ArrayList<String>();

        ajay.add("Red");
        ajay.add("Blue");
        ajay.add("Green");
        ajay.add("Orange");
        ajay.remove("Green");

        System.out.println(ajay);
    }
}
Output for this code:
[Red, Blue, Orange]
The accepted answer's link is broken, and the solution it offered is deprecated:
CollectionUtils::forAllDo
@Deprecated
public static <T,C extends Closure<? super T>> C forAllDo(Iterable<T> collection, C closure)
Deprecated. since 4.1, use IterableUtils.forEach(Iterable, Closure) instead
Executes the given closure on each element in the collection.
If the input collection or closure is null, there is no change made.
You can use IterableUtils::forEach(Closure c)
Applies the closure to each element of the provided iterable.
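A minimal usage sketch with Commons Collections 4 (Closure is a single-method interface, so a lambda fits; the list here is made up):

import org.apache.commons.collections4.IterableUtils;

List<String> words = Arrays.asList("a", "b", "c");
IterableUtils.forEach(words, word -> System.out.println(word));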