In Java (with or without external libraries) I need to take a list of approximately 500,000 values and find the 1,000 most frequently occurring (the mode), keeping the complexity as low as possible.
What I've tried so far: a hash map. But it would have to be inverted (key = count, value = string), because otherwise extracting the top 1,000 has terrible complexity; and the inverted form doesn't work well either, because insertion becomes terrible: I'd have to find where my string currently sits, remove it, and reinsert it with a count one higher...
I've tried a binary search tree, but it has the same problem of what to sort on, the count or the string. Keyed on the string, getting the top 1,000 by count is bad; keyed on the count, insertion is bad.
I could sort the list first (by string) and then iterate over it, keeping a running count until the string changes. But what data structure should I use to keep track of the top 1,000?
Thanks
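One structure that fits the "keep track of the top 1,000" step is a bounded min-heap: a PriorityQueue ordered by count, from which the smallest entry is evicted whenever the size exceeds 1,000. A rough sketch of that idea (not from the original post), assuming the counts have already been tallied into a Map<String, Integer> named counts:
// Sketch: select the top 1000 entries from a completed count map using a
// size-bounded min-heap; O(U log 1000) for U unique strings, instead of
// sorting all U entries.
PriorityQueue<Map.Entry<String, Integer>> heap =
        new PriorityQueue<>(Map.Entry.comparingByValue());
for (Map.Entry<String, Integer> e : counts.entrySet()) {
    heap.offer(e);
    if (heap.size() > 1000) {
        heap.poll(); // evict the least frequent entry seen so far
    }
}
// heap now holds the 1,000 most frequent strings (in heap order)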
I would first create a Map<String, Long> to store the frequency of each word. Then I'd sort the map's entries by value in descending order and finally keep the first 1000.
In code:
List<String> top1000Words = listOfWords.stream()
.collect(Collectors.groupingBy(Function.identity(), Collectors.counting()))
.entrySet().stream()
.sorted(Map.Entry.<String, Long>comparingByValue().reversed()) // type witness needed: inference fails on the chained reversed()
.limit(1000)
.map(Map.Entry::getKey)
.collect(Collectors.toList());
You might find it cleaner to separate the above into two steps, as sketched below: first collect the map of frequencies, then sort its entries by value and keep the first 1000.
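The two-step version might look like this (a sketch with the same semantics, assuming the same listOfWords):
// Step 1: word -> number of occurrences
Map<String, Long> frequencies = listOfWords.stream()
        .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
// Step 2: sort the entries by count, descending, and keep the first 1000 words
List<String> top1000Words = frequencies.entrySet().stream()
        .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
        .limit(1000)
        .map(Map.Entry::getKey)
        .collect(Collectors.toList());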
I'd separate this into three phases:
Count word occurrences (e.g. by using a HashMap<String, Integer>)
Sort the results (e.g. by converting the map into a list of entries and ordering by value descending)
Output the top 1000 entries of the sorted results
The sorting will be slow if the counts are small (e.g. if you've actually got 500,000 separate words) but if you're expecting lots of duplicate words, it should be fine.
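A minimal sketch of those three phases (variable names invented for illustration):
// Phase 1: count occurrences
Map<String, Integer> counts = new HashMap<>();
for (String word : words) {
    counts.merge(word, 1, Integer::sum);
}
// Phase 2: sort the entries by count, descending
List<Map.Entry<String, Integer>> sorted = new ArrayList<>(counts.entrySet());
sorted.sort(Map.Entry.<String, Integer>comparingByValue().reversed());
// Phase 3: output the top 1000
for (Map.Entry<String, Integer> e : sorted.subList(0, Math.min(1000, sorted.size()))) {
    System.out.println(e.getKey() + "\t" + e.getValue());
}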
I have had this question open for a few days now and have decided to rebel against Federico's elegant Java 8 answer and submit the least Java 8 answer possible.
The following code makes use of a helper class that associates a tally with a string.
public class TopOccurringValues {
static HashMap<String, StringCount> stringCounts = new HashMap<>();
// set low for demo. Change to 1000 (or whatever)
static final int TOP_NUMBER_TO_COLLECT = 10;
public static void main(String[] args) {
// load your strings in here
List<String> strings = loadStrings();
// tally up string occurrences
for (String string: strings) {
StringCount stringCount = stringCounts.get(string);
if (stringCount == null) {
stringCount = new StringCount(string);
}
stringCount.increment();
stringCounts.put(string, stringCount);
}
// sort which have most
ArrayList<StringCount> sortedCounts = new ArrayList<>(stringCounts.values());
Collections.sort(sortedCounts);
// collect the top occurring strings
ArrayList<String> topCollection = new ArrayList<>();
int upperBound = Math.min(TOP_NUMBER_TO_COLLECT, sortedCounts.size());
System.out.println("string\tcount");
for (int i = 0; i < upperBound; i++) {
StringCount stringCount = sortedCounts.get(i);
topCollection.add(stringCount.string);
System.out.println(stringCount.string + "\t" + stringCount.count);
}
}
// in this demo, strings are randomly generated numbers.
private static List<String> loadStrings() {
Random random = new Random(1);
ArrayList<String> randomStrings = new ArrayList<>();
for (int i = 0; i < 5000000; i++) {
randomStrings.add(String.valueOf(Math.round(random.nextGaussian() * 1000)));
}
return randomStrings;
}
static class StringCount implements Comparable<StringCount> {
int count = 0;
String string;
StringCount(String string) {this.string = string;}
void increment() {count++;}
@Override
public int compareTo(StringCount o) {return Integer.compare(o.count, count);} // descending; avoids overflow of subtraction
}
}
55 lines of code! It's like reverse code golf. The String generator creates 5 million strings instead of 500,000 because: why not?
string count
-89 2108
70 2107
77 2085
-4 2077
36 2077
65 2072
-154 2067
-172 2064
194 2063
-143 2062
The generated strings are rounded Gaussian values with a standard deviation of 1,000, so values closer to 0 occur more often and therefore have the highest counts.
The solution I chose was to first build a hash map with key-value pairs of string to count. I got the counts by iterating over the list and inserting each key-value pair; before insertion I would check for existence and, if present, increase the count. That part was quite straightforward.
For the next part, sorting by value rather than key, I used Google's Guava library, which made it very easy via what it calls a multimap: in a sense it reverses the hash map and allows multiple values to be mapped to one key, so I can keep all of my top 1,000, as opposed to some of the solutions mentioned above, which allowed only one string per count.
The last step was to iterate over the multimap (backwards) to get the 1,000 most frequent occurrences.
Have a look at the code of the function if you're interested:
private static void findNMostFrequentOccurrences(List<String> profileNames, int n) {
    HashMap<String, Integer> hmap = new HashMap<>();
    // iterate through our data, tallying occurrences
    for (String currentId : profileNames) {
        Integer currentCount = hmap.get(currentId);
        hmap.put(currentId, currentCount == null ? 1 : currentCount + 1);
    }
    // tree keys keep the counts sorted ascending, so the largest count is last;
    // a plain ArrayListMultimap would leave the keys unordered and break getLast
    ListMultimap<Integer, String> multimap =
            MultimapBuilder.treeKeys().arrayListValues().build();
    hmap.entrySet().forEach(entry -> multimap.put(entry.getValue(), entry.getKey()));
    for (int i = 0; i < n; i++) {
        if (!multimap.isEmpty()) {
            int lastKey = Iterables.getLast(multimap.keys());
            String lastValue = Iterables.getLast(multimap.values());
            multimap.remove(lastKey, lastValue);
            System.out.println((i + 1) + ": " + lastValue + ", Occurrences: " + lastKey);
        }
    }
}
You can do that with the Java stream API:
List<String> input = Arrays.asList(new String[]{"aa", "bb", "cc", "bb", "bb", "aa"});
// First we compute a map of word -> occurrences
final Map<String, Long> collect = input.stream()
.collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
// Here we sort the map and collect the first 1000 entries
final List<Map.Entry<String, Long>> entries = new ArrayList<>(collect.entrySet());
final List<Map.Entry<String, Long>> result = entries.stream()
.sorted(Comparator.comparing(Map.Entry::getValue, Comparator.reverseOrder()))
.limit(1000)
.collect(Collectors.toList());
result.forEach(System.out::println);
Scenario: I have an object with two getters:
Integer getA(); Integer getB();
I have a list of objects, say List<MyObject> myObject.
Objective: iterate over the list and get the sum of all the A's and the sum of all the B's.
My Solution
int sumA = myObject.stream().collect(Collectors.summingInt(MyObject::getA));
int sumB = myObject.stream().collect(Collectors.summingInt(MyObject::getB));
The Question: How can I do both of these at the same time? This way I will not have to iterate twice over the original list.
Edit: I was doing this because some of the filters I used were O(n^3), which is really bad to run twice.
Benchmark: it really matters whether it's T or 2T when the program runs for half an hour on an i5, and that was on much smaller data; on a cluster the data would be larger still.
So it does matter whether these can be done in one pass!
You need to write another class to store the total values like this:
public class Total {
private int totalA = 0;
private int totalB = 0;
public void addA(int a) {
totalA += a;
}
public void addB(int b) {
totalB += b;
}
public int getTotalA() {
return totalA;
}
public int getTotalB() {
return totalB;
}
}
And then collect the values using
Total total = objects.stream()
    .collect(Total::new,
             (t, o) -> {
                 // o is an element exposing getA()/getB()
                 t.addA(o.getA());
                 t.addB(o.getB());
             },
             // the combiner merges partial totals from parallel execution;
             // leaving it empty would silently drop results in a parallel stream
             (t1, t2) -> {
                 t1.addA(t2.getTotalA());
                 t1.addB(t2.getTotalB());
             });
// check total.getTotalA() and total.getTotalB()
You can also use AbstractMap.SimpleEntry<Integer, Integer> to replace Total to avoid writing a new class, but it's still kind of weird because A/B are not in a key-value relationship.
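As a lighter-weight alternative (a sketch, not part of the original answer), a two-element array can accumulate both sums without writing any new class, at the cost of unnamed slots:
// Accumulate both sums into one int[2]: slot 0 holds the A total, slot 1 the B total.
int[] totals = objects.stream().collect(
        () -> new int[2],
        (t, o) -> { t[0] += o.getA(); t[1] += o.getB(); },
        (t1, t2) -> { t1[0] += t2[0]; t1[1] += t2[1]; });
int totalA = totals[0];
int totalB = totals[1];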
A plain for loop would probably still be the fastest, though the stream alternative offers parallelism as an option and is likely to become comparably efficient. Combining two loops into one is not necessarily faster.
One improvement would be to use int instead of Integer.
List<String> list = Arrays.asList("tom","terry","john","kevin","steve");
int n = list.stream().collect(Collectors.summingInt(String::length));
int h = list.stream().collect(Collectors.summingInt(String::hashCode));
I favour this solution.
If one would make one loop, there are two alternatives:
putting both ints in their own class. You might abuse java.awt.Point class with int x and int y.
putting both ints in a long. When no overflow, then one can even sum in the loop on the long.
The latter:
List<String> list = Arrays.asList("tom","terry","john","kevin","steve");
long nh = list.stream()
    .collect(Collectors.summingLong(s -> ((long) s.hashCode() << 32) | s.length())); // cast to long first: shifting an int by 32 is a no-op
int n = (int) nh;
int h = (int) (nh >> 32L);
I have the following code that I want to translate to Java 8 streams:
public ReleaseResult releaseResources() {
List<String> releasedNames = new ArrayList<>();
Stream<SomeResource> stream = this.someResources();
Iterator<SomeResource> it = stream.iterator();
while (it.hasNext() && releasedNames.size() < MAX_TO_RELEASE) {
SomeResource resource = it.next();
if (!resource.isTaken()) {
resource.release();
releasedNames.add(resource.getName());
}
}
return new ReleaseResult(releasedNames, it.hasNext(), MAX_TO_RELEASE);
}
Method someResources() returns a Stream<SomeResource>, and the ReleaseResult class is as follows:
public class ReleaseResult {
private int releasedCount;
private List<String> releasedNames;
private boolean hasMoreItems;
private int releaseLimit;
public ReleaseResult(List<String> releasedNames,
boolean hasMoreItems, int releaseLimit) {
this.releasedNames = releasedNames;
this.releasedCount = releasedNames.size();
this.hasMoreItems = hasMoreItems;
this.releaseLimit = releaseLimit;
}
// getters & setters
}
My attempt so far:
public ReleaseResult releaseResources() {
List<String> releasedNames = this.someResources()
.filter(resource -> !resource.isTaken())
.limit(MAX_TO_RELEASE)
.peek(SomeResource::release)
.map(SomeResource::getName)
.collect(Collectors.toList());
return new ReleaseResult(releasedNames, ???, MAX_TO_RELEASE);
}
The problem is that I can't find a way to know if there are pending resources to process. I've thought of using releasedNames.size() == MAX_TO_RELEASE, but this doesn't take into account the case where the stream of resources has exactly MAX_TO_RELEASE elements.
Is there a way to do the same with Java 8 streams?
Note: I'm not looking for answers like "you don't have to do everything with streams" or "using loops and iterators is fine". I'm OK if using an iterator and a loop is the only way or just the best way. It's just that I'd like to know if there's a non-murky way to do the same.
Since you don’t wanna hear that you don’t need streams for everything and loops and iterators are fine, let’s demonstrate it by showing a clean solution, not relying on peek:
public ReleaseResult releaseResources() {
return this.someResources()
.filter(resource -> !resource.isTaken())
.limit(MAX_TO_RELEASE+1)
.collect(
() -> new ReleaseResult(new ArrayList<>(), false, MAX_TO_RELEASE),
(result, resource) -> {
List<String> names = result.getReleasedNames();
if(names.size() == MAX_TO_RELEASE) result.setHasMoreItems(true);
else {
resource.release();
names.add(resource.getName());
}
},
(r1, r2) -> {
List<String> names = r1.getReleasedNames();
names.addAll(r2.getReleasedNames());
if(names.size() > MAX_TO_RELEASE) {
r1.setHasMoreItems(true);
names.remove(MAX_TO_RELEASE);
}
}
);
}
This assumes that // getters & setters includes getters and setters for all non-final fields of your ReleaseResult. And that getReleasedNames() returns the list by reference. Otherwise you would have to rewrite it to provide a specialized Collector having special non-public access to ReleaseResult (implementing another builder type or temporary storage would be an unnecessary complication, it looks like ReleaseResult is already designed exactly for that use case).
We could conclude that for any nontrivial loop code that doesn’t fit into the stream’s intrinsic operations, you can find a collector solution that basically does the same as the loop in its accumulator function, but suffers from the requirement of always having to provide a combiner function. Ok, in this case we can prepend a filter(…).limit(…) so it’s not that bad…
I just noticed, if you ever dare to use that with a parallel stream, you need a way to reverse the effect of releasing the last element in the combiner in case the combined size exceeds MAX_TO_RELEASE. Generally, limits and parallel processing never play well.
I don't think there's a nice way to do this. I've found a hack that does it lazily. What you can do is convert the Stream to an Iterator, convert the Iterator back to another Stream, do the Stream operations, then finally test the Iterator for a next element!
Iterator<SomeResource> it = this.someResource().iterator();
List<String> list = StreamSupport.stream(Spliterators.spliteratorUnknownSize(it, Spliterator.ORDERED), false)
.filter(resource -> !resource.isTaken())
.limit(MAX_TO_RELEASE)
.peek(SomeResource::release)
.map(SomeResource::getName)
.collect(Collectors.toList());
return new ReleaseResult(list, it.hasNext(), MAX_TO_RELEASE);
The only thing I can think of is
List<SomeResource> list = someResources(); // A List, rather than a Stream, is required
List<Integer> indices = IntStream.range(0, list.size())
    .filter(i -> !list.get(i).isTaken())
    .limit(MAX_TO_RELEASE)
    .boxed() // IntStream has no collect(Collector); box to Stream<Integer> first
    .collect(Collectors.toList());
List<String> names = indices.stream()
.map(list::get)
.peek(SomeResource::release)
.map(SomeResource::getName)
.collect(Collectors.toList());
Then (I think) there are unprocessed elements if
names.size() == MAX_TO_RELEASE
&& (indices.isEmpty() || indices.get(indices.size() - 1) < list.size() - 1)
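Wiring that check into the result construction would then look like this (a sketch under the same assumptions):
boolean hasMoreItems = names.size() == MAX_TO_RELEASE
        && (indices.isEmpty() || indices.get(indices.size() - 1) < list.size() - 1);
return new ReleaseResult(names, hasMoreItems, MAX_TO_RELEASE);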
I have a Record class:
public class Record implements Comparable<Record>
{
private String myCategory1;
private int myCategory2;
private String myCategory3;
private String myCategory4;
private int myValue1;
private double myValue2;
public Record(String category1, int category2, String category3, String category4,
int value1, double value2)
{
myCategory1 = category1;
myCategory2 = category2;
myCategory3 = category3;
myCategory4 = category4;
myValue1 = value1;
myValue2 = value2;
}
// Getters here
}
I create a big list of records. Only the second and fifth constructor arguments, i / 10000 and i, are used later, via the getters getCategory2() and getValue1() respectively.
List<Record> list = new ArrayList<>();
for (int i = 0; i < 115000; i++)
{
list.add(new Record("A", i / 10000, "B", "C", i, (double) i / 100 + 1));
}
Note that the first 10,000 records have a category2 of 0, the next 10,000 have 1, and so on, while the value1 values run from 0 to 114999 sequentially.
I create a Stream that is both parallel and sorted.
Stream<Record> stream = list.stream()
.parallel()
.sorted(
//(r1, r2) -> Integer.compare(r1.getCategory2(), r2.getCategory2())
)
//.parallel()
;
I have a ForkJoinPool that maintains 8 threads, which is the number of cores I have on my PC.
ForkJoinPool pool = new ForkJoinPool(8);
I use the trick described here to submit a stream processing task to my own ForkJoinPool instead of the common ForkJoinPool.
List<Record> output = pool.submit(() ->
stream.collect(Collectors.toList()
)).get();
I expected that the parallel sorted operation would respect the encounter order of the stream, and that it would be a stable sort, because the Spliterator returned by ArrayList is ORDERED.
However, simple code that prints out the elements of the resultant List output in order shows that it's not quite the case.
for (Record record : output)
{
System.out.println(record.getValue1());
}
Output, condensed:
0
1
2
3
...
69996
69997
69998
69999
71875 // discontinuity!
71876
71877
71878
...
79058
79059
79060
79061
70000 // discontinuity!
70001
70002
70003
...
71871
71872
71873
71874
79062 // discontinuity!
79063
79064
79065
79066
...
114996
114997
114998
114999
The size() of output is 115000, and all elements appear to be there, just in a slightly different order.
So I wrote some checking code to see if the sort was stable. If it's stable, then all of the value1 values should remain in order. This code verifies the order, printing any discrepancies.
int prev = -1;
boolean verified = true;
for (Record record : output)
{
int curr = record.getValue1();
if (prev != -1)
{
if (prev + 1 != curr)
{
System.out.println("Warning: " + prev + " followed by " + curr + "!");
verified = false;
}
}
prev = curr;
}
System.out.println("Verified: " + verified);
Output:
Warning: 69999 followed by 71875!
Warning: 79061 followed by 70000!
Warning: 71874 followed by 79062!
Warning: 99999 followed by 100625!
Warning: 107811 followed by 100000!
Warning: 100624 followed by 107812!
Verified: false
This condition persists if I do any of the following:
Replace the ForkJoinPool with a ThreadPoolExecutor.
ThreadPoolExecutor pool = new ThreadPoolExecutor(8, 8, 0, TimeUnit.SECONDS, new ArrayBlockingQueue<>(10));
Use the common ForkJoinPool by processing the Stream directly.
List<Record> output = stream.collect(Collectors.toList());
Call parallel() after I call sorted.
Stream<Record> stream = list.stream().sorted().parallel();
Call parallelStream() instead of stream().parallel().
Stream<Record> stream = list.parallelStream().sorted();
Sort using a Comparator. Note that this sort criterion is different from the "natural" order I defined for the Comparable interface, although since the input starts out already in order, the result should still be the same.
Stream<Record> stream = list.stream().parallel().sorted(
(r1, r2) -> Integer.compare(r1.getCategory2(), r2.getCategory2())
);
I can only get this to preserve the encounter order if I do one of the following to the Stream:
Don't call parallel().
Don't call any overload of sorted.
Interestingly, the parallel() without a sort preserved the order.
In both of the above cases, the output is:
Verified: true
My Java version is 1.8.0_05. This anomaly also occurs on Ideone, which appears to be running Java 8u25.
Update
I've upgraded my JDK to the latest version as of this writing, 1.8.0_45, and the problem is unchanged.
Question
Is the record order in the resultant List (output) out of order because the sort is somehow not stable, because the encounter order is not preserved, or some other reason?
How can I ensure that the encounter order is preserved when I create a parallel stream and sort it?
It looks like Arrays.parallelSort isn't stable in some circumstances. Well spotted. The stream parallel sort is implemented in terms of Arrays.parallelSort, so it affects streams as well. Here's a simplified example:
public class StableSortBug {
static final int SIZE = 50_000;
static class Record implements Comparable<Record> {
final int sortVal;
final int seqNum;
Record(int i1, int i2) { sortVal = i1; seqNum = i2; }
@Override
public int compareTo(Record other) {
return Integer.compare(this.sortVal, other.sortVal);
}
}
static Record[] genArray() {
Record[] array = new Record[SIZE];
Arrays.setAll(array, i -> new Record(i / 10_000, i));
return array;
}
static boolean verify(Record[] array) {
return IntStream.range(1, array.length)
.allMatch(i -> array[i-1].seqNum + 1 == array[i].seqNum);
}
public static void main(String[] args) {
Record[] array = genArray();
System.out.println(verify(array));
Arrays.sort(array);
System.out.println(verify(array));
Arrays.parallelSort(array);
System.out.println(verify(array));
}
}
On my machine (2 core x 2 threads) this prints the following:
true
true
false
Of course, it's supposed to print true three times. This is on the current JDK 9 dev builds. I wouldn't be surprised if it occurs in all the JDK 8 releases thus far, given what you've tried. Curiously, reducing the size or the divisor will change the behavior. A size of 20,000 and a divisor of 10,000 is stable, and a size of 50,000 and a divisor of 1,000 is also stable. It seems like the problem has to do with a sufficiently large run of values comparing equal versus the parallel split size.
The OpenJDK issue JDK-8076446 covers this bug.
I'm attempting to retrieve n unique random elements from a Collection for further processing, using the Streams API in Java 8, but without much luck.
More precisely I'd want something like this:
Set<Integer> subList = new HashSet<>();
List<Integer> collection = new ArrayList<>(Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9));
Random random = new Random();
int n = 4;
while (subList.size() < n) {
    subList.add(collection.get(random.nextInt(collection.size())));
}
subList.forEach(v -> doSomethingFancy(v)); // placeholder for the real processing
I want to do it as efficiently as possible.
Can this be done?
edit: My second attempt -- although not exactly what I was aiming for:
List<Integer> sublist = new ArrayList<>(collection);
Collections.shuffle(sublist);
sublist.stream().limit(n).forEach(v -> v.doSomethingFancy());
edit: Third attempt (inspired by Holger), which will remove a lot of the overhead of shuffle if coll.size() is huge and n is small:
int n = // unique element count
List<Integer> sublist = new ArrayList<>(collection);
Random r = new Random();
for(int i = 0; i < n; i++)
    Collections.swap(sublist, i, i + r.nextInt(sublist.size() - i)); // swap within sublist (the copy), not the original collection
sublist.stream().limit(n).forEach(v -> doSomethingFancy(v));
The shuffling approach works reasonably well, as suggested by fge in a comment and by ZouZou in another answer. Here's a generified version of the shuffling approach:
static <E> List<E> shuffleSelectN(Collection<? extends E> coll, int n) {
assert n <= coll.size();
List<E> list = new ArrayList<>(coll);
Collections.shuffle(list);
return list.subList(0, n);
}
I'll note that using subList is preferable to getting a stream and then calling limit(n), as shown in some other answers, because the resulting stream has a known size and can be split more efficiently.
The shuffling approach has a couple disadvantages. It needs to copy out all the elements, and then it needs to shuffle all the elements. This can be quite expensive if the total number of elements is large and the number of elements to be chosen is small.
An approach suggested by the OP and by a couple other answers is to choose elements at random, rejecting duplicates, until the desired number of unique elements has been chosen. This works well if the number of elements to choose is small relative to the total, but as the number to choose rises, this slows down quite a bit because the likelihood of choosing duplicates rises as well.
Wouldn't it be nice if there were a way to make a single pass over the space of input elements and choose exactly the number wanted, with the choices made uniformly at random? It turns out that there is, and as usual, the answer can be found in Knuth. See TAOCP Vol 2, sec 3.4.2, Random Sampling and Shuffling, Algorithm S.
Briefly, the algorithm is to visit each element and decide whether to choose it based on the number of elements visited and the number of elements chosen. In Knuth's notation, suppose you have N elements and you want to choose n of them at random. The next element should be chosen with probability
(n - m) / (N - t)
where t is the number of elements visited so far, and m is the number of elements chosen so far.
It's not at all obvious that this will give a uniform distribution of chosen elements, but apparently it does. The proof is left as an exercise to the reader; see Exercise 3 of this section.
Given this algorithm, it's pretty straightforward to implement it in "conventional" Java by looping over the collection and adding to the result list based on the random test. The OP asked about using streams, so here's a shot at that.
Algorithm S doesn't lend itself obviously to Java stream operations. It's described entirely sequentially, and the decision about whether to select the current element depends on a random decision plus state derived from all previous decisions. That might make it seem inherently sequential, but I've been wrong about that before. I'll just say that it's not immediately obvious how to make this algorithm run in parallel.
There is a way to adapt this algorithm to streams, though. What we need is a stateful predicate. This predicate will return a random result based on a probability determined by the current state, and the state will be updated -- yes, mutated -- based on this random result. This seems hard to run in parallel, but at least it's easy to make thread-safe in case it's run from a parallel stream: just make it synchronized. It'll degrade to running sequentially if the stream is parallel, though.
The implementation is pretty straightforward. Knuth's description uses random numbers between 0 and 1, but the Java Random class lets us choose a random integer within a half-open interval. Thus all we need to do is keep counters of how many elements are left to visit and how many are left to choose, et voila:
/**
* A stateful predicate that, given a total number
* of items and the number to choose, will return 'true'
* the chosen number of times distributed randomly
* across the total number of calls to its test() method.
*/
static class Selector implements Predicate<Object> {
int total; // total number items remaining
int remain; // number of items remaining to select
Random random = new Random();
Selector(int total, int remain) {
this.total = total;
this.remain = remain;
}
@Override
public synchronized boolean test(Object o) {
assert total > 0;
if (random.nextInt(total--) < remain) {
remain--;
return true;
} else {
return false;
}
}
}
Now that we have our predicate, it's easy to use in a stream:
static <E> List<E> randomSelectN(Collection<? extends E> coll, int n) {
assert n <= coll.size();
return coll.stream()
.filter(new Selector(coll.size(), n))
.collect(toList());
}
An alternative also mentioned in the same section of Knuth suggests choosing an element at random with a constant probability of n / N. This is useful if you don't need to choose exactly n elements. It'll choose n elements on average, but of course there will be some variation. If this is acceptable, the stateful predicate becomes much simpler. Instead of writing a whole class, we can simply create the random state and capture it from a local variable:
/**
* Returns a predicate that evaluates to true with a probability
* of toChoose/total.
*/
static Predicate<Object> randomPredicate(int total, int toChoose) {
Random random = new Random();
return obj -> random.nextInt(total) < toChoose;
}
To use this, replace the filter line in the stream pipeline above with
.filter(randomPredicate(coll.size(), n))
Finally, for comparison purposes, here's an implementation of the selection algorithm written using conventional Java, that is, using a for-loop and adding to a collection:
static <E> List<E> conventionalSelectN(Collection<? extends E> coll, int remain) {
assert remain <= coll.size();
int total = coll.size();
List<E> result = new ArrayList<>(remain);
Random random = new Random();
for (E e : coll) {
if (random.nextInt(total--) < remain) {
remain--;
result.add(e);
}
}
return result;
}
This is quite straightforward, and there's nothing really wrong with this. It's simpler and more self-contained than the stream approach. Still, the streams approach illustrates some interesting techniques that might be useful in other contexts.
Reference:
Knuth, Donald E. The Art of Computer Programming: Volume 2, Seminumerical Algorithms, 2nd edition. Copyright 1981, 1969 Addison-Wesley.
You could always create a "dumb" comparator that orders the elements randomly. Calling distinct() ensures that the elements are unique (the source may contain duplicates).
Something like this:
static List<Integer> nDistinct(Collection<Integer> queue, int n) {
    final Random rand = new Random();
    // assign each element one fixed random key; returning a fresh random value on
    // every comparison would violate the Comparator contract and can make sort throw
    final Map<Integer, Integer> randomKeys = new HashMap<>();
    return queue.stream()
        .distinct()
        .sorted(Comparator.comparingInt(a -> randomKeys.computeIfAbsent(a, k -> rand.nextInt())))
        .limit(n)
        .collect(Collectors.toList());
}
However, I'm not sure it will be more efficient than putting the elements in a list, shuffling it, and returning a sublist.
static List<Integer> nDistinct(Collection<Integer> queue, int n) {
List<Integer> list = new ArrayList<>(queue);
Collections.shuffle(list);
return list.subList(0, n);
}
Oh, and it's probably semantically better to return a Set instead of a List, since the elements are distinct. The methods are also written for Integers, but there's no difficulty in making them generic (see the sketch below). :)
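A generic version of the shuffle variant might look like this (a sketch; the de-duplication step is an added assumption, since a general Collection may contain duplicates):
static <T> Set<T> nDistinct(Collection<? extends T> coll, int n) {
    // de-duplicate first so the first n entries are guaranteed distinct;
    // assumes n is at most the number of distinct elements
    List<T> list = new ArrayList<>(new LinkedHashSet<>(coll));
    Collections.shuffle(list);
    return new LinkedHashSet<>(list.subList(0, n));
}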
Just as a note, the Stream API looks like a toolbox we could use for everything; however, that's not always the case. As you can see, the second method is more readable (IMO), probably more efficient, and isn't any more code (it's even less!).
As an addendum to the shuffle approach of the accepted answer:
If you want to select only a few items from a large list and want to avoid the overhead of shuffling the entire list you can solve the task as follows:
public static <T> List<T> getRandom(List<T> source, int num) {
Random r=new Random();
for(int i=0; i<num; i++)
Collections.swap(source, i, i+r.nextInt(source.size()-i));
return source.subList(0, num);
}
What it does is very similar to what shuffle does, but it reduces the work to placing only num random elements rather than source.size() random elements…
You can use limit to solve your problem.
http://docs.oracle.com/javase/8/docs/api/java/util/stream/Stream.html#limit-long-
Collections.shuffle(collection);
int howManyDoYouWant = 10;
List<Integer> smallerCollection = collection
.stream()
.limit(howManyDoYouWant)
.collect(Collectors.toList());
List<Integer> collection = Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9, 10);
int n = 4;
Random random = ThreadLocalRandom.current();
random.ints(0, collection.size())
.distinct()
.limit(n)
.mapToObj(collection::get)
.forEach(System.out::println);
This will of course have the overhead of the intermediate set of indexes and it will hang forever if n > collection.size().
If you want to avoid any non-constant overhead, you'll have to make a stateful Predicate.
It should be clear that streaming the collection is not what you want.
Use the generate() and limit() methods:
Random random = new Random();
Stream.generate(() -> list.get(random.nextInt(list.size()))).limit(3).forEach(...);
If you want to process the whole Stream without too much hassle, you can simply create your own Collector using Collectors.collectingAndThen():
public static <T> Collector<T, ?, Stream<T>> toEagerShuffledStream() {
return Collectors.collectingAndThen(
toList(),
list -> {
Collections.shuffle(list);
return list.stream();
});
}
But this won't perform well if you want to limit() the resulting Stream. In order to overcome this, one could create a custom Spliterator:
package com.pivovarit.stream;
import java.util.List;
import java.util.Random;
import java.util.Spliterator;
import java.util.function.Consumer;
import java.util.function.Supplier;
public class ImprovedRandomSpliterator<T> implements Spliterator<T> {
private final Random random;
private final T[] source;
private int size;
ImprovedRandomSpliterator(List<T> source, Supplier<? extends Random> random) {
if (source.isEmpty()) {
throw new IllegalArgumentException("RandomSpliterator can't be initialized with an empty collection");
}
this.source = (T[]) source.toArray();
this.random = random.get();
this.size = this.source.length;
}
@Override
public boolean tryAdvance(Consumer<? super T> action) {
    if (size <= 0) {
        return false; // exhausted
    }
    int nextIdx = random.nextInt(size);
    action.accept(source[nextIdx]);
    source[nextIdx] = source[--size]; // move the last live element into the hole
    source[size] = null;              // let the consumed object be GCed
    return true;
}
@Override
public Spliterator<T> trySplit() {
return null;
}
@Override
public long estimateSize() {
    return size; // elements remaining, so the SIZED report stays accurate
}
@Override
public int characteristics() {
return SIZED;
}
}
and then:
public final class RandomCollectors {
private RandomCollectors() {
}
public static <T> Collector<T, ?, Stream<T>> toImprovedLazyShuffledStream() {
return Collectors.collectingAndThen(
toCollection(ArrayList::new),
list -> !list.isEmpty()
? StreamSupport.stream(new ImprovedRandomSpliterator<>(list, Random::new), false)
: Stream.empty());
}
public static <T> Collector<T, ?, Stream<T>> toEagerShuffledStream() {
return Collectors.collectingAndThen(
toCollection(ArrayList::new),
list -> {
Collections.shuffle(list);
return list.stream();
});
}
}
And then you could use it like:
stream
.collect(toImprovedLazyShuffledStream()) // or toEagerShuffledStream() depending on the use case
.distinct()
.limit(42)
.forEach( ... );
A detailed explanation can be found here.
If you want a random sample of elements from a stream, a lazy alternative to shuffling might be a filter based on the uniform distribution:
// If you don't know ntotal, just use an estimated 0-1 ratio
double relativeSize = (double) nsample / ntotal; // cast, or integer division yields 0
Stream.of(...) // or any other stream
    .parallel() // can work in parallel
    .filter(e -> Math.random() < relativeSize)
    // or any other stream operation
    .forEach(e -> System.out.println("I've got: " + e));