So I was trying to order a Map, and I learned there is a thing in Java called Comparator that helps me do it, but for some reason there appears to be no error, yet the program doesn't work... It appears to run into an infinite cycle, but I don't know where.
public long[] top_most_active(TCD_community com, int N) {
    int i = 0;
    long[] array = new long[N];
    Arrays.fill(array, 0);
    List<User> l = new ArrayList<>();
    l = topX(N, com);
    for (User u : l) {
        array[i] = u.getid();
        System.out.println(u.getid());
        i++;
    }
    return array;
}
I want to order everything and pick out into my list the top N users with the most posts!
public List<User> topX(int x, TCD_community com) {
    List<User> l = new ArrayList<>();
    Map<Integer, User> m = com.getUsers();
    for (Map.Entry<Integer, User> entrada : m.entrySet()) {
        User u = entrada.getValue();
        l.add(u);
        l.sort(new Comparator<User>() {
            public int compare(User u1, User u2) {
                if (u1.getpost_count() == u2.getpost_count()) {
                    return 0;
                } else if (u1.getpost_count() > u2.getpost_count()) {
                    return -1;
                } else {
                    return 1;
                }
            }
        });
    }
    return l.subList(0, x);
}
You're overcomplicating things. Consider the following:
The top_most_active method becomes:
public long[] top_most_active(TCD_community com, int N) {
    long[] result = topX(N, com).stream()
            .mapToLong(User::getid)
            .peek(id -> System.out.println(id))
            .toArray();
    return result;
}
then the topX method becomes:
public List<User> topX(int x, TCD_community com) {
    return com.getUsers()
            .values()
            .stream()
            .sorted(Comparator.comparingLong(User::getpost_count).reversed())
            .limit(x)
            .collect(Collectors.toCollection(ArrayList::new));
}
This fixes the issue of you trying to sort the accumulating list in each iteration of the loop as well as removing most of the redundancy and boilerplate code you currently have.
You are trying to sort your list after adding each entry to the list.
You should simply collect all the users into a new list, and afterwards sort the list.
List<User> l = new ArrayList<>(com.getUsers().values());
l.sort( ... your comparator ...);
return l.subList(0, x);
Okay, so it seems you don't want to order a Map, you want to order a list.
What I think your code does now is a number of things: (1) It deals with a Map for some reason. I guess it's like Python enumerate(your_list) giving your indices, but that's unnecessary. (2) It also seemingly re-sorts the list at every single add. You don't need to do that.
Simply get all your elements and sort them later in one go. Getting things is a relatively simple task. Comparing and ordering them is relatively expensive, so you want to minimise the times you have to do that.
Be that as it may: to order your list, just implement a Comparator and invoke a sorting method with it. Doing this is easy now, since we have both natural ordering on int (and Integer, long, Long, etc.) and easy comparator creation via Comparator#comparing.
Get your list of users. Then,
Map<Integer, User> map = magicSupplier();
List<User> users = new ArrayList<>(map.values());
Comparator<User> comparator = Comparator.comparing(User::postCount);
Collections.sort(users, comparator.reversed()); // for most-to-least
With the assumption that your User object has a method postCount returning a naturally ordered integer or long that counts the number of posts. I think you call it something different.
Then do your sublisting and return.
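Putting the sort-then-sublist steps together, here is a minimal self-contained sketch. The `User` type below is a hypothetical stand-in (just an id and a post count), since the real `User`/`TCD_community` classes aren't shown:

```java
import java.util.*;

class TopUsers {
    // Minimal stand-in for the real User class: just an id and a post count.
    static class User {
        final long id;
        final long postCount;
        User(long id, long postCount) { this.id = id; this.postCount = postCount; }
        long postCount() { return postCount; }
    }

    // Collect the map's values once, sort once (most posts first), then sublist.
    static List<User> topX(Map<Integer, User> users, int x) {
        List<User> list = new ArrayList<>(users.values());
        list.sort(Comparator.comparingLong(User::postCount).reversed());
        return list.subList(0, Math.min(x, list.size()));
    }
}
```

The comparator runs exactly once, after all values have been collected, which avoids the repeated sorting inside the original loop.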
I have a map which consists of 3 lists as follows.
User user = new User();
user.setId(1);
// more user creation
List<User> usersOne = Arrays.asList(user1, user2, user3);
// more lists created
// This is the map with 3 lists. Adding data to map
Map<String, List<User>> map = new HashMap<>();
map.put("key1", usersOne);
map.put("key2", usersTwo);
map.put("key3", usersThree);
Above is just to show example. This map is constructed this way and coming from a rest call.
Is there a way I could loop over those lists for all 3 keys and add per item to a new list?
Meaning like this.
List<User> data = new ArrayList<>();
List<User> list1 = map.get("key1");
List<User> list2 = map.get("key2");
List<User> list3 = map.get("key3");
data.add(list1.get(0));
data.add(list2.get(0));
data.add(list3.get(0));
// and then
data.add(list1.get(1));
data.add(list2.get(1));
data.add(list3.get(1));
and so on. Or a better way to do it. Ultimately I'm looking to get a new list of the users by taking them in this manner: one from each list, then move on to the next index.
Note that the 3 lists are not of same length.
Was looking to see if I could achieve it via something like the following.
But looks expensive. Could I get some advice on how I could achieve this?
for (User list1User : list1) {
for (User list2User : list2) {
for (User list3User: list3) {
// write logic in here since I now do have access to all 3 lists.
// but is expensive plus also going to run into issues since the length is not the same for all 3 lists.
}
}
}
A simple solution for your problem is to use an index.
int size1 = list1.size();
int size2 = list2.size();
int size3 = list3.size();
//Choose maxSize of (size1,size2,size3)
int maxSize = Math.max(size1,size2);
maxSize = Math.max(maxSize,size3);
then use a single loop, adding only under the condition i < listSize:
for (int i = 0; i < maxSize; i++) {
    if (i < size1) data.add(list1.get(i));
    if (i < size2) data.add(list2.get(i));
    if (i < size3) data.add(list3.get(i));
}
In case you have more than 3 lists in the map:
Collection<List<Object>> allValues = map.values();
int maxSize = allValues.stream().map(list -> list.size()).max(Integer::compare).get();
List<Object> data = new ArrayList<>();
for (int i = 0; i < maxSize; i++) {
    for (List<Object> list : allValues) {
        if (i < list.size()) data.add(list.get(i));
    }
}
PS: you might find typos, because I wrote this in Notepad++ rather than an IDE.
Not very simple but very concise and pragmatic would be to extract the lists from the map, flatten them and then collect them. Since Java 8 we can utilize streams for that.
List<User> data = map.values().stream()
.flatMap(users -> users.stream())
.collect(Collectors.toList());
With .values() you get a Collection<List<User>> of your user lists; I assume you don't need the keys.
We use .flatMap() instead of .map() because we want to pull the users out of each list and gather them all into a single stream.
The last method call, .collect(Collectors.toList()), collects all elements from the stream and puts them into a List (the exact kind of list is unspecified).
In this case you preserve multiple occurrences of the same user, which you wouldn't if you collected them into a set with .collect(Collectors.toSet()) — unless they are really different (unequal) objects.
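As a small illustration of that difference (a sketch with plain integers standing in for users): flattening the same map into a list keeps the duplicate element, while collecting into a set drops it:

```java
import java.util.*;
import java.util.stream.*;

class FlattenDemo {
    // Flatten all the map's lists into one list; duplicates survive.
    static List<Integer> flatten(Map<String, List<Integer>> map) {
        return map.values().stream()
                .flatMap(List::stream)
                .collect(Collectors.toList());
    }

    // Same, but collecting into a set collapses equal elements.
    static Set<Integer> flattenDistinct(Map<String, List<Integer>> map) {
        return map.values().stream()
                .flatMap(List::stream)
                .collect(Collectors.toSet());
    }
}
```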
I am trying to find the integer that appears an odd number of times, but somehow the tests on qualified.io are not passing. Maybe there is something wrong with my logic?
The problem is that in an array [5,1,1,5,2,2,5] the number 5 appears 3 times, therefore the answer is 5. The method signature wants me to use List<>. So my code is below.
public static List<Integer> findOdd(List<Integer> integers) {
    int temp = integers.size();
    if (integers.size() % 2 == 0) {
        // Do something here.
    }
    return integers;
}
I need to understand a couple of things. What is the best way to check all the elements inside the integers list, iterating over it to see if a matching element is present, and if so, return that element?
If you are allowed to use java 8, you can use streams and collectors for this:
Map<Integer, Long> collect = list.stream()
.collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
Given a list of integers, this code will generate a map where the key is the actual number and the value is its number of repetitions.
You then just have to iterate through the map and pick out what you are interested in.
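For example, the whole thing can be put together like this (a sketch using the sample array from the question):

```java
import java.util.*;
import java.util.function.Function;
import java.util.stream.*;

class OddFinder {
    static List<Integer> findOdd(List<Integer> integers) {
        // Build the frequency map: number -> how often it occurs.
        Map<Integer, Long> counts = integers.stream()
                .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
        // Keep only the numbers whose count is odd.
        return counts.entrySet().stream()
                .filter(e -> e.getValue() % 2 == 1)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }
}
```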
You want to set up a data structure that will let you count every integer that appears in the list. Then iterate through your list and do the counting. When you're done, check your data structure for all integers that occur an odd number of times and add them to your list to return.
Something like:
public static List<Integer> findOdd(List<Integer> integers) {
    Map<Integer, MutableInt> occurrences = new HashMap<>(); // Count occurrences of each integer
    for (Integer i : integers) {
        if (occurrences.containsKey(i)) {
            occurrences.get(i).increment();
        } else {
            occurrences.put(i, new MutableInt(1));
        }
    }
    List<Integer> answer = new ArrayList<>();
    for (Integer i : occurrences.keySet()) {
        if ((occurrences.get(i).intValue() % 2) == 1) { // It's odd
            answer.add(i);
        }
    }
    return answer;
}
MutableInt is an Apache Commons class. You can do it with plain Integers, but you have to replace the value each time.
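With plain Integers, `Map.merge` keeps that replacement to a single line, so the Apache Commons dependency isn't strictly needed (a sketch of just the counting half):

```java
import java.util.*;

class CountWithMerge {
    static Map<Integer, Integer> countOccurrences(List<Integer> integers) {
        Map<Integer, Integer> occurrences = new HashMap<>();
        for (Integer i : integers) {
            // Put 1 on the first occurrence, otherwise add 1 to the existing count.
            occurrences.merge(i, 1, Integer::sum);
        }
        return occurrences;
    }
}
```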
If you've encountered streams before you can change the second half of the answer above (the odd number check) to something like:
return occurrences.entrySet().stream()
        .filter(e -> e.getValue().intValue() % 2 == 1)
        .map(Map.Entry::getKey)
        .collect(Collectors.toList());
Note: I haven't compiled any of this myself so you may need to tweak it a bit.
int findOdd(int[] nums) {
    Map<Integer, Boolean> evenNumbers = new HashMap<>();
    for (int num : nums) {
        Boolean status = evenNumbers.get(num);
        if (status == null) {
            evenNumbers.put(num, false);
        } else {
            evenNumbers.put(num, !status);
        }
    }
    // Map holds true for all values with even occurrences
    for (Integer key : evenNumbers.keySet()) {
        if (!evenNumbers.get(key)) {
            return key;
        }
    }
    throw new IllegalArgumentException("no value occurs an odd number of times");
}
You could use the reduce method of IntStream (this relies on exactly one number appearing an odd number of times).
Example:
stream(ints).reduce(0, (x, y) -> x ^ y);
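Spelled out as a complete method: every value with an even count cancels itself out under XOR, so only the odd-count value remains. Again, this only works when exactly one value occurs an odd number of times:

```java
import java.util.stream.IntStream;

class XorOdd {
    static int findOdd(int[] nums) {
        // x ^ x == 0 and x ^ 0 == x, so paired values cancel out.
        return IntStream.of(nums).reduce(0, (x, y) -> x ^ y);
    }
}
```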
I am having a hard time understanding the right syntax for sorting Maps whose values aren't simply one type but can be nested again.
I'll try to come up with a fitting example here:
Let's make a random class for that first:
class NestedFoo{
int valA;
int valB;
String textA;
public NestedFoo(int a, int b, String t){
this.valA = a;
this.valB = b;
this.textA = t;
}
}
Alright, that is our class.
Here comes the list:
HashMap<Integer, ArrayList<NestedFoo>> sortmePlz = new HashMap<>();
Let's create 3 entries to start with, that should show sorting works already.
ArrayList<NestedFoo> l1 = new ArrayList<>();
NestedFoo n1 = new NestedFoo(3, 2, "a");
NestedFoo n2 = new NestedFoo(2, 2, "a");
NestedFoo n3 = new NestedFoo(1, 4, "c");
l1.add(n1);
l1.add(n2);
l1.add(n3);
ArrayList<NestedFoo> l2 = new ArrayList<>();
n1 = new NestedFoo(3, 2, "a");
n2 = new NestedFoo(2, 2, "a");
n3 = new NestedFoo(2, 2, "b");
NestedFoo n4 = new NestedFoo(1, 4, "c");
l2.add(n1);
l2.add(n2);
l2.add(n3);
l2.add(n4);
ArrayList<NestedFoo> l3 = new ArrayList<>();
n1 = new NestedFoo(3, 2, "a");
n2 = new NestedFoo(2, 3, "b");
n3 = new NestedFoo(2, 2, "b");
n4 = new NestedFoo(5, 4, "c");
l3.add(n1);
l3.add(n2);
l3.add(n3);
l3.add(n4);
Sweet, now put them in our Map.
sortmePlz.put(5,l1);
sortmePlz.put(2,l2);
sortmePlz.put(1,l3);
What I want now is to sort the entire Map first by its keys, so the order should be l3, l2, l1.
Then, I want the lists inside each key to be sorted by the following Order:
intA,intB,text (all ascending)
I have no idea how to do this, especially not since Java 8 with all those lambdas; I tried to read up on the subject but felt overwhelmed by the code there.
Thanks in advance!
I hope the code has no syntactical errors; I made it up as I went.
You can use a TreeMap instead of a regular HashMap and your entries will be automatically sorted by key:
Map<Integer, ArrayList<NestedFoo>> sortmePlz = new TreeMap<>();
The second step I'm a little confused about:
to be sorted by the following Order: intA,intB,text (all ascending)
I suppose you want to sort each list by comparing the intA values first, then, if they are equal, comparing by intB, and so on. If I understand you correctly, you can use a Comparator with comparing and thenComparing.
sortmePlz.values().forEach(list -> list
.sort(Comparator.comparing(NestedFoo::getValA)
.thenComparing(NestedFoo::getValB)
.thenComparing(NestedFoo::getTextA)));
I'm sure there are ways of doing it with lambdas, but that is not actually required. See the answer from Schidu Luca for a lambda-like solution.
Keep reading if you want an 'old school solution'.
You cannot sort a map. It does not make sense, because there is no notion of order in a map. Now, there are some map implementations that store their keys in sorted order (like the TreeMap).
You can order a list. In your case, make the class NestedFoo comparable (https://docs.oracle.com/javase/8/docs/api/java/lang/Comparable.html). Then you can invoke the method Collections.sort (https://docs.oracle.com/javase/8/docs/api/java/util/Collections.html#sort-java.util.List-) on your lists.
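A sketch of what that could look like, assuming the ordering asked for (valA, then valB, then textA, all ascending):

```java
class NestedFoo implements Comparable<NestedFoo> {
    int valA;
    int valB;
    String textA;

    NestedFoo(int a, int b, String t) {
        this.valA = a;
        this.valB = b;
        this.textA = t;
    }

    @Override
    public int compareTo(NestedFoo other) {
        // Compare by valA first, then valB, then textA.
        int byA = Integer.compare(this.valA, other.valA);
        if (byA != 0) return byA;
        int byB = Integer.compare(this.valB, other.valB);
        if (byB != 0) return byB;
        return this.textA.compareTo(other.textA);
    }
}
```

With that in place, Collections.sort(list) orders the list without any explicit comparator.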
Use TreeMap instead of HashMap, it solves the 1st problem: ordering entries by key.
After getting the needed list from the Map, you can sort the ArrayList by valA, valB, text:
l1.sort(Comparator.comparing(NestedFoo::getValA)
        .thenComparing(NestedFoo::getValB)
        .thenComparing(NestedFoo::getTextA));
And change your NestedFoo class definition like this:
class NestedFoo {
    int valA;
    int valB;
    String textA;

    public NestedFoo(int a, int b, String t) {
        this.valA = a;
        this.valB = b;
        this.textA = t;
    }

    public int getValA() {
        return valA;
    }

    public void setValA(int valA) {
        this.valA = valA;
    }

    public int getValB() {
        return valB;
    }

    public void setValB(int valB) {
        this.valB = valB;
    }

    public String getTextA() {
        return textA;
    }

    public void setTextA(String textA) {
        this.textA = textA;
    }
}
When using a TreeMap for sorting, keep in mind that a TreeMap uses compareTo (or the supplied Comparator) instead of equals to sort keys and to detect duplicates. compareTo should be consistent with equals and hashCode when implemented for any class that will be used as a key. You can find a detailed example at https://codingninjaonline.com/2017/09/29/unexpected-results-for-treemap-with-inconsistent-compareto-and-equals/
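A quick way to see this effect is a comparator that is inconsistent with equals, such as case-insensitive ordering of Strings: "a" and "A" are unequal, yet the TreeMap treats them as one key:

```java
import java.util.*;

class TreeMapPitfall {
    static int distinctKeys() {
        // "a".equals("A") is false, but CASE_INSENSITIVE_ORDER compares them as equal,
        // so the second put overwrites the first entry instead of adding a new one.
        Map<String, Integer> map = new TreeMap<>(String.CASE_INSENSITIVE_ORDER);
        map.put("a", 1);
        map.put("A", 2);
        return map.size();
    }
}
```

A HashMap, which goes by equals/hashCode, would report two entries here.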
I'm attempting to retrieve n unique random elements for further processing from a Collection using the Streams API in Java 8, however, without much or any luck.
More precisely I'd want something like this:
Set<Integer> subList = new HashSet<>();
Queue<Integer> collection = new PriorityQueue<>();
collection.addAll(Arrays.asList(1,2,3,4,5,6,7,8,9));
Random random = new Random();
int n = 4;
while (subList.size() < n) {
subList.add(collection.get(random.nextInt()));
}
subList.forEach(v -> v.doSomethingFancy());
I want to do it as efficiently as possible.
Can this be done?
edit: My second attempt -- although not exactly what I was aiming for:
List<Integer> sublist = new ArrayList<>(collection);
Collections.shuffle(sublist);
sublist.stream().limit(n).forEach(v -> v.doSomethingFancy());
edit: Third attempt (inspired by Holger), which removes a lot of the overhead of shuffling when coll.size() is huge and n is small:
int n = // unique element count
List<Integer> sublist = new ArrayList<>(collection);
Random r = new Random();
for (int i = 0; i < n; i++)
    Collections.swap(sublist, i, i + r.nextInt(sublist.size() - i));
sublist.stream().limit(n).forEach(v -> v.doSomethingFancy());
The shuffling approach works reasonably well, as suggested by fge in a comment and by ZouZou in another answer. Here's a generified version of the shuffling approach:
static <E> List<E> shuffleSelectN(Collection<? extends E> coll, int n) {
assert n <= coll.size();
List<E> list = new ArrayList<>(coll);
Collections.shuffle(list);
return list.subList(0, n);
}
I'll note that using subList is preferable to getting a stream and then calling limit(n), as shown in some other answers, because the resulting stream has a known size and can be split more efficiently.
The shuffling approach has a couple of disadvantages. It needs to copy out all the elements, and then it needs to shuffle all of them. This can be quite expensive if the total number of elements is large and the number of elements to be chosen is small.
An approach suggested by the OP and by a couple of other answers is to choose elements at random, rejecting duplicates, until the desired number of unique elements has been chosen. This works well if the number of elements to choose is small relative to the total, but as the number to choose rises, it slows down quite a bit because the likelihood of choosing duplicates rises as well.
Wouldn't it be nice if there were a way to make a single pass over the space of input elements and choose exactly the number wanted, with the choices made uniformly at random? It turns out that there is, and as usual, the answer can be found in Knuth. See TAOCP Vol 2, sec 3.4.2, Random Sampling and Shuffling, Algorithm S.
Briefly, the algorithm is to visit each element and decide whether to choose it based on the number of elements visited and the number of elements chosen. In Knuth's notation, suppose you have N elements and you want to choose n of them at random. The next element should be chosen with probability
(n - m) / (N - t)
where t is the number of elements visited so far, and m is the number of elements chosen so far.
It's not at all obvious that this will give a uniform distribution of chosen elements, but apparently it does. The proof is left as an exercise to the reader; see Exercise 3 of this section.
Given this algorithm, it's pretty straightforward to implement it in "conventional" Java by looping over the collection and adding to the result list based on the random test. The OP asked about using streams, so here's a shot at that.
Algorithm S doesn't lend itself obviously to Java stream operations. It's described entirely sequentially, and the decision about whether to select the current element depends on a random decision plus state derived from all previous decisions. That might make it seem inherently sequential, but I've been wrong about that before. I'll just say that it's not immediately obvious how to make this algorithm run in parallel.
There is a way to adapt this algorithm to streams, though. What we need is a stateful predicate. This predicate will return a random result based on a probability determined by the current state, and the state will be updated -- yes, mutated -- based on this random result. This seems hard to run in parallel, but at least it's easy to make thread-safe in case it's run from a parallel stream: just make it synchronized. It'll degrade to running sequentially if the stream is parallel, though.
The implementation is pretty straightforward. Knuth's description uses random numbers between 0 and 1, but the Java Random class lets us choose a random integer within a half-open interval. Thus all we need to do is keep counters of how many elements are left to visit and how many are left to choose, et voila:
/**
 * A stateful predicate that, given a total number
 * of items and the number to choose, will return 'true'
 * the chosen number of times, distributed randomly
 * across the total number of calls to its test() method.
 */
static class Selector implements Predicate<Object> {
    int total;  // total number of items remaining
    int remain; // number of items remaining to select
    Random random = new Random();

    Selector(int total, int remain) {
        this.total = total;
        this.remain = remain;
    }

    @Override
    public synchronized boolean test(Object o) {
        assert total > 0;
        if (random.nextInt(total--) < remain) {
            remain--;
            return true;
        } else {
            return false;
        }
    }
}
Now that we have our predicate, it's easy to use in a stream:
static <E> List<E> randomSelectN(Collection<? extends E> coll, int n) {
assert n <= coll.size();
return coll.stream()
.filter(new Selector(coll.size(), n))
.collect(toList());
}
An alternative also mentioned in the same section of Knuth suggests choosing an element at random with a constant probability of n / N. This is useful if you don't need to choose exactly n elements. It'll choose n elements on average, but of course there will be some variation. If this is acceptable, the stateful predicate becomes much simpler. Instead of writing a whole class, we can simply create the random state and capture it from a local variable:
/**
* Returns a predicate that evaluates to true with a probability
* of toChoose/total.
*/
static Predicate<Object> randomPredicate(int total, int toChoose) {
Random random = new Random();
return obj -> random.nextInt(total) < toChoose;
}
To use this, replace the filter line in the stream pipeline above with
.filter(randomPredicate(coll.size(), n))
Finally, for comparison purposes, here's an implementation of the selection algorithm written using conventional Java, that is, using a for-loop and adding to a collection:
static <E> List<E> conventionalSelectN(Collection<? extends E> coll, int remain) {
    assert remain <= coll.size();
    int total = coll.size();
    List<E> result = new ArrayList<>(remain);
    Random random = new Random();
    for (E e : coll) {
        if (random.nextInt(total--) < remain) {
            remain--;
            result.add(e);
        }
    }
    return result;
}
This is quite straightforward, and there's nothing really wrong with this. It's simpler and more self-contained than the stream approach. Still, the streams approach illustrates some interesting techniques that might be useful in other contexts.
Reference:
Knuth, Donald E. The Art of Computer Programming: Volume 2, Seminumerical Algorithms, 2nd edition. Copyright 1981, 1969 Addison-Wesley.
You could always create a "dumb" comparator that compares elements in the list randomly. Calling distinct() will ensure that the elements are unique (from the queue).
Something like this:
static List<Integer> nDistinct(Collection<Integer> queue, int n) {
final Random rand = new Random();
return queue.stream()
.distinct()
.sorted(Comparator.comparingInt(a -> rand.nextInt()))
.limit(n)
.collect(Collectors.toList());
}
However, I'm not sure it will be more efficient than putting the elements in a list, shuffling it and returning a sublist.
static List<Integer> nDistinct(Collection<Integer> queue, int n) {
List<Integer> list = new ArrayList<>(queue);
Collections.shuffle(list);
return list.subList(0, n);
}
Oh, and it's probably semantically better to return a Set instead of a List, since the elements are distinct. The methods are also written to take Integers, but there's no difficulty in making them generic. :)
Just as a note: the Stream API looks like a toolbox we could use for everything, but that's not always the case. As you can see, the second method is more readable (IMO), probably more efficient, and doesn't take any more code (even less!).
As an addendum to the shuffle approach of the accepted answer:
If you want to select only a few items from a large list and want to avoid the overhead of shuffling the entire list you can solve the task as follows:
public static <T> List<T> getRandom(List<T> source, int num) {
Random r=new Random();
for(int i=0; i<num; i++)
Collections.swap(source, i, i+r.nextInt(source.size()-i));
return source.subList(0, num);
}
What it does is very similar to what shuffle does, but it reduces its action to having only num random elements rather than source.size() random elements…
You can use limit to solve your problem.
http://docs.oracle.com/javase/8/docs/api/java/util/stream/Stream.html#limit-long-
Collections.shuffle(collection);
int howManyDoYouWant = 10;
List<Integer> smallerCollection = collection
.stream()
.limit(howManyDoYouWant)
.collect(Collectors.toList());
List<Integer> collection = Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9, 10);
int n = 4;
Random random = ThreadLocalRandom.current();
random.ints(0, collection.size())
.distinct()
.limit(n)
.mapToObj(collection::get)
.forEach(System.out::println);
This will of course have the overhead of the intermediate set of indexes, and it will hang forever if n > collection.size().
If you want to avoid any non-constant overhead, you'll have to implement a stateful Predicate.
It should be clear that streaming the collection is not what you want.
Use the generate() and limit() methods:
Stream.generate(() -> list.get(new Random().nextInt(list.size()))).limit(3).forEach(...);
If you want to process the whole Stream without too much hassle, you can simply create your own Collector using Collectors.collectingAndThen():
public static <T> Collector<T, ?, Stream<T>> toEagerShuffledStream() {
return Collectors.collectingAndThen(
toList(),
list -> {
Collections.shuffle(list);
return list.stream();
});
}
But this won't perform well if you want to limit() the resulting Stream. In order to overcome this, one could create a custom Spliterator:
package com.pivovarit.stream;
import java.util.List;
import java.util.Random;
import java.util.Spliterator;
import java.util.function.Consumer;
import java.util.function.Supplier;
public class ImprovedRandomSpliterator<T> implements Spliterator<T> {

    private final Random random;
    private final T[] source;
    private int size;

    ImprovedRandomSpliterator(List<T> source, Supplier<? extends Random> random) {
        if (source.isEmpty()) {
            throw new IllegalArgumentException("RandomSpliterator can't be initialized with an empty collection");
        }
        this.source = (T[]) source.toArray();
        this.random = random.get();
        this.size = this.source.length;
    }

    @Override
    public boolean tryAdvance(Consumer<? super T> action) {
        int nextIdx = random.nextInt(size);
        int lastIdx = size - 1;
        action.accept(source[nextIdx]);
        source[nextIdx] = source[lastIdx];
        source[lastIdx] = null; // let the object be GCed
        return --size > 0;
    }

    @Override
    public Spliterator<T> trySplit() {
        return null;
    }

    @Override
    public long estimateSize() {
        return size;
    }

    @Override
    public int characteristics() {
        return SIZED;
    }
}
and then:
public final class RandomCollectors {
private RandomCollectors() {
}
public static <T> Collector<T, ?, Stream<T>> toImprovedLazyShuffledStream() {
return Collectors.collectingAndThen(
toCollection(ArrayList::new),
list -> !list.isEmpty()
? StreamSupport.stream(new ImprovedRandomSpliterator<>(list, Random::new), false)
: Stream.empty());
}
public static <T> Collector<T, ?, Stream<T>> toEagerShuffledStream() {
return Collectors.collectingAndThen(
toCollection(ArrayList::new),
list -> {
Collections.shuffle(list);
return list.stream();
});
}
}
And then you could use it like:
stream
.collect(toImprovedLazyShuffledStream()) // or toEagerShuffledStream() depending on the use case
.distinct()
.limit(42)
.forEach( ... );
A detailed explanation can be found here.
If you want a random sample of elements from a stream, a lazy alternative to shuffling might be a filter based on the uniform distribution:
...
import org.apache.commons.lang3.RandomUtils;

// If you don't know ntotal, just use a ratio between 0 and 1
double relativeSize = (double) nsample / ntotal;

Stream.of(...) // or any other stream
    .parallel() // can work in parallel
    .filter(e -> Math.random() < relativeSize)
    // or any other stream operation
    .forEach(e -> System.out.println("I've got: " + e));
Put simply, my question is:
What is the most efficient way to retrieve a random discontinuous sublist of an ArrayList with a given size.
The following is my own design, which works but seems a little clunky to me. It is my first Java program, by the way, so please excuse me if my code or question is not according to best practice ;)
Remarks:
- my list does not contain duplicates
- I guessed that it might be faster to remove items instead of adding them if the AimSize is more than half of the size of the original list
public ArrayList<Vokabel> subList(int AimSize) {
    ArrayList<Vokabel> tempL = new ArrayList<Vokabel>();
    Random r = new Random();
    LinkedHashSet<Vokabel> tempS = new LinkedHashSet<Vokabel>();
    tempL = originalList;
    // If the size is too big, just leave the list and change the size
    // (in the real code there is no pass-by-value problem ;)
    if (!(tempL.size() > AimSize)) {
        AimSize = tempL.size();
    // set to avoid duplicates and get a random order
    } else if (2 * AimSize < tempL.size()) {
        while (tempS.size() < AimSize) {
            tempS.add(tempL.get(r.nextInt(tempL.size())));
        }
        tempL = new ArrayList<Vokabel>(tempS);
    // little optimization if it takes fewer loops
    // to delete entries to get to the right size than to add them;
    // the List->Set->List conversion at the end is there to reorder the items
    } else {
        while (tempL.size() > AimSize) {
            tempL.remove(r.nextInt(tempL.size()));
        }
        tempL = new ArrayList<Vokabel>(new LinkedHashSet<Vokabel>(tempL));
    }
    return tempL;
}
Warning: This is coded in my browser. It may not even compile!
Using Collections.shuffle and List.subList will do the job.
public static <T> List<T> randomSubList(List<T> list, int newSize) {
list = new ArrayList<>(list);
Collections.shuffle(list);
return list.subList(0, newSize);
}
Another solution using Streams of Java 8 would be this
public static <T> List<T> selectRandomElements(List<T> list, int amount) {
if( amount > list.size())
throw new IllegalArgumentException("amount can't be bigger than list size");
return Stream.generate( () -> list.get( (int) ( Math.random() * list.size() ) ) )
.distinct()
.limit( amount )
.collect( Collectors.toList() );
}