I want to sort a few variables (int) by size, with the added condition that when two are equal they should be sorted alphabetically. More specifically:
I have the following method:
public void doSomething(int carrots, int mushrooms, int salads, int tomatoes) {
//Here I want to print them in the right order
}
The printed lines should have the format: "mushrooms: [amount]"
And only lines which are not 0 should be printed.
These things I can manage on my own however.
I have tried to put them all in a List and sort them, but then it's basically impossible to map them back to their names.
I don't really know where to go from here.
Use a map:
Map<String, Integer> counts = Map.of(
"carrots", carrots,
"mushrooms", mushrooms,
"salads", salads,
"tomatoes", tomatoes
);
counts.entrySet().stream() // Stream<Map.Entry<String, Integer>>
.filter(e -> e.getValue() > 0)
.sorted(Map.Entry.<String, Integer>comparingByValue().thenComparing(Map.Entry.comparingByKey()))
.forEach(e -> System.out.printf("%s: [%d]%n", e.getKey(), e.getValue()));
I know that .<String, Integer> is ugly, but Java's type inference can't see past the first call in a method chain. The explicit type witness tells the compiler that the first method in the chain uses String and Integer as its generic type arguments.
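As an aside, here is an equivalent sketch (not part of the original answer) that avoids the explicit type witness: pulling the comparators into local variables gives the compiler a target type to infer from.
Comparator<Map.Entry<String, Integer>> byValue = Map.Entry.comparingByValue();
Comparator<Map.Entry<String, Integer>> byKey = Map.Entry.comparingByKey();

counts.entrySet().stream()
        .filter(e -> e.getValue() > 0)
        .sorted(byValue.thenComparing(byKey))
        .forEach(e -> System.out.printf("%s: [%d]%n", e.getKey(), e.getValue()));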
This is a bit of an academic approach. The other answer is a better practical approach.
Since the arguments are already in alphabetical order, and Java's sort algorithm is stable (i.e. if two values are equal, they stay in the same relative order), you can do the following:
List<Integer> amount = new ArrayList<>(List.of(carrots, mushrooms, salads, tomatoes));
List<Integer> sorted = new ArrayList<>(amount);
sorted.sort(Integer::compare);
Now you can map the items back because the order doesn't change when they're equal.
String[] names = {"carrots", "mushrooms", "salads", "tomatoes"};
for(Integer i: sorted){
if(i == 0) continue; //skip zero values.
int j = amount.indexOf(i);
System.out.println( names[j] + " : " + i );
amount.set(j, -1);
}
If the values are all unique, then it is obvious how this works. It finds the index of the value, and prints the corresponding name.
When there are duplicates, the alphabetically lowest name is printed first. The value is replaced with -1 so that subsequent calls to "indexOf" return the next valid index.
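For example, tracing a hypothetical call through the loop above:
doSomething(2, 2, 0, 5);
// amount = [2, 2, 0, 5], sorted = [0, 2, 2, 5]
// i = 0 -> skipped (zero value)
// i = 2 -> indexOf(2) = 0 -> prints "carrots : 2",   amount becomes [-1, 2, 0, 5]
// i = 2 -> indexOf(2) = 1 -> prints "mushrooms : 2", amount becomes [-1, -1, 0, 5]
// i = 5 -> indexOf(5) = 3 -> prints "tomatoes : 5"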
Related
I want to sort the following example list which currently contains only Strings with my own custom rules.
ArrayList<String> coll = new ArrayList<>();
coll.add("just");
coll.add("sdsd");
coll.add("asb");
coll.add("b as");
coll.add("just");
coll.add("dhfga");
coll.add("jusht");
coll.add("ktsa");
coll.add("just");
coll.add("just");
I know that I could write my own comparator for this, but since Java already provides comparators that solve part of this problem, I want to know how I can combine the ones from the Java API with my own.
How should it be sorted?
The word just should always be the first word to appear in the list followed by all other words in alphabetical order.
Comparator.naturalOrder() sorts the list in alphabetical order, but how can I combine this comparator with a custom one which checks whether the word is "just" or something else?
You can do it with something like this:
coll.sort(Comparator
.comparingInt((String s) -> s.equals("just") ? 0 : 1) // Words "just" first
.thenComparing(Comparator.naturalOrder())); // Then others
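With the example list from the question, a quick check (not part of the original answer) gives:
System.out.println(coll);
// [just, just, just, just, asb, b as, dhfga, jusht, ktsa, sdsd]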
You could integrate the criteria into the comparator like
coll.sort(Comparator.comparing((String s) -> !s.equals("just"))
.thenComparing(Comparator.naturalOrder()));
or you separate the operations, first moving all occurrences of "just" to the front, then sorting the remaining elements only:
int howManyJust = 0;
for(int ix = 0, num = coll.size(); ix < num; ix++)
if(coll.get(ix).equals("just") && ++howManyJust <= ix)
Collections.swap(coll, ix, howManyJust-1);
coll.subList(howManyJust, coll.size()).sort(Comparator.naturalOrder());
while this may look more complicated, it is potentially more efficient, especially for larger lists.
The first step should be to define the custom order. I would do that by using a Map.
Map<String, Integer> orderMap = new HashMap<>();
int order = 0;
for(String specialWord : yourListOfSpecialWords){
orderMap.put(specialWord, order++);
}
Now build a comparator using that map, with natural order as a fallback:
Comparator<String> comparator = ((Comparator<String>) (o1, o2) -> {
int leftScore = orderMap.getOrDefault(o1, Integer.MAX_VALUE);
int rightScore = orderMap.getOrDefault(o2, Integer.MAX_VALUE);
return Integer.compare(leftScore, rightScore);
}).thenComparing(String::compareTo);
Use this comparator to sort your list. Note: you probably want to initialize your map only once and keep it in a constant or at least in a cache.
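For example, a hypothetical usage with two assumed special words:
List<String> yourListOfSpecialWords = List.of("just", "ktsa");
// build orderMap and comparator as shown above, then:
coll.sort(comparator);
System.out.println(coll);
// [just, just, just, just, ktsa, asb, b as, dhfga, jusht, sdsd]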
But if your special case is only a single word, as your update suggests, then this is of course overkill, and you should go with one of the other answers here.
I have a HashMap of ArrayLists as follows:
HashMap<String, ArrayList<Double>> Flkn = new HashMap<String, ArrayList<Double>>();
Flkn.put("T_"+l+"_"+k+"_"+n, new ArrayList<Double>());
l, k and n take their values based on several loops and hence their values change depending on the parameters.
Under these circumstances, I want to know how, for a given value of k, the minimum and maximum values of the elements can be found across the relevant ArrayLists. (Please note that the length of the ArrayLists is also dependent on the parameters.)
For instance, let's say I want to know the minimum and maximum values within the ArrayLists for k=3. What I am looking for would then be all the ArrayLists whose key has the form ("T_"+l+"_"+3+"_"+n), for every value of l and n. The problem here is that there is no way I can predict the values of l and n, because they are totally dependent on the code. Another inconvenience is that I want to get the minimum and maximum values outside the loops where l and n get their values, hence using these variables directly isn't feasible.
What would be an efficient way to iterate over every value of l and n and fetch the values in the ArrayLists in order to find their minimum and maximum?
If you absolutely have to deal with such "smart keys", then for any kind of processing based on their parts you first need functions to extract the values of those parts:
final static Function<String, Integer> EXTRACT_K = s -> Integer.parseInt(s.replaceAll("T_\\d+_(\\d+)_\\d+", "$1"));
final static Function<String, Integer> EXTRACT_L = s -> Integer.parseInt(s.replaceAll("T_(\\d+)_\\d+_\\d+", "$1"));
final static Function<String, Integer> EXTRACT_N = s -> Integer.parseInt(s.replaceAll("T_\\d+_\\d+_(\\d+)", "$1"));
These functions, when applied to a key, return k, l or n, respectively (if anyone knows a more elegant way to do this, please comment or edit).
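For example, a quick illustrative check (assuming a key built as "T_" + l + "_" + k + "_" + n):
String key = "T_2_3_7";       // l = 2, k = 3, n = 7
int l = EXTRACT_L.apply(key); // 2
int k = EXTRACT_K.apply(key); // 3
int n = EXTRACT_N.apply(key); // 7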
To be as efficient as possible (iterating over only the relevant part of the map rather than the entire map), I suggest switching from HashMap to an implementation of SortedMap with an ordering based on the values stored in the smart key:
final static Comparator<String> CMP
= Comparator.comparing(EXTRACT_K)
.thenComparing(EXTRACT_L)
.thenComparing(EXTRACT_N);
SortedMap<String, List<Double>> map = new TreeMap<>(CMP);
This way you get a map whose entries are sorted first by k, then by l and finally by n. Now it is possible to get all lists mapped to a given k using:
int k = 1;
SortedMap<String, List<Double>> subMap
        = map.subMap(String.format("T_0_%s_0", k), String.format("T_0_%s_0", k + 1));
Collection<List<Double>> lists = subMap.values();
To get the max and min values across the items of the subMap, take the stream of its values, convert it to a DoubleStream and use .summaryStatistics() as follows:
DoubleSummaryStatistics s
= subMap.values().stream()
.flatMapToDouble(vs -> vs.stream().mapToDouble(Double::doubleValue))
.summaryStatistics();
The final part is to check whether values exist:
if (s.getCount() > 0) {
max = s.getMax();
min = s.getMin();
} else {
    // no values exist for the given k, thus max and min are undefined
}
In Java 8 you could use DoubleSummaryStatistics and do something like this:
final DoubleSummaryStatistics stats =
Flkn.entrySet().stream().filter(e -> e.getKey().matches("T_[0-9]+_" + k + "_[0-9]+"))
.flatMapToDouble(e -> e.getValue().stream().mapToDouble(Double::doubleValue))
.summaryStatistics();
System.out.println(stats.getMax());
System.out.println(stats.getMin());
filter to keep only the entries you need; flatMapToDouble to merge your lists; and summaryStatistics to get both the minimum and maximum.
I'll simplify this a bit. Suppose you have a key that depends on an Integer k and a String s. It might seem a good idea to use a
Map<String, Object>
where the keys are k + " " + s (or something similar).
This is a terrible idea because, as you have realised, you have to iterate over the entire map and use String.split in order to find entries for a particular k value. This is extremely inefficient.
One common solution is to use a Map<Integer, Map<String, Object>> instead. You can get the object associated to k = 3, s = "foo" by doing map.get(3).get("foo"). You can also get all objects associated to 3 by doing map.get(3).values().
The downside to this approach is that it is a bit cumbersome to add to the map. In Java 8 you can do
map.computeIfAbsent(3, k -> new HashMap<String, Object>()).put("foo", "bar");
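Putting the pieces together, a minimal sketch (the map variable and the "foo"/"bar" values are just placeholders):
Map<Integer, Map<String, Object>> map = new HashMap<>();
map.computeIfAbsent(3, k -> new HashMap<>()).put("foo", "bar");

Object value = map.get(3).get("foo");              // the object for k = 3, s = "foo"
Collection<Object> allForK3 = map.get(3).values(); // every object stored under k = 3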
Google Guava's Table interface takes the pain out of using a data structure like this.
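For instance, a rough sketch assuming Guava's com.google.common.collect is on the classpath:
Table<Integer, String, Object> table = HashBasedTable.create(); // row key = k, column key = s
table.put(3, "foo", "bar");

Object value = table.get(3, "foo");                  // the object for k = 3, s = "foo"
Collection<Object> allForK3 = table.row(3).values(); // every object stored under k = 3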
I have a String[] dataValues as below:
ONE:9
TWO:23
THREE:14
FOUR:132
ONE:255
TWO:727
FIVE:3
THREE:196
FOUR:1843
ONE:330
TWO:336
THREE:190
FOUR:3664
I want to total the values of ONE, TWO, THREE, FOUR, FIVE.
So I created a HashMap for the same:
Map<String, Integer> totals = new HashMap<String, Integer>();
for(String dataValue : dataValues){
String[] keyVal = dataValue.split(":");
totals.put(keyVal[0], totals.get(keyVal[0]).intValue() + Integer.parseInt(keyVal[1]));
}
But the above code will obviously throw the exception below if the key does not already exist in the map:
Exception in thread "main" java.lang.NullPointerException
What is the best way to get the totals in my usecase above?
You can just get the value for the given key and check whether it is null:
for(String dataValue : dataValues){
String[] keyVal = dataValue.split(":");
Integer i = totals.get(keyVal[0]);
if(i == null) {
totals.put(keyVal[0], Integer.parseInt(keyVal[1]));
} else {
totals.put(keyVal[0], i + Integer.parseInt(keyVal[1]));
}
}
What is the best way to get the totals in my usecase above?
With Java 8 you can use the merge function
for(String dataValue : dataValues){
String[] keyVal = dataValue.split(":");
totals.merge(keyVal[0], Integer.parseInt(keyVal[1]), Integer::sum);
}
What does this function do? Let's cite the doc:
If the specified key is not already associated with a value or is
associated with null, associates it with the given non-null value.
Otherwise, replaces the associated value with the results of the given
remapping function, or removes if the result is null
So, as you can see, if there is no value associated with the key, you simply map it to the int value of keyVal[1]. If there is already one, you provide a function to decide what to do with both values (the one that is already mapped and the one that you want to map).
In your case you want to sum them, so this function looks like (a, b) -> a + b, which can be replaced by the method reference Integer::sum, because it is a function that takes two ints and returns an int, so it is a valid candidate (and it has the semantics you need, of course).
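For example, an illustrative trace with two of the values from your data, assuming totals starts empty:
totals.merge("ONE", 9, Integer::sum);   // no mapping for "ONE" yet  -> {ONE=9}
totals.merge("ONE", 255, Integer::sum); // existing mapping: 9 + 255 -> {ONE=264}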
But wait, we can actually do better! This is where the Stream API and the Collectors class come in handy.
Get a Stream<String> from the file, split each line into an array, group each array by its first element (the key), map its second element (the values) to integer and sum them:
import static java.util.stream.Collectors.*;
...
Map<String, Integer> map = Files.lines(Paths.get("file"))
.map(s -> s.split(":"))
.collect(groupingBy(arr -> arr[0], summingInt(arr -> Integer.parseInt(arr[1]))));
and another way would be to use the toMap collector.
.collect(toMap(arr -> arr[0], arr -> Integer.parseInt(arr[1]), Integer::sum));
From the same Stream<String[]>, you collect the results into a Map<String, Integer> in which the key is arr[0] and the value is the int held by arr[1]. If two entries have the same key, you merge the values by summing them.
Both give the same result. I like the first one because the name of the collector makes the intent clear that you are grouping elements, but it's up to you to choose.
Maybe a bit difficult to understand at first, but it's very powerful once you grasp the concept of these (downstream) collectors.
Hope it helps! :)
Since Java 8, instead of map.get you can use map.getOrDefault, which returns a default value of your choice when the key is missing, like
totals.getOrDefault(keyVal[0], 0).intValue()
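Plugged into your loop, this could look like (just a sketch of one way to write it):
for (String dataValue : dataValues) {
    String[] keyVal = dataValue.split(":");
    totals.put(keyVal[0], totals.getOrDefault(keyVal[0], 0) + Integer.parseInt(keyVal[1]));
}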
Here is an elegant (edit: pre-Java 8) solution:
String str = keyVal[0];
int num = Integer.parseInt(keyVal[1]);
Integer storedVal = hashMap.get(str);
hashMap.put(str, storedVal == null ? num : storedVal + num);
Check to see that the key exists. If it does not, create it with your held int.
If the key does exist, retrieve the value and do math, storing the sum.
This works because if a key already exists, a 'put' will override the value.
I have several ArrayLists with no repeated elements. I want to find their intersection and return indices of common elements in each arraylist.
For example, if I have input as {0,1,2},{3,0,4},{5,6,0}, then I want to return {0},{1},{2} i.e. indices of common element 0 here.
One way I can think of is to use successive retainAll() calls on all the ArrayLists to get the intersection, and then find the indices of the intersection's elements using indexOf() for each input ArrayList.
Is there a better way to do that?
Sorting the list first would require at least O(n log n) time. If you are looking for a more efficient algorithm, you could get O(n) using hash maps.
For example with
A=[0,1,2],B=[3,0,4],C=[5,6,0]
You can loop through each list and, for each element, append its index to a hash keyed by the element. The final hash will look like
H = {0:[0,1,2], 1:[1], 2:[2], 3:[0], 4:[2], 5:[0], 6:[1]}
Here, the key is the element, and the value is the list of its indexes in the lists that contain it. Now, just loop through the hashmap and find any entries whose index list has size 3 (the number of lists, in this case) to get the indices.
The code would look something like this (untested):
int[][] lists = {{0,1,2}, {3,0,4}, {5,6,0}};
// Create the hashmap
Map<Integer, List<Integer>> H = new HashMap<Integer, List<Integer>>();
for(int i = 0; i < lists.length; i++){
for(int j = 0; j < lists[i].length; j++){
// create the list if this is the first occurrence
if(!H.containsKey(lists[i][j]))
H.put(lists[i][j], new ArrayList<Integer>());
// add the index to the list
H.get(lists[i][j]).add(j);
}
}
// Print out indexes for elements that are shared between all lists
for(Map.Entry<Integer, List<Integer>> e : H.entrySet()){
// check that the list of indexes matches the # of lists
if(e.getValue().size() == lists.length){
System.out.println(e.getKey() + ":" + e.getValue());
}
}
EDIT: Just noticed you suggested using retainAll() in your question. Note that retainAll() on ArrayLists performs a contains() scan per element, so it is only linear if the lookups are backed by hash-based sets.
Here is a very inefficient but fairly readable solution using streams that returns you a list of lists.
int[][] source = {{0, 1, 2}, {3, 0, 4}, {5, 6, 0}};
List<List<Integer>> indexes = Arrays.stream(source)
        .map(list -> IntStream.range(0, list.length)
                // keep index i only if list[i] occurs in every array
                .filter(i -> Arrays.stream(source)
                        .allMatch(l -> Arrays.stream(l).anyMatch(x -> x == list[i])))
                .boxed()
                .collect(Collectors.toList()))
        .collect(Collectors.toList());
You can add toArray calls to convert to arrays if required.
I'm trying to find the index position of the duplicates in an arraylist of strings. I'm having trouble figuring out a way to efficiently loop through the arraylist and report the index of the duplicate. My initial thought was to use Collections.binarySearch() to look for a duplicate, but I'm not sure how I would be able to compare the elements of the arraylist to each other with binarySearch. The only other thought I had would involve looping through the list, which is quite massive, too many times to even be feasible. I have limited java knowledge so any help is appreciated.
Not elegant, but should work:
Map<String, List<Integer>> indexList = new HashMap<String, List<Integer>>();
for (int i = 0; i < yourList.size(); i++) {
String currentString = yourList.get(i);
List<Integer> indexes = indexList.get(currentString);
if (indexes == null) {
indexList.put(currentString, indexes = new LinkedList<Integer>());
}
indexes.add(i);
if (indexes.size() > 1) {
// found duplicate, do what you like
}
}
// if you skip the last if in the for loop you can do this:
for (String string : indexList.keySet()) {
if (indexList.get(string).size() > 1) {
// String string has multiple occurences
// List of corresponding indexes:
List<Integer> indexes = indexList.get(string);
// do what you want
}
}
It sounds like you're out of luck.
You will have to inspect every element (i.e. iterate through the whole list). Think about it logically - if you could avoid this, it means that there's one element that you haven't inspected. But this element could be any value, and so could be a duplicate of another list element.
Binary searches are a smart way to reduce the number of elements checked when you are aware of some relationship that holds across the list - so that checking one element gives you information about the others. For instance, for a sorted list if the middle element is greater than 5, you know that every element after it is also greater than five.
However, I don't think there's a way to make such an inference when it comes to duplicate checking. You'd have to sort the list in terms of "number of elements that this duplicates" (which is begging the question), otherwise no tests you perform on element x will give you insight into whether y is a duplicate.
Now, this may not be a memory-efficient solution, but I guess this is what you were looking for. Maybe this program could be further improved.
import java.io.*;
import java.util.*;

class ArrayList2_CountingDuplicates
{
    public static void main(String[] args) throws IOException
    {
        ArrayList<String> als1 = new ArrayList<String>(); // working list of the input strings
        ArrayList<String> als2 = new ArrayList<String>(); // distinct strings, in order of processing
        int arr[];                                        // frequency of each distinct string
        int n, i, j, c = 0;
        String s;
        BufferedReader p = new BufferedReader(new InputStreamReader(System.in));
        n = Integer.parseInt(p.readLine());               // number of strings to read
        arr = new int[n];
        for (i = 0; i < n; i++)
            als1.add(p.readLine());
        for (i = 0; i < n; i++)
        {
            // take the next remaining string and move it to the result list
            s = als1.get(i);
            als1.remove(i);
            als2.add(s);
            arr[c] = 1;
            // remove and count every other occurrence of the same string
            while (als1.contains(s))
            {
                j = als1.indexOf(s);
                als1.remove(j);
                arr[c] = arr[c] + 1;
            }
            n = n - arr[c]; // the working list has shrunk by arr[c] elements
            c = c + 1;
            i = -1;         // restart the outer loop at the beginning of the shrunken list
        }
        for (i = 0; i < c; i++)
            System.out.println(als2.get(i) + " has frequency " + arr[i]);
    }
}
I was looking for such a method and eventually came up with my own solution, using a more functional approach to solve the problem.
public <T> Map<T, List<Integer>> findDuplicatesWithIndexes(List<T> elems) {
return IntStream.range(0, elems.size())
.boxed()
.collect(Collectors.groupingBy(elems::get))
.entrySet().stream()
.filter(e -> e.getValue().size() > 1)
.collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
}
It returns a map with the duplicated elements as keys and the list of all indexes of each repeated element as the corresponding value.
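For example, a hypothetical call:
List<String> words = List.of("just", "sdsd", "asb", "just", "asb");
Map<String, List<Integer>> duplicates = findDuplicatesWithIndexes(words);
// {asb=[2, 4], just=[0, 3]} (the iteration order of the resulting map is not guaranteed)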