I was trying this problem where we need to find all the permutations of the elements in an array.
This is LeetCode problem no. 46. The issue I'm facing is that I'm not able to output the answer; it just keeps returning blank ArrayLists:
Code:
public List<List<Integer>> permute(int[] nums)
{
List<List<Integer>> fans = new ArrayList<>();
HashMap<Integer, Integer> fmap = new HashMap<>();
for(int i: nums){
fmap.put(i, fmap.getOrDefault(i, 0) + 1);
}
int n=nums.length;
List<Integer> ans=new ArrayList<>(n);
dfs(1, n, fmap, ans, fans);
return fans;
}
public void dfs(int cs, int ts, HashMap<Integer, Integer> fmap,List<Integer> ans, List<List<Integer>> fans)
{
if (cs > ts)
{
fans.add(ans);
return;
}
for(Integer val: fmap.keySet())
{
if (fmap.get(val) > 0)
{
fmap.put(val, fmap.get(val) - 1);
ans.add(val);
dfs(cs + 1, ts, fmap, ans, fans);
ans.remove(ans.size() - 1);
fmap.put(val, fmap.get(val) + 1);
}
}
}
Output for the test case [0,1]:
[[],[]]
The expected output is:
[[0,1],[1,0]]
When I'm checking the "potential answer" inside the recursive method, I am able to see the correct answer. I mean, if I print the output in the dfs method, it shows the correct answer:
Change in the code:
if (cs > ts)
{
fans.add(ans);
System.out.println(fans);
return;
}
Now it's printing the value of fans:
[[0, 1]]
[[1, 0], [1, 0]]
But these values are not retained in fans, and the returned value comes up blank.
I read someone mention this same issue, but it was for Python, and the solution in that case was to do a deep copy of the list.
I'm not sure how to do that in Java.
What am I doing wrong?
In order to generate a list of permutations, you don't need a Map. You've only introduced redundant steps that don't help in any way. If in doubt, add a couple of print statements to visualize the map state: it will always contain the same keys, each with the value 1 (all numbers in the input are guaranteed to be unique), and it has no impact on the result.
Source of data for generating the Permutations
Besides the fact that the attempt to use the HashMap as the source of data for generating permutations isn't working because of the bugs, it's also not a good idea because the iteration order over the keySet of a HashMap is not guaranteed to be consistent.
As a means of storing the numbers that haven't yet been used in the current permutation, we could use an ArrayList. In this case, because there will be no duplicates in the input (see the quote from LeetCode below), we can use a LinkedHashSet instead to improve performance. As explained below, an element is removed before every recursive call, and removal from an ArrayList has a cost of O(n), while with a LinkedHashSet it is reduced to O(1).
Constraints:
1 <= nums.length <= 6
-10 <= nums[i] <= 10
All the integers of nums are unique.
Generating the Permutations
Each generated permutation should be contained in its own list. In your code, you've created one single list which is passed around during the recursive calls, and eventually every recursive branch adds that same list to the resulting list, which obviously should not happen.
You see, the result is printed as [[],[]]. It looks like a list containing two lists, but in fact both entries refer to the same empty list.
And this list is empty because every element that was added to it is removed again after the recursive call returns:
ans.add(val);
... <- recursive call in between
ans.remove(ans.size() - 1); // removes the last element
if I print the output in the dfs method, it shows the correct answer:
Actually, it's not correct. If you take a careful look at the results, you'll see the nested lists are the same: [[1, 0], [1, 0]].
The final result is blank because all the recursive calls happen between a value being added and then removed (see the code snippet above), i.e. the removals are performed in reverse order and are the last lines to execute, not the return statements. To understand this better, I suggest you walk through the code line by line and draw on paper all the changes made to the ans list for a simple input like [0, 1].
Instead, you should create a copy of the list containing not fully generated permutation (answer) and then add an element into the copy. So that the initial permutation (answer) remains unaffected and can be used as a template in all subsequent iterations.
List<Integer> updatedAnswer = new ArrayList<>(answer);
updatedAnswer.add(next);
And you also need to create a copy of the source of data and remove the element added to the newly created permutation (answer) in order to avoid repeating this element:
Set<Integer> updatedSource = new LinkedHashSet<>(source);
updatedSource.remove(next);
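To directly address the "deep copy" question from the post: in Java, a shallow copy of the list is enough here, and the minimal change to the original code would be to store a snapshot of ans at the base case instead of the shared reference. A sketch, keeping the rest of the original code as-is:
if (cs > ts)
{
    fans.add(new ArrayList<>(ans)); // copy the current permutation instead of adding the shared list
    return;
}
That alone fixes the blank output, although the concerns about the HashMap iteration order described above still apply.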
Sidenote: it's good practice to give meaningful names to methods and variables. For instance, the names cs and ts aren't informative (it's not clear what they are meant to store without looking at the code), and the method name dfs is confusing: DFS is a well-known algorithm for traversing tree or graph data structures, and it's not related to this problem.
Building a recursive solution
It makes sense to keep the recursive method void to avoid wrapping the result in an additional list that would be thrown away afterwards, although in general it's handier to return the result rather than accumulate it in a parameter. For performance reasons, I'll keep the method void.
Every recursive implementation should contain two parts:
Base case - represents a simple edge case (or a set of edge cases) for which the outcome is known in advance. For this problem, the base case is when the permutation being built has reached the size of the initial array, i.e. the source contains no more elements, so we only need to check whether it is empty. The parameters cs and ts that were used for this check in the solution from the question are redundant.
Recursive case - the part of the solution where the recursive calls are made and where the main logic resides. In the recursive case, we need to replicate the given answer and source as explained above and use the updated copies as the arguments for each recursive call.
That's how it might be implemented:
public static List<List<Integer>> permute(int[] nums) {
Set<Integer> source = new LinkedHashSet<>();
for (int next: nums) source.add(next);
List<List<Integer>> result = new ArrayList<>();
permute(source, new ArrayList<>(), result);
return result;
}
public static void permute(Set<Integer> source, List<Integer> answer,
List<List<Integer>> result) {
if (source.isEmpty()) {
result.add(answer);
return;
}
for (Integer next: source) {
List<Integer> updatedAnswer = new ArrayList<>(answer);
updatedAnswer.add(next);
Set<Integer> updatedSource = new LinkedHashSet<>(source);
updatedSource.remove(next);
permute(updatedSource, updatedAnswer, result);
}
}
main()
public static void main(String[] args) {
int[] source = {1, 2, 3};
List<List<Integer>> permutations = permute(source);
for (List<Integer> permutation: permutations) {
System.out.println(permutation);
}
}
Output:
[1, 2, 3]
[1, 3, 2]
[2, 1, 3]
[2, 3, 1]
[3, 1, 2]
[3, 2, 1]
A link to Online Demo
Related
I am trying to program a method that deletes the first, second and third element of every group of 4 elements.
It doesn't seem to work at all.
Could anyone please help?
public static void reduziereKommentare(List<String> zeilen) {
if (!zeilen.isEmpty()) {
if (zeilen.size() % 4 != 0) {
throw new RuntimeException("Illegal size " + zeilen.size() + " of list, must be divisible by 4.");
}
for (int i = 1; i <= zeilen.size() % 4; i++) {
zeilen.remove(i);
zeilen.remove(i + 1);
zeilen.remove(i + 2);
}
}
System.out.println(zeilen);
}
As said in the comments, removing an element impacts the indexing. Whenever I need to do something like this, I either use an Iterator or loop backwards:
for (int i = zeilen.size() - 4; i >= 0; i -= 4) {
zeilen.remove(i + 2);
zeilen.remove(i + 1);
zeilen.remove(i);
}
Note that I subtract 4 from i each iteration, so I go back a full block of four each time.
Also note that I remove the largest indexed elements first. If I use i, i + 1 and i + 2 inside the loop, I again run into the same issue. I could also have used i 3 times, but this makes it more clear.
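For reference, the Iterator-based variant mentioned above could be sketched like this (it walks forward and removes the first three lines of each block of four, using java.util.Iterator):
Iterator<String> it = zeilen.iterator();
int position = 0;
while (it.hasNext()) {
    it.next();
    position++;
    if (position % 4 != 0) { // positions 1, 2 and 3 of each block of four
        it.remove();
    }
}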
My take: it does not require the size precondition check, but you may still want to catch that case if it represents an error of broader scope than this method.
Given this test code...
// Test code
List<String> myList = new ArrayList<>();
for (int i = 0; i < 20; i++) {
myList.add(String.valueOf(i));
}
the 'zeilen' loop can be implemented as ...
// "before" diagnostics
System.out.println(zeilen);
// The 'zeilen' loop
for (int i = 0, limit = zeilen.size(); i < limit; i++) {
if ((i+1) % 4 > 0) zeilen.remove(i/4);
}
// "after" diagnostics
System.out.println(zeilen);
and produces
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
[3, 7, 11, 15, 19]
This works with a list of any length, leaving every '4th' element in the list.
A few more test cases:
Given                Results in
[]                   []
[0,1]                []
[0,1,2,3]            [3]
[0,1,2,3,4]          [3]
[0,1,2,3,4,5,6,7]    [3,7]
[0,1,2,3,4,5,6,7,8]  [3,7]
Would it not be easier to simply add every fourth item to a new list and return that? This would also eliminate any repetitive copying that could be involved when removing elements from a list. And the target list can be appropriately sized to start.
public static List<String> reduziereKommentare(List<String> zeilen) {
Objects.requireNonNull(zeilen);
List<String> zeilen1= new ArrayList<>(zeilen.size()/4);
for(int i = 3; i < zeilen.size(); i+=4) {
zeilen1.add(zeilen.get(i));
}
return zeilen1;
}
You could also use a stream.
List<String> source = zeilen; // effectively final reference for use inside the lambdas
zeilen = IntStream.iterate(3, i -> i < source.size(), i -> i + 4)
        .mapToObj(source::get)
        .toList();
Notes:
whether the list is empty or its size is not divisible by 4, this will work; it will just ignore the extra elements.
assigning the result back to the original variable means the old list can eventually be garbage collected.
I only check for a null argument since that would cause an exception. Of course, if alerting the user of the size is important just add the other check(s) back in.
Your code sample uses a data type of List - List<String> zeilen - but you separately wrote a comment which states that you're starting from an array:
"I used the Arrays.asList() function to add elements to the list"
The signature for asList() shows the input argument is an array, defined using varargs:
public static <T> List<T> asList(T... a)
Thus, you would start from something like this:
// rely on automatic array creation via varargs
List<String> list = Arrays.asList("one", "two", "three");
or from an explicit array, like this:
String[] strings = {"one", "two", "three"};
List<String> list = Arrays.asList(strings);
Here's a more complete picture of your current solution:
start with an array – String[] – creating it explicitly or relying on automatic array creation via varargs
create a List<String> from that array using Arrays.asList()
traverse the List skipping three items at a time, keeping only each fourth item (so: 4th, 8th, 12th, 16th, etc.)
Since the starting point is a String array, and knowing that you're
interested in keeping only every 4th element,
you could:
create a new, empty java.util.List<String>
iterate over each element of the array
for every 4th, 8th, etc element, add that to the final result list; ignore everything else
Here's the code to do that:
private static List<String> buildListOfEveryFourthElement(String[] array) {
List<String> everyFourthElement = new ArrayList<>();
if (array != null) {
// start from "1", a bit easier to reason about "every 4th element"?
int current = 1;
for (String s : array) {
if (current > 1 && current % 4 == 0) {
everyFourthElement.add(s);
}
current++;
}
}
return everyFourthElement;
}
I omitted the check for whether the input is exactly divisible by 4, but you could easily edit the first if statement
to include that: if (array != null && array.length % 4 == 0) { .. }
A benefit to this "build the List as you go" approach (vs. calling Arrays.asList() with a starting array)
is that the original input array would not be associated in any way with the result list.
So what? As you mentioned in one of your comments, you discovered that it's not permissible
to modify the list – calling .remove() will throw java.lang.UnsupportedOperationException.
Note this will also happen if you try to add() something to the list.
Why does it throw an exception?
Because asList() returns a java.util.List which is backed by the input array, meaning the list and array are
sort of tied together. If it allowed you to remove (or add) items from (or to) the
list then it would also have to automatically update the backing array, and they didn't implement it that way.
Here's a brief snip from asList() Javadoc:
Returns a fixed-size list backed by the specified array. (Changes to the returned list "write through" to the array.)
By creating a new List and populating it along the way, you are free to modify that list later in your code
by removing or adding elements, sorting the whole thing, etc. You would also be guarded against any changes to the array
showing up as (possibly surprising) changes in the list – because a list returned by asList() is
backed by the array, a change to an array element would be visible through the associated list.
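To make that concrete, here's a small standalone sketch of the write-through and fixed-size behaviour of Arrays.asList() (class and variable names are just for the example):
import java.util.Arrays;
import java.util.List;

public class AsListDemo {
    public static void main(String[] args) {
        String[] strings = {"one", "two", "three"};
        List<String> list = Arrays.asList(strings);

        strings[0] = "uno";
        System.out.println(list.get(0)); // prints "uno" - the list reads through to the array

        list.set(1, "dos");
        System.out.println(strings[1]);  // prints "dos" - writes to the list reach the array

        list.remove(0);                  // throws UnsupportedOperationException (fixed-size list)
    }
}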
PS: the post How is this HashSet producing sorted output? doesn't answer my question. I know that if I put arbitrary numbers into a HashSet, I will not get sorted order.
However, I found that if I put all of [1, 2, 3, ..., n] into a HashSet in any shuffled order and then iterate over the HashSet, I get a guaranteed sorted order. I cannot understand why this always happens. I've tested every n < 10000 many times, and it's always true, so it should not be a coincidence; there must be a reason. Even though I should not rely on these implementation details, please tell me why it always happens.
PS: I know that if I insert [0, 1, 2, ..., n-1], or [1+k, 2+k, ..., n+k] (k != 0), into a HashSet, the iteration order is unsorted; I've tested that, and it's normal for the iteration order of a HashSet to be unsorted. However, why does any insertion order of [1, 2, 3, 4, ..., n] accidentally always come out sorted? I've checked the implementation details. If I trace the process, it includes resizing of the bucket array and the transformation from linked list to red-black tree. If I insert all of [1..n] in shuffled order, the intermediate state of the HashSet is unsorted; however, it accidentally ends up in sorted order once all the insertions are complete.
I used the JDK 1.8 to do the following test.
public class Test {
public static void main(String[] args) throws IOException {
List<Integer> res = printUnsortedCase(10000);
System.out.println(res);
}
private static List<Integer> printUnsortedCase(int n){
List<Integer> res = new ArrayList<>();
for (int i = 2; i < n; i++) {
if (!checkSize(i)) {
res.add(i);
}
}
return res;
}
private static boolean checkSize(int n) {
List<Integer> list = new ArrayList<>();
for (int i = 0; i < n; i++) {
list.add(i);
}
// here I've shuffled the list of [1,2,3,4, ...n]
Collections.shuffle(list);
Set<Integer> set = new HashSet<>();
for (int i = 0; i < n; i++) {
set.add(list.get(i)); // I insert the set in an unsorted order of [1,2,3,..,n]
}
list = new ArrayList<>(set);// iterate over the HashSet and insert into ArrayList
return isSorted(list);
}
private static boolean isSorted(List<Integer> list) {
for (int i = 1; i < list.size(); i++) {
if (list.get(i - 1) > list.get(i)) return false;
}
return true;
}
}
I wrote the above checking code and the claim seems to hold.
You are conflating two related concepts:
guaranteed order: the specification says that you will get the elements back in a specific order and all implementations conforming to that spec will do so.
reproducible order: a specific implementation returns all the elements back in a specific order.
Guaranteed order necessarily implies reproducible order (otherwise you'd have a bug).
Reproducible order doesn't imply guaranteed order. It's possible that the reproducible order is just a side effect of implementation details that happen to align so that you get the elements in the same order under some circumstances, but this isn't guaranteed.
In this specific case several factors together result in a reproducible order:
Integer has a highly reproducible and predictable hashCode (it's just the number itself)
HashMap does some minor manipulation of that hash code to reduce the impact of collisions from simple hashCode implementations, which doesn't matter in this case (it just does hash ^ (hash >>> 16), which leaves numbers below 2^16 unchanged).
You use a very consistent and reproducible way to construct your HashMaps. The resulting hashmaps will always have gone through the same growing stages.
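Put together, a sketch of the index computation involved (this mirrors the OpenJDK 8 HashMap source, so treat it as illustrative rather than a guarantee):
// For Integer keys, hashCode() is the value itself.
static int spread(int h) {
    return h ^ (h >>> 16); // what HashMap.hash() does (ignoring the null check)
}

static int bucket(int hash, int tableLength) {
    return (tableLength - 1) & hash; // tableLength is always a power of two
}

// For values smaller than both 2^16 and the table length, spread(i) == i and
// bucket(spread(i), tableLength) == i, so iterating the buckets in index order
// happens to return the keys in ascending order.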
If instead of
list.add(i);
you did
list.add(i + 65000);
(i.e. use the number 65000 to 65000+n instead of 0 to n) then you'd see the non-sorted results emerge.
In fact, the "reproducible order" that you get is so fragile that just adding 10 to each element already causes some of the lists to be unsorted.
I am trying to add the squared elements back into the original ArrayList. For example, [1, 2, 3] should become [1, 1, 2, 4, 3, 9]. My issue is that I'm not sure if my machine is just bad, because I am getting an out-of-memory error. Here is my attempt. The recursive call is just meant to get the sum of the ArrayList.
public static int sumOfSquares(List<Integer> num) {
if (num.isEmpty()) {
return 0;
}
for(int i=0; i<num.size();i++){
int hold= num.get(i)*num.get(i);
num.add(hold);
}
return num.get(0) + sumOfSquares(num.subList(1, num.size()));
}
The problem with your implementation is that it does not distinguish original numbers from the squares that you have previously added.
First, since you are doing this recursively, you don't need a for loop. Each invocation only needs to take care of the first value of the list.
Next, add(n) adds the number at the end, while your example shows adding numbers immediately after the original value. Therefore, you should use num.add(1, hold), and skip two initial numbers when making a recursive call.
Here is how the fixed method should look:
public static int sumOfSquares(List<Integer> num) {
if (num.isEmpty()) {
return 0;
}
// Deal with only the initial element
int hold= num.get(0)*num.get(0);
// Insert at position 1, right after the squared number
num.add(1, hold);
// Truncate two initial numbers, the value and its square:
return num.get(1) + sumOfSquares(num.subList(2, num.size()));
}
Demo.
There are two ways to safely add (or remove) elements to a list while iterating it:
Iterate backwards over the list, so that the indexes of the upcoming elements don't shift.
Use an Iterator or ListIterator.
You can fix your code using either strategy, but I recommend a ListIterator for readable code.
import java.util.ListIterator;
public static void insertSquares(List<Integer> num) {
ListIterator<Integer> iter = num.listIterator();
while (iter.hasNext()) {
int value = iter.next();
iter.add(value * value);
}
}
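For comparison, the backward-iteration strategy from the list above could be sketched as follows (the method name is just illustrative; starting from the end means the indexes ahead of each insertion point are never disturbed):
public static void insertSquaresBackwards(List<Integer> num) {
    for (int i = num.size() - 1; i >= 0; i--) {
        num.add(i + 1, num.get(i) * num.get(i)); // insert the square right after its value
    }
}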
Then, move the summing code into a separate method so that the recursion doesn't interfere with the inserting of squares into the list. Your recursive solution will work, but an iterative solution would be more efficient for Java.
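A minimal sketch of that separate summing step (again, the method name is illustrative):
public static int sum(List<Integer> num) {
    int total = 0;
    for (int value : num) {
        total += value;
    }
    return total;
}
// usage: insertSquares(num); int total = sum(num);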
I have a map TreeMap<Integer, Set<Integer>> adjacencyLists and an integer set TreeSet<Integer> specialNodes.
The map represents adjacency lists of a graph.
I want to pick keys from adjacencyLists and find out whether they have a common adjacent node that is in specialNodes.
Is there a way to do this efficiently?
Example:
adjacencyLists is as follows:
[1, [2 3 4 5]]
[2, [1 5]]
[3, [1 4 5]]
[4, [1 3]]
[5, [1 2 3]]
and specialNodes is as follows:
[1 3 4 5]
In this example, 4 and 5 are present in the values of first and third entries of adjacencyLists.
Hence, writing a function findCommon(1,3) should give me [4 5]
Similarly, findCommon(1,5) should return null because 2 is the only common element and it is not in specialNodes.
Here's a step-by-step procedure (see the sketch below):
Get the two values (adjacency sets) from the keys. O(log n).
Sort them. O(n log n).
Find the common elements. O(m + n).
Search for the common elements in specialNodes. O(m + n).
Hence the worst-case time complexity is O(n log n).
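A sketch of that procedure, assuming the adjacencyLists and specialNodes fields from the question (it returns an empty set when there is no common special node):
static Set<Integer> findCommon(int a, int b) {
    List<Integer> first = new ArrayList<>(adjacencyLists.get(a));   // O(log n) lookups in the TreeMap
    List<Integer> second = new ArrayList<>(adjacencyLists.get(b));
    Collections.sort(first);   // O(n log n)
    Collections.sort(second);  // O(m log m)
    Set<Integer> result = new TreeSet<>();
    int i = 0, j = 0;
    while (i < first.size() && j < second.size()) { // two-pointer intersection, O(m + n)
        int x = first.get(i), y = second.get(j);
        if (x == y) {
            if (specialNodes.contains(x)) { // keep only the "special" common adjacents
                result.add(x);
            }
            i++;
            j++;
        } else if (x < y) {
            i++;
        } else {
            j++;
        }
    }
    return result;
}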
So if I'm understanding correctly, you already have a set for each Integer listing its adjacent nodes?
The easiest efficient way I can see to do this is to make use of another Set. Sets are very fast for checking if they already contain values.
Set<Integer> adjacent = new HashSet<>();
for (Integer i: toCheck) {
int oldCount = adjacent.size();
Set<Integer> check = adjacencyLists.get(i);
adjacent.addAll(check);
if (adjacent.size() != oldCount+check.size()) {
// Duplicate found
return true;
}
}
return false;
If you need to know the identity of the common element, then loop through doing individual add calls instead of addAll, and check each add for success. This may actually be more efficient, since there is no need to do the size checks:
Set<Integer> adjacent = new HashSet<>();
for (Integer i: toCheck) {
Set<Integer> check = adjacencyLists.get(i);
for (Integer c: check)
if (!adjacent.add(c)) {
// Duplicate found
return c;
}
}
return null;
Just saw the request for the full list of common members:
Set<Integer> adjacent = new HashSet<>();
Set<Integer> results = new HashSet<>();
for (Integer i: toCheck) {
Set<Integer> check = adjacencyLists.get(i);
for (Integer c: check)
if (!adjacent.add(c)) {
// Duplicate found
results.add(c);
}
}
return results;
Not 100% sure what you mean, but here is my idea. One possible way is to use a search algorithm like BFS. Since all those four nodes must have one common node, you can use one of your four nodes as the root and search for each of the other three nodes. If the search for all three is successful, they must have one common node.
The obvious solution would be to make a copy of specialNodes, then call its retainAll method with each of the sets you are considering; the resulting set contains the common nodes. Did you try it? Is it not efficient enough?
Code:
Set<Integer> findCommons(int a, int b) {
    Set<Integer> commonNodes = new HashSet<>(specialNodes);
    commonNodes.retainAll(adjacencyLists.get(a));
    commonNodes.retainAll(adjacencyLists.get(b));
    return commonNodes;
}
Let's say I have a list (e.g. LinkedList<SomeObject>) that contains elements ordered by a certain attribute (e.g. SomeObject.someValue()). This attribute can and usually does repeat often (it isn't unique), but it is never null.
Is there a convenient way to divide this into multiple lists, each list containing only equal elements, in their original order? Also, can this be done with only one iteration of the list? For example, the original list:
1, 1, 1, 2, 2, 3, 3, 3
The desired lists from this:
1, 1, 1
2, 2
3, 3, 3
Not too convenient, but:
Start a loop. Store the previous item, and compare it to the current one.
If the previous is different from the current (using equals(..), and being careful with null), then create a new List, or use list.subList(groupStart, currentIdx).
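A minimal sketch of that idea, assuming SomeObject.someValue() as in the question (single pass, with a new list started whenever the attribute changes):
public static List<List<SomeObject>> groupConsecutive(List<SomeObject> list) {
    List<List<SomeObject>> groups = new ArrayList<>();
    List<SomeObject> current = null;
    SomeObject previous = null;
    for (SomeObject item : list) {
        // start a new group at the first element or whenever the attribute changes
        if (previous == null || !previous.someValue().equals(item.someValue())) {
            current = new ArrayList<>();
            groups.add(current);
        }
        current.add(item);
        previous = item;
    }
    return groups;
}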
You could use Apache CollectionUtils to do this, where "list" is the original list, and "value" is the current value of the objects you want to extract a sublist for:
Collection<SomeObject> selectedObjects = CollectionUtils
.select(list,
new Predicate() {
public boolean evaluate(Object input) {
return ((SomeObject) input).someValue().equals(value);
}
});
This approach means using a well-known and well-tested library (which is always a good thing), but the downside is that you will loop through the list once for each sublist you need.
Pretty sure there isn't a java API method for this. However you can write:
// This assumes your list is sorted according to someValue()
// SomeValueType is the type of SomeObject.someValue()
public Map<SomeValueType, List<SomeObject>> partition(List<SomeObject> list) {
    SomeValueType currValue = null;
    Map<SomeValueType, List<SomeObject>> result = new HashMap<SomeValueType, List<SomeObject>>();
    List<SomeObject> currList = null;
    for (SomeObject obj : list) {
        // start a new sublist whenever the attribute changes
        if (!obj.someValue().equals(currValue)) {
            currValue = obj.someValue();
            currList = new LinkedList<SomeObject>();
            result.put(currValue, currList);
        }
        currList.add(obj);
    }
    return result;
}
This will return you a HashMap of sublists, where the key is the someValue and the value is the partitioned sublist associated with it. Note, I didn't test this, so don't just copy the code.
EDIT: made this return hashmap instead of arraylist.
If you use the Google Guava libraries:
import com.google.common.collect.HashMultiset;
import com.google.common.collect.Lists;
public class Example {
public static void main(String[] args) {
HashMultiset<Integer> ints = HashMultiset.create();
ints.addAll(Lists.newArrayList(1, 1, 1, 2, 2, 3, 3, 3));
System.out.println(ints);
}
}
Output:
[1 x 3, 2 x 2, 3 x 3]
If you need to count how many occurrences of x you have, use ints.count(x); if your elements are just values, the counts may be all you need.
With Guava, use Multimaps.index(Iterable<V>, Function<? super V, K>).
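For illustration, an index() call could look roughly like this (SomeObject, someValue() and SomeValueType as used elsewhere in this thread; the multimap groups elements by key while preserving their encounter order):
import com.google.common.base.Function;
import com.google.common.collect.ImmutableListMultimap;
import com.google.common.collect.Multimaps;

ImmutableListMultimap<SomeValueType, SomeObject> grouped =
        Multimaps.index(list, new Function<SomeObject, SomeValueType>() {
            @Override
            public SomeValueType apply(SomeObject input) {
                return input.someValue();
            }
        });
// grouped.get(value) is the sub-list of elements whose someValue() equals value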
This should work (untested, but I am pretty sure everything is OK; it also assumes that the contents of the list are sortable):
public static List[] getEquivalentSubLists( List parent )
{
    List cloneList = new ArrayList(parent);
    Collections.sort(cloneList);
    ArrayList<List> returnLists = new ArrayList<>();
    int end;
    while (cloneList.size() > 0)
    {
        // index one past the last element equal to the first element
        end = cloneList.lastIndexOf(cloneList.get(0)) + 1;
        // copy the run of equal elements into its own list
        returnLists.add(new ArrayList(cloneList.subList(0, end)));
        // drop that run from the working copy
        cloneList.subList(0, end).clear();
    }
    return returnLists.toArray(new List[0]);
}