I have an array A of n integers. I also have an array B of k (k < n) integers. What I need is for every integer in array A that also appears in array B to be increased by 3.
If I go with the most obvious approach, I end up with O(n*k) complexity.
Array A cannot (must not) be sorted.
Is there a more efficient way of achieving this?
Yes: put the elements of B into a HashSet. Loop over A and, if the element you're on is contained in the set, increase it by 3. This will have O(n + k) complexity.
For instance:
Set<Integer> bSet = new HashSet<>(B.length);
for (int b : B) // O(k)
    bSet.add(b);
for (int i = 0; i < A.length; i++) { // O(n)
    if (bSet.contains(A[i]))
        A[i] += 3;
}
If your integers are in a range small enough that you can create an array whose length covers the greatest possible value (for instance 0 <= A[i] and B[i] <= 65535), then you can do this:
boolean[] contains = new boolean[65536]; // one flag per possible value 0..65535
for (int i = 0; i < k; i++) {
    contains[B[i]] = true;
}
for (int i = 0; i < n; i++) {
    if (contains[A[i]]) {
        A[i] += 3;
    }
}
This is also O(n + k).
If array B can be sorted, then the solution is obvious: sort it, and each "contains" check becomes a binary search costing O(log2(k)), so your overall complexity will be O(n*log2(k)).
If you cannot sort array B, then the only thing left is the straightforward O(n*k) approach.
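For the sorted-B case, a minimal sketch (it assumes A and B are int[] arrays as in the question, and sorts B in place, which this answer assumes is allowed; the method name is mine):

import java.util.Arrays;

static void addThreeWhereInB(int[] A, int[] B) {
    Arrays.sort(B);                                  // O(k log k)
    for (int i = 0; i < A.length; i++) {             // n iterations
        if (Arrays.binarySearch(B, A[i]) >= 0) {     // O(log k) lookup
            A[i] += 3;
        }
    }
}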
UPDATE
I really forgot about bitmasks: if you know that you only have 32-bit integers and have enough memory, you can store a huge bitmask array where "add" and "contains" are always O(1). Of course, this is only needed for very special performance optimizations.
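A rough sketch of that bitmask idea (the method name is mine; one bit per possible 32-bit value costs about 512 MB of memory, which is exactly the "enough memory" caveat):

static void addThreeUsingBitmask(int[] A, int[] B) {
    long[] present = new long[1 << 26];              // 2^32 bits packed into 2^26 longs (~512 MB)
    for (int b : B) {
        long idx = (long) b - Integer.MIN_VALUE;     // shift the value into [0, 2^32)
        present[(int) (idx >>> 6)] |= 1L << (idx & 63);
    }
    for (int i = 0; i < A.length; i++) {
        long idx = (long) A[i] - Integer.MIN_VALUE;
        if ((present[(int) (idx >>> 6)] & (1L << (idx & 63))) != 0) {
            A[i] += 3;
        }
    }
}

Both the "add" and the "contains" are just a couple of bit operations, so O(1) each.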
Related
I have a quick question about complexity. I have this code in Java:
pairs is a HashMap that contains an Integer as a key and its frequency in a Collection<Integer> as a value. So:
pairs = new HashMap<Integer, Integer>() // key: number, value: numberFrequency
Then I want to find the matching pairs (a, b) that satisfy a + b == targetSum.
for (int i = 0; i < pairs.getCapacity(); i++) { // Complexity: O(n)
    if (pairs.containsKey(targetSum - i) && targetSum - i == i) {
        for (int j = 1; j < pairs.get(targetSum - i); j++) {
            collection.add(new MatchingPair(targetSum - i, i));
        }
    }
}
I know that the complexity of the first for loop is O(n), but the second for loop only runs a small number of times (the frequency of the number minus 1). Do we still count it as O(n), making this whole portion of code O(n^2)? If so, does anyone have an alternative to make it just O(n)?
It's O(n) if 'pairs.getCapacity()' or 'pairs.get(targetSum - i)' is a constant you know beforehand. Otherwise, two loops, one nested in the other, is generally O(n^2).
You can consider that in the worst case your complexity is O(n^2).
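As for an O(n) alternative, here is a sketch that iterates over the map's entries instead of counting i up to the capacity, so the work is proportional to the number of distinct keys. It reuses pairs, targetSum, collection, and MatchingPair from the question (and needs java.util.Map imported); whether you add a pair once or once per occurrence depends on what the inner loop above was meant to do.

for (Map.Entry<Integer, Integer> e : pairs.entrySet()) {   // O(n) over distinct keys
    int a = e.getKey();
    int b = targetSum - a;
    if (a < b && pairs.containsKey(b)) {
        collection.add(new MatchingPair(a, b));            // each matching key pair emitted once
    } else if (a == b && e.getValue() >= 2) {              // need two occurrences of the same number
        collection.add(new MatchingPair(a, b));
    }
}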
Given an array, I want to figure out which element x has the most numbers to its left that are greater than x. For example, in the array [3, 3, 1, 8, 2, 9], the element 2 has 3 numbers to its left that are greater than itself.
The answer to this question should be the count of bigger numbers to the left of that element. Here's my obvious brute-force solution:
int biggest = 0;
for (int i = 0; i < n; i++) {
    int num = 0;
    for (int j = 0; j < i; j++)
        if (a[j] > a[i])
            num++;
    biggest = Math.max(biggest, num);
}
However, this runs in O(n^2) time which is undesirable. How can I solve this task in a quicker way?
Something like this would work if there were an efficient implementation of size() for a tailSet of a TreeSet. But AFAIK it's O(n) rather than the O(log(n)) it could be.
int biggest = 0;
TreeSet<Integer> set = new TreeSet<>();
for (int x : a) {
    int num = set.tailSet(x, false).size(); // elements strictly greater than x (note: a TreeSet drops duplicates)
    biggest = Math.max(biggest, num);
    set.add(x);
}
So this is just the idea. It would work if you implemented your own TreeSet, where each node would remember the size of its right subtree. An insertion would still be O(log(n)), the size computation would also be O(log(n)), and the whole loop would then be O(n * log(n)).
This is surely doable, just quite some work.
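If implementing an order-statistic tree by hand sounds like too much work, here is a sketch of an alternative with the same O(n * log(n)) bound that uses a Fenwick (binary indexed) tree over value ranks instead; the method and helper names are mine. Because the tree stores occurrence counts per rank rather than set membership, it also handles duplicates correctly.

import java.util.Arrays;

static int mostGreaterToTheLeft(int[] a) {
    int n = a.length;
    int[] sorted = a.clone();
    Arrays.sort(sorted);                          // ranks via coordinate compression
    int[] tree = new int[n + 1];                  // Fenwick tree of counts per rank (1-based)

    int biggest = 0;
    for (int x : a) {
        int rank = upperBound(sorted, x);         // number of array values <= x
        int seenLessOrEqual = prefixSum(tree, rank);
        int seenTotal = prefixSum(tree, n);
        biggest = Math.max(biggest, seenTotal - seenLessOrEqual); // seen elements > x
        for (int i = rank; i <= n; i += i & -i)   // record this occurrence at its rank
            tree[i]++;
    }
    return biggest;
}

static int prefixSum(int[] tree, int i) {         // sum of counts for ranks 1..i
    int s = 0;
    for (; i > 0; i -= i & -i) s += tree[i];
    return s;
}

static int upperBound(int[] sorted, int x) {      // index of the first element > x
    int lo = 0, hi = sorted.length;
    while (lo < hi) {
        int mid = (lo + hi) >>> 1;
        if (sorted[mid] <= x) lo = mid + 1; else hi = mid;
    }
    return lo;
}

For the example [3, 3, 1, 8, 2, 9] this returns 3, matching the expected answer for the element 2.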
The problem I am facing is this one:
I have an array of doubles from which I want to keep the top k greater values.
I have seen some implementations involving Arrays.sort. For example, this approach is suggested in an example dealing with a related issue.
Since I am only interested in the top k elements, I have also experimented with Guava's MinMaxPriorityQueue. I created a MinMaxPriorityQueue with a maximumSize:
Builder<Comparable> builder = MinMaxPriorityQueue.maximumSize(maximumSize);
MinMaxPriorityQueue<Double> top2 = builder.create();
Of course there is again autoboxing. The problem is that the ordering is ascending, which is the opposite of the one I want, so I cannot use it this way.
To state the problem's real parameters: my array is about 50 elements long and I am interested in up to the top k = 5 elements.
So is there any way to get around this problem using the second approach? Should I stay with the first one even though I don't really need all elements sorted? Do you know if there is any significant difference in speed (I will have to use this in a lot of situations, so that's where the speed is needed)? Is there any other solution I could use?
As for the performance, I know I can theoretically check it myself, but I am a bit out of time, so if someone has a solution I am happy to hear it (or read it, anyway).
If you only have like 50 elements, as noted in my comment, just sort it and take the last k elements. It's 2 lines only:
public static double[] largests(double[] arr, int k) {
    Arrays.sort(arr);
    return Arrays.copyOfRange(arr, arr.length - k, arr.length);
}
This modifies (sorts) the original array. If you want your original array unmodified, you only need +1 line:
public static double[] largests2(double[] arr, int k) {
    arr = Arrays.copyOf(arr, arr.length);
    Arrays.sort(arr);
    return Arrays.copyOfRange(arr, arr.length - k, arr.length);
}
You can copy the largest elements out of a sorted copy of the array:
double[] getMaxElements(double[] input, int k) {
    double[] temp = Arrays.copyOf(input, input.length);
    Arrays.sort(temp); // Sort a copy to keep input as it is, since Arrays.sort works in place.
    return Arrays.copyOfRange(temp, temp.length - k, temp.length); // Fetch the largest elements
}
For 50 elements, it is much faster to sort an array than to mess with generics and comparables.
I will write up an additional "fast" algorithm...
double[] getMaxElements2(double[] input, int k) {
    double[] res = new double[k];
    for (int i = 0; i < k; i++) res[i] = Double.NEGATIVE_INFINITY; // Make them as small as possible.
    for (int j = 0; j < input.length; j++)  // Look at every element
        if (res[0] < input[j]) {            // Keep the current element
            res[0] = input[j];
            Arrays.sort(res);               // Keep the lowest kept element at res[0]
        }
    return res;
}
This is O(N*k*log(k)) while the first one is O(N*log(N)).
I have an array int[] a = {5, 3, 1, 2} and I want to write a method that picks out the k smallest numbers and returns an array with those k integers in ascending order. But when I run this code I get the output [1, 3].
I know the code skips some numbers somehow, but I can't twist my brain into fixing it.
Any ideas?
EDIT: Without sorting the original array.
public static int[] nrSmallest(int[] a, int k) {
    if (k < 1 || k > a.length)
        throw new IllegalArgumentException("must be at least 1");
    int[] values = Arrays.copyOf(a, k);
    Arrays.sort(values);
    int counter = 0;
    for (int i = k; i < a.length; i++) {
        if (a[i] < values[counter]) {
            for (int j = k - 1; j > counter; j--) {
                values[j] = values[j - 1];
            }
            values[counter] = a[i];
        }
        if (counter < k) counter++;
    }
    return values;
}
EDIT: Joop Eggen solved this for me. Scroll down to see answer. Thanks!
As already pointed out in the comments, simply return a part of the sorted array.
public static int[] nrSmallest(int[] a, int k) {
    // check parameters..
    // copy all so we don't sort a
    int[] sorted = Arrays.copyOf(a, a.length);
    Arrays.sort(sorted);
    return Arrays.copyOf(sorted, Math.min(k, sorted.length));
}
If you can't modify the original array, this is typically done with some type of priority queue, often a binary heap.
The method that you use in your example is O(n^2), and uses O(k) extra space. Sorting the original array and selecting the top k items is O(n log n). If you copy the array and then sort it, it uses O(n) extra space.
Using a heap is O(n log k), and requires O(k) extra space.
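A minimal sketch of that heap approach, using java.util.PriorityQueue as a max-heap of the k smallest values seen so far (the method name is mine; it assumes k >= 1, as the question's method enforces):

import java.util.Collections;
import java.util.PriorityQueue;

public static int[] nrSmallestHeap(int[] a, int k) {
    PriorityQueue<Integer> heap = new PriorityQueue<>(k, Collections.reverseOrder());
    for (int x : a) {
        if (heap.size() < k) {
            heap.offer(x);
        } else if (x < heap.peek()) { // smaller than the largest of the current k smallest
            heap.poll();              // drop that largest ...
            heap.offer(x);            // ... and keep x instead
        }
    }
    int[] result = new int[heap.size()];
    for (int i = result.length - 1; i >= 0; i--) {
        result[i] = heap.poll();      // poll() returns the largest first, so fill from the back
    }
    return result;                    // ascending order; the original array is untouched
}

Each element costs at most O(log k), giving the O(n log k) total mentioned above.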
There is an O(n) solution that involves manipulating the original array (or making a copy of the array and manipulating it). See Quickselect.
My own testing shows that Quickselect is faster in the general case, but Heap select is faster when the number of items to be selected (k) is less than 1% of the total items (n). See my blog post, When theory meets practice. That comes in quite handy when selecting, say, the top 100 items from a list of two million.
(Corrected) To keep your code:
for (int i = k; i < a.length; i++) {
    if (a[i] < values[k - 1]) { // Smaller than the largest kept value
        // Insert sorted
        for (int j = k - 1; j >= 0; j--) {
            if (j == 0 || a[i] > values[j - 1]) { // Insert pos
                // Move greater ones up.
                for (int m = k - 1; m > j; m--) {
                    values[m] = values[m - 1];
                }
                values[j] = a[i]; // Store
                break; // Done
            }
        }
    }
}
The line int[] values = Arrays.copyOf(a, k); is wrong: you are copying only k elements, but you are supposed to copy all the elements and then sort the array.
First sort the array and then return the sorted part of the array up to k.
public static int[] nrSmallest(int[] a, int k) {
    if (k < 1 || k > a.length)
        throw new IllegalArgumentException("must be at least 1");
    Arrays.sort(a);
    return Arrays.copyOf(a, k);
}
You could use the "pivoting" idea of quicksort.
The pivot's final position denotes the "rank" of that number in the array, so your end goal is to have a pivot land at index k, which leaves a subarray of elements less than the kth element to its left, in other words the first k smallest numbers (not exactly sorted).
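A rough sketch of that pivoting idea (quickselect), working on a copy so the original array stays untouched; the method names are mine, and it assumes 1 <= k <= a.length as the question's method enforces. It runs in O(n) on average, though an unlucky sequence of pivots can degrade it to O(n^2).

import java.util.Arrays;
import java.util.concurrent.ThreadLocalRandom;

public static int[] nrSmallestQuickselect(int[] a, int k) {
    int[] copy = Arrays.copyOf(a, a.length);  // keep the original array untouched
    int lo = 0, hi = copy.length - 1;
    while (lo < hi) {
        int p = partition(copy, lo, hi);
        if (p == k) break;                    // indices 0..k-1 now hold the k smallest values
        if (p < k) lo = p + 1; else hi = p - 1;
    }
    int[] result = Arrays.copyOf(copy, k);
    Arrays.sort(result);                      // only needed because the question asks for ascending order
    return result;
}

// Lomuto partition with a random pivot; returns the pivot's final index.
static int partition(int[] a, int lo, int hi) {
    int r = ThreadLocalRandom.current().nextInt(lo, hi + 1);
    int tmp = a[r]; a[r] = a[hi]; a[hi] = tmp; // move the pivot to the end
    int pivot = a[hi], store = lo;
    for (int i = lo; i < hi; i++) {
        if (a[i] < pivot) {
            tmp = a[i]; a[i] = a[store]; a[store] = tmp;
            store++;
        }
    }
    tmp = a[store]; a[store] = a[hi]; a[hi] = tmp;
    return store;
}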
I asked this question before, but my post was cluttered with a whole bunch of other code and wasn't clearly presented, so I'm going to try again. Sorry, I'm new here
Shell sort, as I wrote it, only works sometimes. Array a is an array of 100 unsorted integers; inc is an array of 4 integers whose values are the intervals that shell sort should use (they descend, and the final value is always 1); count is an array which stores the counts for different runs of shell sort; and cnt is the index of the count value that should be updated for this run of shell sort.
When I run shell sort multiple times with different sets of 4 intervals, the sort only sometimes works fully. Half the time the array is fully sorted; the other half of the time the array is only partially sorted.
Can anyone help? Thanks in advance!
public static void shellSort(int[] a, int[] inc, int[] count, int cnt) {
    for (int k = 0; k < inc.length; k++) {
        for (int i = inc[k], j; i < a.length; i += inc[k]) {
            int tmp = a[i];
            count[cnt] += 1;
            for (j = i - inc[k]; j >= 0; j -= inc[k]) {
                if (a[j] <= tmp)
                    break;
                a[j + inc[k]] = a[j];
                count[cnt] += 1;
            }
            a[j + inc[k]] = tmp;
            count[cnt] += 1;
        }
    }
}
One problem is that you're only sorting one inc[k]-step sequence for each k, while you should sort them all (you're only sorting {a[0], a[s], a[2*s], ... , a[m*s]}, leaving out {a[1], a[s+1], ... , a[m*s+1]} etc.). However, that should only influence performance (number of operations), not the outcome, since the last pass is a classical insertion sort (inc[inc.length-1] == 1), so that should sort the array no matter what happened before.
I don't see anything in the code that would cause failure. Maybe the inc array doesn't contain what it should? If you print out inc[k] in each iteration of the outer loop, do you get the expected output?
There is an error in your i loop control:
for (int i = inc[k], j; i < a.length; i += inc[k]) {
Should be:
for (int i = inc[k], j; i < a.length; i++) {
The inner j loop handles the comparison of elements that are inc[k] apart. The outer i loop should simply increment by 1, the same as the outer loop of a standard Insertion sort.
In fact, the final pass of Shellsort with an increment of 1 is identical to a standard Insertion sort.
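Applying just that one change to the method as posted, the corrected version would look like this:

public static void shellSort(int[] a, int[] inc, int[] count, int cnt) {
    for (int k = 0; k < inc.length; k++) {
        // advance i by 1 (not by inc[k]) so every inc[k]-spaced subsequence gets sorted
        for (int i = inc[k], j; i < a.length; i++) {
            int tmp = a[i];
            count[cnt] += 1;
            for (j = i - inc[k]; j >= 0; j -= inc[k]) {
                if (a[j] <= tmp)
                    break;
                a[j + inc[k]] = a[j];
                count[cnt] += 1;
            }
            a[j + inc[k]] = tmp;
            count[cnt] += 1;
        }
    }
}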