Is it possible to go through a boolean array to find a false value in O(log n) running time? The array's indices run from 0 to n-1.
If it is, how would we do it in Java? Pseudocode is fine.
In general, the answer is "NO": you cannot go through an array in search of a single value in less than O(N), unless you know something about the order of array items.
For example, if the array is sorted, you can find the right spot in O(log N).
For a boolean array, being sorted means having all false values, if any, at the beginning, and all true values, if any, at the end. If this is the case, you can use binary search to find the "demarcation point" in logarithmic time.
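A minimal sketch of that binary search in Java (the method name lastFalse is mine; it assumes the array is sorted with all false values before all true values, and returns the index of the last false, or -1 if there is none):

    // Binary search for the "demarcation point" in a sorted boolean array.
    // Returns the index of the last false value, or -1 if the array has no false.
    static int lastFalse(boolean[] a) {
        int lo = 0, hi = a.length - 1, result = -1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            if (!a[mid]) {
                result = mid;      // a[mid] is false: remember it and search further right
                lo = mid + 1;
            } else {
                hi = mid - 1;      // a[mid] is true: any false must be to the left
            }
        }
        return result;
    }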
No. This is not possible without any further information about the array.
The best you can do is O(n), which is traversing the array from left to right and checking each item.
If the array is sorted you would need at most two checks, since a false value could only be at one end or the other, which is O(1).
A common proof by contradiction:
Assume an algorithm works correctly after inspecting fewer than all of the elements. Now suppose the algorithm has seen only trues and yields an answer x. By changing only values the algorithm has not inspected, I can always construct a test case on which the algorithm fails. Therefore, the algorithm must inspect all elements of an (unsorted) boolean array to determine whether all of them are true.
Whatever you do, you will have to check all the values to ensure there is no false value. So linear time, with n/2 checks on average.
This is schoolwork. I am not looking for code help, but since my teacher isn't helping, I came here.
I am asked to merge and sort two sorted arrays, covering two cases:
When sizes of the two arrays are equal
When sizes of the two arrays are different
Now I have solved case 2, which also handles case 1 :/ I just don't get how I could write code for case 1, or how it could differ from case 2. Either the array lengths don't affect the problem, or I am not understanding correctly.
Then I am asked to compute the Big-O complexity.
I am not looking for code here. If anyone by any chance understands what my teacher is asking really please give me hints to solve it.
It is very good to learn instead of copying.
As you suggest, there is no difference between case 1 and case 2, but the worst case depends on your solution. So I will describe my solution (without code) and give you its worst case.
In both cases, append an "infinity" sentinel to the end of each array. Then iterate over the elements of both arrays; at each step, pick the smaller of the two current elements and put it into your result array (the merge of the two arrays).
With this solution, you can calculate the worst case easily: we iterate over both arrays once, and we added one infinity to each of them, so if their lengths are n and m, the worst and best case is O(m + n) (you do m + n + 2 - 1 comparisons; the -1 is because you never compare the two infinities against each other).
But why add infinity at the end of each array? Because that requires making a copy of each array with one extra slot? That is one way, and the worst case of copying the arrays is also O(m + n). There is another solution: compare until you reach the end of one array, then append the remaining, not yet compared, elements of the other array to the end of your result array. With the infinity sentinels this happens automatically.
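A minimal Java sketch of this sentinel-based merge (the method name is mine; it assumes int arrays and uses Integer.MAX_VALUE as the "infinity", so the input values must be smaller than that):

    // Merge two sorted int arrays using Integer.MAX_VALUE as the infinity sentinel.
    static int[] mergeWithSentinels(int[] a, int[] b) {
        int[] ea = java.util.Arrays.copyOf(a, a.length + 1);
        int[] eb = java.util.Arrays.copyOf(b, b.length + 1);
        ea[a.length] = Integer.MAX_VALUE;       // sentinel at the end of each copy
        eb[b.length] = Integer.MAX_VALUE;

        int[] result = new int[a.length + b.length];
        int i = 0, j = 0;
        for (int k = 0; k < result.length; k++) {
            if (ea[i] <= eb[j]) {               // always pick the smaller current element
                result[k] = ea[i++];
            } else {
                result[k] = eb[j++];
            }
        }
        return result;
    }

Note that the sentinels mean you never have to check whether one array has run out: once it has, its current element is "infinity" and the other array always wins the comparison.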
I hope this helped. If something is wrong, please comment.
Merging two sorted arrays is a linear complexity operation. In terms of Big-O notation it is O(m+n), where m and n are the lengths of the two sorted arrays.
So when you say the array length doesn't connect with the problem, your understanding is correct. Irrespective of the lengths of the two sorted arrays, merging them involves repeatedly comparing the current element of each array, copying the appropriate one to the new array (depending on whether you want the merged array in ascending or descending order), and incrementing the index of the array from which you copied.
Another way to approach this question is to look at each array as having a head and a tail, and solving the problem recursively. This way, we can use a base case, two arrays of size 1, to sort through the entirety of the two arrays m and n. Since both arrays are already sorted, simply compare the two heads of each array and add the element that comes first to your newly-created merged array, and move to the next element in that array. Your function will call itself again after adding the element. This will keep happening until one of the two arrays is empty. Now, you can simply add what is left of the nonempty array to the end of your merged array, and you are done.
I'm not sure if your professor will allow you to use recursive calls, but this method could make the coding much easier. Runtime would still be O(m+n), as you are basically iterating through both arrays once.
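A minimal recursive sketch of that head/tail idea in Java (the method name and index parameters are mine; the indices track the current head of each array):

    // Recursively merge a[ai..] and b[bi..] into result, starting at position ri.
    static void mergeRec(int[] a, int ai, int[] b, int bi, int[] result, int ri) {
        if (ai == a.length) {                         // a is empty: append the rest of b
            System.arraycopy(b, bi, result, ri, b.length - bi);
        } else if (bi == b.length) {                  // b is empty: append the rest of a
            System.arraycopy(a, ai, result, ri, a.length - ai);
        } else if (a[ai] <= b[bi]) {                  // the smaller head goes into the result first
            result[ri] = a[ai];
            mergeRec(a, ai + 1, b, bi, result, ri + 1);
        } else {
            result[ri] = b[bi];
            mergeRec(a, ai, b, bi + 1, result, ri + 1);
        }
    }

It would be called as mergeRec(a, 0, b, 0, new int[a.length + b.length], 0); the runtime is still O(m+n).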
Hope this helps.
I was asked this question in a recent interview.
You are given an array that has a million elements. All the elements are duplicates except one. My task is to find the unique element.
var arr = [3, 4, 3, 2, 2, 6, 7, 2, 3........]
My approach was to go through the entire array in a for loop, and then create a map with the number from the array as the key and the frequency of that number as the value. Then loop through the map again and return the key that has a value of 1.
I said my approach would take O(n) time complexity. The interviewer told me to optimize it in less than O(n) complexity. I said that we cannot, as we have to go through the entire array with a million elements.
Finally, he didn't seem satisfied and moved onto the next question.
I understand going through million elements in the array is expensive, but how could we find a unique element without doing a linear scan of the entire array?
PS: the array is not sorted.
I'm certain that you can't solve this problem without going through the whole array, at least if you don't have any additional information (like the elements being sorted or restricted to certain values), so the problem has a minimum time complexity of O(n). You can, however, reduce the memory complexity to O(1) with a XOR-based solution, if every element except the unique one appears in the array an even number of times, which seems to be the most common variant of the problem, if that's of any interest to you:
int unique(int[] array)
{
    int unpaired = array[0];
    for (int i = 1; i < array.length; i++)
        unpaired = unpaired ^ array[i];
    return unpaired;
}
Basically, every XORed element cancels out with the other one, so your result is the only element that didn't cancel out.
Assuming the array is unordered, you can't. Every value is independent of the others, so nothing can be deduced about one value from any of the other values.
If it's an ordered array of values, then that's another matter and depends entirely on the ordering used.
I agree the easiest way is to have another container and store the frequency of the values.
In fact, since the number of elements in the array was fixed, you could do much better than what you have proposed.
By "creating a map with index as the number in the array and the value as the frequency of the number occurring in the array", you create a map with 2^32 positions (assuming the array had 32-bit integers), and then you have to pass though that map to find the first position whose value is one. It means that you are using a large auxiliary space and in the worst case you are doing about 10^6+2^32 operations (one million to create the map and 2^32 to find the element).
Instead of doing so, you could sort the array with some n*log(n) algorithm and then search for the element in the sorted array, because in your case, n = 10^6.
For instance, using the merge sort, you would use a much smaller auxiliary space (just an array of 10^6 integers) and would do about (10^6)*log(10^6)+10^6 operations to sort and then find the element, which is approximately 21*10^6 (many many times smaller than 10^6+2^32).
PS: sorting the array decreases the search from a quadratic to a linear cost, because with a sorted array we only have to look at adjacent positions to check whether the element at the current position is unique.
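A minimal Java sketch of that sort-then-scan idea (the method name is mine; it assumes every value except the unique one appears at least twice):

    // Sort a copy, then find the value that differs from both of its neighbours.
    static int findUnique(int[] arr) {
        int[] a = arr.clone();
        java.util.Arrays.sort(a);                                   // O(n log n)
        for (int i = 0; i < a.length; i++) {
            boolean differsFromPrev = (i == 0) || a[i] != a[i - 1];
            boolean differsFromNext = (i == a.length - 1) || a[i] != a[i + 1];
            if (differsFromPrev && differsFromNext) {               // duplicates are adjacent after sorting
                return a[i];
            }
        }
        throw new IllegalArgumentException("no unique element found");
    }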
Your approach seems fine. It could be that he was looking for an edge case where the array is of even size, meaning there are either no unmatched elements or there are two or more. He just went about asking it the wrong way.
Given an array with n elements, how to find the number of elements greater than or equal to a given value (x) in the given range index i to index j in O(log n) or better complexity?
My implementation is below, but it is O(n):
for (a = i; a <= j; a++)
    if (p[a] >= x)   // p[] is array containing n elements
        count++;
If you are allowed to preprocess the array, then with O(n log n) preprocessing time, we can answer any [i,j] query in O(log n) time.
Two ideas:
1) Observe that it is enough to be able to answer [0, i] and [0, j] queries, since the answer for [i, j] is the answer for [0, j] minus the answer for [0, i-1].
2) Use a persistent* balanced order statistics binary tree, which maintains n versions of the tree; version i is formed from version i-1 by adding a[i] to it. To answer query([0, i], x), you query the version-i tree for the number of elements >= x (basically rank information). An order statistics tree lets you do that.
*: persistent data structures are an elegant functional programming concept for immutable data structures and have efficient algorithms for their construction.
If the array is sorted in ascending order, you can locate the first value greater than or equal to X with a binary search, and the number of elements greater than or equal to X is the number of items from that position to the end. That is O(log n).
If the array is not sorted there is no way of doing it in less than O(n) time since you will have to examine every element to check if it's greater than or equal to X.
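A minimal Java sketch of that binary search on an ascending sorted array (the method name is mine):

    // Count how many elements of the ascending-sorted array p are >= x, in O(log n).
    static int countAtLeast(int[] p, int x) {
        int lo = 0, hi = p.length;              // lo converges to the first index with p[index] >= x
        while (lo < hi) {
            int mid = lo + (hi - lo) / 2;
            if (p[mid] >= x) {
                hi = mid;
            } else {
                lo = mid + 1;
            }
        }
        return p.length - lo;                   // everything from lo to the end is >= x
    }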
Impossible in O(log N), because you have to inspect all the elements, so an O(N) method is expected.
The standard algorithm for this is based on quicksort's partition, sometimes called quick-select.
The idea is that you don't sort the array, but rather just partition the section containing x, and stop when x is your pivot element. After the procedure is completed, all elements greater than or equal to x are to the right of x. This is the same procedure as when finding the k-th largest element.
Read about a very similar problem at How to find the kth largest element in an unsorted array of length n in O(n)?.
The requirement index i to j is not a restriction that introduces any complexity to the problem.
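As a rough illustration of the partitioning idea in Java (the method name and bounds handling are mine): partition the range so that everything greater than or equal to x ends up on the right, and the count falls out of the partition point. Note that a plain counting loop is already O(n); partitioning mainly helps if you also need the qualifying elements grouped together.

    // Partition p[i..j] so values < x come first and values >= x come last;
    // return how many values in the range are >= x.
    static int partitionCount(int[] p, int i, int j, int x) {
        int firstAtLeast = i;                   // next slot for a value < x
        for (int k = i; k <= j; k++) {
            if (p[k] < x) {
                int tmp = p[k];                 // move the smaller value into the "< x" prefix
                p[k] = p[firstAtLeast];
                p[firstAtLeast] = tmp;
                firstAtLeast++;
            }
        }
        return j - firstAtLeast + 1;            // p[firstAtLeast..j] are the values >= x
    }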
Given your requirements where the data is not sorted in advance and constantly changing between queries, O(n) is the best complexity you can hope to achieve, since there's no way to count the number of elements greater than or equal to some value without looking at all of them.
It's fairly simple if you think about it: you cannot avoid inspecting every element of a range for any type of search if you have no idea how it's represented/ordered in advance.
You could construct a balanced binary tree, or even radix sort on the fly, but you're just pushing the overhead elsewhere, to the same linear or worse, linearithmic O(N log N) complexity, since such algorithms once again have you inspecting every element in the range first in order to sort it.
So there's actually nothing wrong with O(N) here. That is the ideal, and you're looking at either changing the whole nature of the data involved outside to allow it to be sorted efficiently in advance or micro-optimizations (ex: parallel fors to process sub-ranges with multiple threads, provided they're chunky enough) to tune it.
In your case, your requirements seem rigid so the latter seems like the best bet with the aid of a profiler.
I am aware of the maximum subarray sum problem and its O(n) algorithm. This question modifies that problem by using a circular linked list:
Find the sequence of numbers in a circular linked list with maximum sum.
Now what if sum of all entries is zero?
To me, the only approach is to modify the array solution: have the algorithm loop around and start over at the beginning of the list once the first iteration is done, then do the same thing over up to twice the entire list and take the maximum. The downside is that there might be many tricky cases to handle if I do it this way, for example, if the list looks like:
2 - 2 - 2 - 2 back to front
Then it's very tricky to not include the same element twice
Is there a better algorithm?
Thanks!!
First of all, it doesn't matter whether the data structure is a linked list or an array, so I will use an array for simplicity.
I don't really understand your algorithm, but it seems that you are going to do something like duplicating the array at the back of the original and running Kadane's algorithm on this doubled array. This is a wrong approach, and a counter-example has been given by #RanaldLam.
To solve it, we need to discuss it in three cases:
1) All negative. In this case, the maximum element of the array is the answer, and an O(N) scan does the job.
2) The maximum sub-array does not require wrapping, for example a = {-1, 1, 2, -3}. In this case, ordinary Kadane's algorithm does the job, with time complexity O(N).
3) The maximum sub-array requires wrapping, for example a = {1, -10, 1}. This case implies another fact: since the elements inside the maximum sub-array wrap around, the elements that are not inside the maximum sub-array do not wrap around. Therefore, as long as we know the sum of these non-contributing elements, we can calculate the correct sum of the contributing elements by subtracting max_non_contributing_sum from the total sum of the array.
But how do we calculate max_non_contributing_sum in case 3? This is a bit tricky: since these non-contributing elements do not wrap around, we can simply invert the sign of every element and run Kadane's algorithm on this inverted array, which again takes O(N). (The maximum sub-array of the inverted array corresponds to the minimum-sum block of the original array, which is exactly the block we want to exclude.)
Finally, we compare the non-wrapping sum (case 2) with the wrapping sum (case 3); the answer is the bigger one.
In summary, all cases take O(N), so the total complexity of the algorithm is O(N).
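A minimal Java sketch following those three cases (the method name is mine; it assumes the circular list has already been copied into an int array):

    // Maximum circular sub-array sum, covering the three cases above.
    static int maxCircularSum(int[] a) {
        int total = 0;
        int maxKadane = Integer.MIN_VALUE, cur = 0;       // best non-wrapping sum (case 2)
        int minKadane = Integer.MAX_VALUE, curMin = 0;    // minimum sub-array = non-contributing block (case 3)
        for (int v : a) {
            total += v;
            cur = Math.max(v, cur + v);
            maxKadane = Math.max(maxKadane, cur);
            curMin = Math.min(v, curMin + v);
            minKadane = Math.min(minKadane, curMin);
        }
        if (maxKadane < 0) {
            return maxKadane;                             // case 1: all negative, take the largest element
        }
        return Math.max(maxKadane, total - minKadane);    // case 2 vs. case 3 (wrapping)
    }

Instead of literally inverting the signs and running Kadane's algorithm a second time, this sketch tracks the minimum sub-array sum directly, which amounts to the same thing.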
You're absolutely right. There is no better algorithm.
So I am currently learning Java and I was asking myself why the insertion sort method doesn't need to use the swap operation. As far as I understood, elements get swapped, so wouldn't it be useful to use the swap operation in this sorting algorithm?
As I said, I am new to this, but I try to understand the background of these algorithms and why they are the way they actually are.
Would be happy for some insights :)
B.
Wikipedia's article for Insertion sort states
Each iteration, insertion sort removes one element from the input data, finds the location it belongs within the sorted list, and inserts it there. It repeats until no input elements remain. [...] If smaller, it finds the correct position within the sorted list, shifts all the larger values up to make a space, and inserts into that correct position.
You can consider this shift as an extreme swap. What actually happens is that the value being inserted is stored in a placeholder and compared against the other values. If those values are larger, they are simply shifted, i.e. they replace the previous (or next) position in the list/array. The placeholder's value is then put into the position from which the last element was shifted.
Insertion Sort does not perform swapping. It performs insertions by shifting elements in a sequential list to make room for the element that is being inserted.
That is why it is an O(N^2) algorithm: for each element out of N, there can be O(N) shifts.
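A minimal Java sketch of insertion sort done with shifts (no swap calls):

    // Insertion sort: shift larger elements one place right, then drop the saved value into the gap.
    static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int value = a[i];                   // the "placeholder" for the element being inserted
            int j = i - 1;
            while (j >= 0 && a[j] > value) {
                a[j + 1] = a[j];                // shift: one assignment per moved element
                j--;
            }
            a[j + 1] = value;                   // insert into the freed position
        }
    }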
So, you could do insertion sort by swapping.
But is that the best way to do it? You should think about what a swap is...
temp = a
a = b
b = temp
There are 3 assignments for a single swap.
E.g., take [2, 3, 1].
If the above list is to be sorted with swaps, you would 1. swap 3 and 1, then 2. swap 1 and 2, for a total of 6 assignments.
Now, instead of swapping, if you just shift 2 and 3 one place to the right (1 assignment each) and then put 1 in array[0], you end up with just 3 assignments instead of the 6 you would do with swapping.
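For contrast, a minimal sketch (mine, not from the answer above) of a swap-based insertion sort; each inner step costs three assignments instead of one:

    // Swap-based insertion sort: every inner step swaps two adjacent elements.
    static void insertionSortWithSwaps(int[] a) {
        for (int i = 1; i < a.length; i++) {
            for (int j = i; j > 0 && a[j - 1] > a[j]; j--) {
                int temp = a[j];                // three assignments per swap
                a[j] = a[j - 1];
                a[j - 1] = temp;
            }
        }
    }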