I need to count all possible ways to divide an array into smaller sub-arrays. We can divide the array vertically and horizontally. My algorithm works correctly, but its time complexity is too bad. Can you have a look at how to improve it?
Parameters
nStart - first row of sub-array
nEnd - last row of sub-array
mStart, mEnd - the same for the second dimension (columns)
check() - function checking the end condition
return - the number of different ways to divide the array. We keep dividing while check returns true.
public static long divide(int nStart, int nEnd, int mStart, int mEnd) {
    long result = 0;
    for (int i = 1; i < nEnd - nStart; i++) {
        if (check(nStart, nStart + i, mStart, mEnd) && check(nStart + i, nEnd, mStart, mEnd))
            result += divide(nStart, nStart + i, mStart, mEnd) * divide(nStart + i, nEnd, mStart, mEnd);
    }
    for (int i = 1; i < mEnd - mStart; i++) {
        if (check(nStart, nEnd, mStart, mStart + i) && check(nStart, nEnd, mStart + i, mEnd))
            result += divide(nStart, nEnd, mStart, mStart + i) * divide(nStart, nEnd, mStart + i, mEnd);
    }
    return (result == 0 ? 1 : result) % 1000000000;
}
Example
Input
2 2
10
01
Output 2
Input
3 2
101
010
Output 5
I think you need to know how the check() function works. We stop dividing when the next sub-array contains only ones or only zeros. Here is the code:
public static boolean check(int nStart, int nEnd, int mStart, int mEnd) {
    if ((nEnd - nStart) + (mEnd - mStart) == 2)
        return false;
    for (int i = mStart; i < mEnd; i++) {
        for (int j = nStart; j < nEnd; j++) {
            if (bar[i][j] != bar[mStart][nStart])
                return true;
        }
    }
    return false;
}
By looking at your code I can see that in each step of the recursion you divide your two-dimensional array into two arrays with a single horizontal or vertical cut. Then you verify that both of these parts fulfil some condition of yours defined by the check-method and, if so, then you put these two parts into a recursion. When the recursion can no longer be continued, you return 1. Below I assume that your algorithm always produces the result you want.
I'm afraid that an effective optimization of this algorithm is highly dependent on what the check-condition does. In the trivial case it would always return true, in which case the problem collapses into a straightforward mathematical problem that probably has a general non-recursive solution. A bit more complex, but still effectively solvable, would be a scenario where the condition only checks the shape of the array, meaning that e.g. check(1,5,1,4) would return the same result as check(3,7,5,8).
The most complex is of course the general case, where the check-condition can be anything. In this case there is not much that can be done to optimize your brute-force solution, but one thing that comes to my mind is adding a memory to your algorithm. You could use the java.awt.Rectangle class (or create your own class) to hold the dimensions of a sub-array and then have a java.util.HashMap to store the results of the executions of the divide-method for future reference, in case the method is called again with the same parameters. This would prevent the duplicate work that will probably occur.
So you define the hashmap as a static variable in your class:
static HashMap<Rectangle,Long> map = new HashMap<Rectangle,Long>();
then in the beginning of the divide-method you add the following code:
Rectangle r = new Rectangle(nStart,mStart,nEnd,mEnd);
Long storedRes = map.get(r);
if (storedRes != null) {
return storedRes;
}
and then you change the ending of the method into form:
result = (result == 0 ? 1 : result) % 1000000000;
map.put(r, result);
return result;
This should give a performance-boost for your algorithm.
To return to my earlier thought: if the check-condition is simple enough, this same optimization can be done even more effectively. For example, if your check-condition only checks the shape of the array, you will only need its width and height as a key to the map, which will decrease the size of the map and multiply the number of positive hits in it.
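For example, if the check-condition really did depend only on the shape of the sub-array, the memoized method could be keyed by (height, width) alone. A rough sketch, assuming the same divide and check signatures as above (and only valid under that shape-only assumption):

static HashMap<Long, Long> shapeMap = new HashMap<Long, Long>();

public static long divideByShape(int nStart, int nEnd, int mStart, int mEnd) {
    int height = nEnd - nStart;
    int width = mEnd - mStart;
    long key = ((long) height << 32) | width; // pack (height, width) into a single map key
    Long stored = shapeMap.get(key);
    if (stored != null) {
        return stored;
    }
    long result = 0;
    for (int i = 1; i < height; i++) {
        if (check(nStart, nStart + i, mStart, mEnd) && check(nStart + i, nEnd, mStart, mEnd))
            result += divideByShape(nStart, nStart + i, mStart, mEnd) * divideByShape(nStart + i, nEnd, mStart, mEnd);
    }
    for (int i = 1; i < width; i++) {
        if (check(nStart, nEnd, mStart, mStart + i) && check(nStart, nEnd, mStart + i, mEnd))
            result += divideByShape(nStart, nEnd, mStart, mStart + i) * divideByShape(nStart, nEnd, mStart + i, mEnd);
    }
    result = (result == 0 ? 1 : result) % 1000000000;
    shapeMap.put(key, result);
    return result;
}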
Related
I am trying to do the following exercise (found on Codility):
The way I have approached it is by using pointers. E.g. the binary representation of 25 is 11001. We start off with i = 0, j = 1, and a variable gLength = 0 that keeps track of the length of the gap.
If the i'th index is 1, check for the j'th index. If the j'th index is 0, increment gLength. If the j'th index is 1, check if gLength is greater than 0. If it is, then we need to store this length in an ArrayList as we have reached the end of the gap. Increment i and j, and repeat.
Here's the method in code:
public static int solution(int N) {
    String binaryStr = Integer.toBinaryString(N);

    // pointers
    int i = 0;
    int j = 1;

    // length of gap
    int gLength = 0;

    while (j < binaryStr.length() && i < j) {
        if (binaryStr.charAt(i) == 1) {
            if (binaryStr.charAt(j) == 0) {
                gLength++; // increment length of gap
            } else if (binaryStr.charAt(j) == 1) {
                // if the digit at the j'th position is the end of a gap, add the gap size to list.
                if (gLength > 0)
                    gapLengths.add(gLength);
                i++; // increment i pointer
            }
        } else {
            i++; // increment i pointer
        }
        j++; // increment j pointer
    }

    Collections.sort(gapLengths);

    // Line 45 (ERROR)
    int maxGap = gapLengths.get(gapLengths.size() - 1);

    return maxGap;
}
I get the following error:
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: -1
at java.util.ArrayList.elementData(ArrayList.java:400)
at java.util.ArrayList.get(ArrayList.java:413)
at Codility.solution(Codility.java:45)
at Codility.main(Codility.java:15)
I've marked down where line 45 is in the comments. After further investigating (with the debugger), I found out that I get the error because no length(s) seems to be getting added to the ArrayList. Does anybody know why?
I hope this was clear; if not, please let me know. I'm not sure if this method would execute in O(log n) time as required, but for now I just want to have something working; then I will think about the time complexity aspect of it.
Big thanks for any help.
The problem is if (binaryStr.charAt(i) == 1). You are comparing char with int.
Replace:
if (binaryStr.charAt(i) == 1)
and
if (binaryStr.charAt(j) == 0)
With:
if (binaryStr.charAt(i) == '1')
and
if (binaryStr.charAt(j) == '0')
Edit: (As pointed out by Andy)
Before doing int maxGap = gapLengths.get(gapLengths.size() - 1);, you need to check that gapLengths.size() > 0 to make sure you have at least 1 element in the ArrayList.
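In other words, something like this small sketch of that guard (it falls back to 0 when no gap was recorded):

int maxGap = 0; // default when the binary string has no closed gap
if (gapLengths.size() > 0) {
    Collections.sort(gapLengths);
    maxGap = gapLengths.get(gapLengths.size() - 1);
}
return maxGap;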
I don't want to be annoying, and I think the others have already offered great help with your algorithm, but I believe an easier approach is to use
String[] result = binaryStr.split("1");
And then just look for the longest element of the resulting array.
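A rough sketch of that idea (assuming N >= 1 as in the Codility task; the method name is mine). Note that split() keeps trailing zeros as the last element, and trailing zeros do not form a closed gap, so they are shifted away first:

public static int solutionBySplit(int N) {
    // shift away trailing zeros first: they never close a gap
    String bin = Integer.toBinaryString(N >>> Integer.numberOfTrailingZeros(N));
    int maxGap = 0;
    for (String zeros : bin.split("1")) {
        maxGap = Math.max(maxGap, zeros.length());
    }
    return maxGap;
}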
Edit: apparently I missed the part regarding the big O restriction, so I worked a different algorithm:
If you take a look at this page http://www.convertbinary.com/numbers.php
you'll notice that the smallest number with a gap of one 0 is 5, with 00 it is 9, with 000 it is 17, and so on in increasing order. The quick relation I noticed is that if you start at 5 and add 4 (5 - 1) you get the 00 gap at 9, then 9 + 8 = 17 gives the 000 gap, and so on.
I believe you might be able to come up with a certain fixed calculation to get the best performance out of this without having to do String or Char work.
A simple solution in Swift:
let number = 5101
let binaryGap = String(number, radix: 2)
    .componentsSeparatedByString("1")
    .map { (a) -> Int in a.characters.count }
    .maxElement()
Simple solution 100%
public int solution(final int N) {
    // Convert number to binary string
    String bin = Integer.toString(N, 2);
    System.out.println("binary equivalent = " + bin);

    int gap = 0;
    int maxGap = 0;

    for (int i = 1; i < bin.length(); i++) {
        if (bin.charAt(i) == '0') {
            gap++;
        }
        else if (bin.charAt(i) == '1') {
            if (gap > maxGap) {
                maxGap = gap;
            }
            gap = 0;
        }
    }
    return maxGap;
}
Java 8 implementation.
import java.util.Arrays;
import java.util.Optional;

class Solution {
    public int solution(int N) {
        while (N % 2 == 0) {
            N /= 2;
        }
        String binaryString = Integer.toBinaryString(N);
        String[] matches = binaryString.split("1");
        Optional<String> maxValueOptional = Arrays.stream(matches).max(String::compareTo);
        return maxValueOptional.isPresent() ? maxValueOptional.get().length() : 0;
    }
}
I'm learning algorithms and data structures and I'm now on the part about time and space complexity.
I have to solve a problem and then state (based on my code) its time and space complexity.
This is the code:
import java.util.Scanner;

public class B {
    public static int minSum = -1;

    public static void main(String[] args) {
        int objects, sumA = 0, sumB = 0;
        Scanner readInput = new Scanner(System.in);
        objects = readInput.nextInt();
        int[] trunk = new int[objects];
        if (objects == 0) {
            System.out.print(0 + "\n");
        } else if (objects == 1) {
            trunk[0] = readInput.nextInt();
            System.out.print(trunk[0] + "\n");
        } else {
            for (int i = 0; i < objects; i++) {
                trunk[i] = readInput.nextInt();
            }
            bruteforce(trunk, sumA, sumB, 0);
            System.out.println(minSum);
        }
    }

    public static void bruteforce(int[] trunk, int sumA, int sumB, int index) {
        int partialDiff;
        if (minSum == 0) {
            System.out.println(minSum);
            System.exit(0);
        } else if (index == trunk.length) {
            partialDiff = Math.abs(sumA - sumB);
            if (partialDiff < minSum || minSum == -1) {
                minSum = partialDiff;
            }
        } else {
            bruteforce(trunk, sumA + trunk[index], sumB, index + 1);
            bruteforce(trunk, sumA, sumB + trunk[index], index + 1);
        }
    }
}
Basically, the user first inputs the number of objects and then, for each object, its value. The algorithm distributes the objects between two bags and must calculate the minimum difference between the two bags' totals that can be achieved.
I believe it takes exponential time, but I'm struggling with an estimate for the space complexity. Can you point me in some direction?
The space complexity is linear - O(n).
You calculate this by multiplying the amount of memory used in each function call by the max recursion depth.
There is a constant amount of memory being used in each function call - just partialDiff and stack information.
To determine the max recursion depth, you can basically just look at index (since this is the variable that decides when it stops recursing deeper).
You call the function with index = 0.
At each recursive call, index increases by one.
As soon as index reaches the size of the array, it stops.
Note that function calls are depth-first, meaning it will completely evaluate the first call to bruteforce before the second call, thus only one will take up memory at a time.
So, for an array of length 2, it goes something like this: (Call 1 is the first function call, Call 2 the second)
Call with index 0
  Call 1 with index 1
    Call 1 with index 2
    Call 2 with index 2
  Call 2 with index 1
    Call 1 with index 2
    Call 2 with index 2
So the max depth (and thus space complexity) is 3, one more than the number of items in the array.
So it's memory used in each function call * max depth = constant * linear = linear.
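If you want to check this empirically, you can thread a depth parameter through a copy of the method. A sketch (bruteforceDepth is just a hypothetical instrumented variant; it only measures depth and does not compute the sums):

static int maxDepth = 0;

public static void bruteforceDepth(int[] trunk, int index, int depth) {
    maxDepth = Math.max(maxDepth, depth);
    if (index == trunk.length) {
        return; // base case: every object has been assigned to one of the two bags
    }
    // the same two recursive calls as in bruteforce, one per bag
    bruteforceDepth(trunk, index + 1, depth + 1);
    bruteforceDepth(trunk, index + 1, depth + 1);
}

Calling bruteforceDepth(trunk, 0, 1) for an array of length n leaves maxDepth at n + 1, the "one more than the number of items" from the trace above, and since each frame uses constant memory the total stays linear.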
We have triangle made of blocks. The topmost row has 1 block, the next row down has 2 blocks, the next row has 3 blocks, and so on. Compute recursively (no loops or multiplication) the total number of blocks in such a triangle with the given number of rows.
triangle(0) → 0
triangle(1) → 1
triangle(2) → 3
This is my code:
public int triangle(int rows) {
int n = 0;
if (rows == 0) {
return n;
} else {
n = n + rows;
triangle(rows - 1);
}
}
When writing a simple recursive function, it helps to split it into the "base case" (when you stop) and the case when you recurse. Both cases need to return something, but the recursive case is going to call the function again at some point.
public int triangle(int row) {
if (row == 0) {
return 0;
} else {
return row + triangle(row - 1);
}
}
If you look further into recursive definitions, you will find the idea of "tail recursion", which is usually best as it allows certain compiler optimisations that won't overflow the stack. My code example, while simple and correct, is not tail recursive.
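For the curious, a tail-recursive variant would thread an accumulator through the calls. A sketch (note that the JVM does not currently perform tail-call elimination, so in Java this is mostly of academic interest):

public int triangle(int row) {
    return triangleAcc(row, 0);
}

private int triangleAcc(int row, int acc) {
    if (row == 0) {
        return acc;                          // nothing left to add
    }
    return triangleAcc(row - 1, acc + row);  // the recursive call is the last action
}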
You are not making use of the return value of your function. Instead you always declare a new local variable. Otherwise your solution is quite close to the correct one. Also you should add another return in case you are not at row 0.
public static int triangle (int rows) {
int n = 0;
if (rows == 0) {
return n;
} else {
n = n + rows;
n = n + triangle(rows - 1);
}
return n;
}
I'm trying to make a decent Java program that generates the primes from 1 to N (mainly for Project Euler problems).
At the moment, my algorithm is as follows:
Initialise an array of booleans (or a bitarray if N is sufficiently large) so they're all false, and an array of ints to store the primes found.
Set an integer, s equal to the lowest prime, (ie 2)
While s is <= sqrt(N)
Set all multiples of s (starting at s^2) to true in the array/bitarray.
Find the next smallest index in the array/bitarray which is false, use that as the new value of s.
Endwhile.
Go through the array/bitarray, and for every value that is false, put the corresponding index in the primes array.
Now, I've tried skipping over numbers not of the form 6k + 1 or 6k + 5, but that only gives me a ~2x speed-up, whilst I've seen programs run orders of magnitude faster than mine (albeit with very convoluted code), such as the one here.
What can I do to improve?
Edit: Okay, here's my actual code (for N of 1E7):
int l = 10000000, n = 2, sqrt = (int) Math.sqrt(l);
boolean[] nums = new boolean[l + 1];
int[] primes = new int[664579];
while(n <= sqrt){
for(int i = 2 * n; i <= l; nums[i] = true, i += n);
for(n++; nums[n]; n++);
}
for(int i = 2, k = 0; i < nums.length; i++) if(!nums[i]) primes[k++] = i;
Runs in about 350ms on my 2.0GHz machine.
While s is <= sqrt(N)
One mistake people often make in such algorithms is not precomputing the square root.
while (s <= sqrt(N)) {
is much, much slower than
int limit = sqrt(N);
while (s <= limit) {
But generally speaking, Eiko is right in his comment. If you want people to offer low-level optimisations, you have to provide code.
update Ok, now about your code.
You may notice that the number of iterations in your code is only a little bigger than l. (You can put a counter inside the first for loop; it will be just 2-3 times l.) And, obviously, the complexity of your solution can't be less than O(l) (you can't have fewer than l iterations).
What can make a real difference is accessing memory effectively. Note that the guy who wrote that article tries to reduce the storage size not just because he's memory-greedy. Making compact arrays allows you to use the cache better and thus increase speed.
I just replaced boolean[] with a bit-packed int[] and achieved an immediate 2x speed gain (and 8x less memory). And I didn't even try to do it efficiently.
update2
That's easy. You just replace every assignment a[i] = true with a[i/32] |= 1 << (i%32) and each read operation a[i] with (a[i/32] & (1 << (i%32))) != 0. And boolean[] a with int[] a, obviously.
From the first replacement it should be clear how it works: if f(i) is true, then there's a bit 1 in an integer number a[i/32], at position i%32 (int in Java has exactly 32 bits, as you know).
You can go further and replace i/32 with i >> 5, i%32 with i&31. You can also precompute all 1 << j for each j between 0 and 31 in array.
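Putting the replacement into your snippet would look roughly like this (a sketch, keeping your l, n, and sqrt variables; only the marking loop is shown):

int l = 10000000, n = 2, sqrt = (int) Math.sqrt(l);
int[] nums = new int[(l >> 5) + 1]; // one bit per number instead of one byte per number

while (n <= sqrt) {
    for (int i = 2 * n; i <= l; i += n)
        nums[i >> 5] |= 1 << (i & 31);                      // was: nums[i] = true
    for (n++; (nums[n >> 5] & (1 << (n & 31))) != 0; n++); // was: nums[n]
}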
But sadly, I don't think in Java you could get close to C in this. Not to mention that that guy uses many other tricky optimizations, and I agree that his code would have been worth a lot more if he had commented it.
Using the BitSet will use less memory. The Sieve algorithm is rather trivial, so you can simply "set" the bit positions on the BitSet, and then iterate to determine the primes.
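A minimal sketch of that with java.util.BitSet (the method name is mine; a set bit marks a composite):

static int[] primesUpTo(int n) {
    BitSet composite = new BitSet(n + 1);
    for (int p = 2; (long) p * p <= n; p = composite.nextClearBit(p + 1)) {
        for (int multiple = p * p; multiple <= n; multiple += p) {
            composite.set(multiple);
        }
    }
    int[] primes = new int[n + 1];
    int count = 0;
    for (int i = 2; i <= n; i++) {
        if (!composite.get(i)) {
            primes[count++] = i;
        }
    }
    return Arrays.copyOf(primes, count);
}

For the question's use case you would call primesUpTo(10000000) once and keep the returned array.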
Did you also make the array smaller while skipping numbers not of the form 6k+1 and 6k+5?
I only tested with ignoring numbers of the form 2k and that gave me ~4x speed up (440 ms -> 120 ms):
int l = 10000000, n = 1, sqrt = (int) Math.sqrt(l);
int m = l / 2;
boolean[] nums = new boolean[m + 1];
int[] primes = new int[664579];
int i, k;

while (n <= sqrt) {
    int x = (n << 1) + 1;
    for (i = n + x; i <= m; nums[i] = true, i += x);
    for (n++; nums[n]; n++);
}

primes[0] = 2;
for (i = 1, k = 1; i < nums.length; i++) {
    if (!nums[i])
        primes[k++] = (i << 1) + 1;
}
The following is from my Project Euler library... It's a slight variation of the Sieve of Eratosthenes... I'm not sure, but I think it's called the Euler sieve.
1) It uses a BitSet (so 1/8th the memory).
2) It only uses the BitSet for odd numbers (another 1/2, hence 1/16th).
Note: the inner loop (for multiples) begins at "n*n" rather than "2*n", and only multiples at increments of "2*n" are crossed off... hence the speed-up.
private void beginSieve(int mLimit)
{
    primeList = new BitSet(mLimit >> 1);
    primeList.set(0, primeList.size(), true);

    int sqroot = (int) Math.sqrt(mLimit);
    primeList.clear(0);

    for (int num = 3; num <= sqroot; num += 2)
    {
        if (primeList.get(num >> 1))
        {
            int inc = num << 1;
            for (int factor = num * num; factor < mLimit; factor += inc)
            {
                //if( ((factor) & 1) == 1)
                //{
                primeList.clear(factor >> 1);
                //}
            }
        }
    }
}
and here's the function to check if a number is prime...
public boolean isPrime(int num)
{
    if (num < maxLimit)
    {
        if ((num & 1) == 0)
            return (num == 2);
        else
            return primeList.get(num >> 1);
    }
    return false;
}
You could do the step of "putting the corresponding index in the primes array" while you are detecting the primes, saving one pass through the array, but that's about all I can think of right now.
I wrote a simple sieve implementation recently for the fun of it using BitSet (everyone says not to, but it's the best off the shelf way to store huge data efficiently). The performance seems to be pretty good to me, but I'm still working on improving it.
import java.util.BitSet;

public class HelloWorld {
    private static int LIMIT = 2140000000; //Integer.MAX_VALUE broke things.
    private static BitSet marked;

    public static void main(String[] args) {
        long startTime = System.nanoTime();
        init();
        sieve();
        long estimatedTime = System.nanoTime() - startTime;
        System.out.println((float) estimatedTime / 1000000000); //23.835363 seconds
        System.out.println(marked.size()); //1070000000 ~= 127MB
    }

    private static void init()
    {
        double size = LIMIT * 0.5 - 1;
        marked = new BitSet();
        marked.set(0, (int) size, true);
    }

    private static void sieve()
    {
        int i = 0;
        int cur = 0;
        int add = 0;
        int pos = 0;
        while (((i << 1) + 1) * ((i << 1) + 1) < LIMIT)
        {
            pos = i;
            if (marked.get(pos++))
            {
                cur = pos;
                add = (cur << 1);
                pos += add * cur + cur - 1;
                while (pos < marked.length() && pos > 0)
                {
                    marked.clear(pos++);
                    pos += add;
                }
            }
            i++;
        }
    }

    private static void readPrimes()
    {
        int pos = 0;
        while (pos < marked.length())
        {
            if (marked.get(pos++))
            {
                System.out.print((pos << 1) + 1);
                System.out.print("-");
            }
        }
    }
}
With smaller LIMITs (say 10,000,000 which took 0.077479s) we get much faster results than the OP.
I bet Java's performance is terrible when dealing with bits...
Algorithmically, the link you point out should be sufficient.
Have you tried googling, e.g. for "java prime numbers"? I did and dug up this simple improvement:
http://www.anyexample.com/programming/java/java_prime_number_check_%28primality_test%29.xml
Surely, you can find more on Google.
Here is my code for the Sieve of Eratosthenes, and this is actually the most efficient that I could do:
final int MAX = 1000000;
int p[] = new int[MAX];
p[0] = p[1] = 1;
int prime[] = new int[MAX / 10];
prime[0] = 2;

void sieve()
{
    int i, j, k = 1;
    for (i = 3; i * i <= MAX; i += 2)
    {
        if (p[i] == 1)
            continue;
        for (j = i * i; j < MAX; j += 2 * i)
            p[j] = 1;
    }
    for (i = 3; i < MAX; i += 2)
    {
        if (p[i] == 0)
            prime[k++] = i;
    }
    return;
}
In an array, first we have to find whether a desired number exists in it or not.
If not, then how will I find the number nearest to the desired number in Java?
An idea:
int nearest = -1;
int bestDistanceFoundYet = Integer.MAX_VALUE;
// We iterate on the array...
for (int i = 0; i < array.length; i++) {
    // if we found the desired number, we return it.
    if (array[i] == desiredNumber) {
        return array[i];
    } else {
        // else, we consider the difference between the desired number and the current number in the array.
        int d = Math.abs(desiredNumber - array[i]);
        if (d < bestDistanceFoundYet) {
            // For the moment, this value is the nearest to the desired number...
            bestDistanceFoundYet = d; // Assign new best distance...
            nearest = array[i];
        }
    }
}
return nearest;
Another common definition of "closer" is based on the square of the difference. The outline is similar to that provided by romaintaz, except that you'd compute
long d = ((long)desiredNumber - array[i]);
and then compare (d * d) to the nearest distance.
Note that I've typed d as long rather than int to avoid overflow, which can happen even with the absolute-value-based calculation. (For example, think about what happens when desiredValue is at least half of the maximum 32-bit signed value, and the array contains a value with corresponding magnitude but negative sign.)
Finally, I'd write the method to return the index of the value located, rather than the value itself. In either of these two cases:
when the array has a length of zero, and
if you add a "tolerance" parameter that bounds the maximum difference you will consider as a match,
you can use -1 as an out-of-band value similar to the spec on indexOf.
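Putting those pieces together, here is a sketch of what I mean (the method name and the tolerance parameter are only for illustration, and I'm assuming the squared difference fits in a long, which holds unless the values span almost the whole int range):

/**
 * Returns the index of the element nearest to desiredNumber (by squared difference),
 * or -1 if the array is empty or no element is within tolerance.
 */
public static int nearestIndex(int[] array, int desiredNumber, long tolerance) {
    int bestIndex = -1;
    long bestSquared = Long.MAX_VALUE;
    for (int i = 0; i < array.length; i++) {
        long d = (long) desiredNumber - array[i]; // widen to long before subtracting
        long squared = d * d;
        if (squared < bestSquared) {
            bestSquared = squared;
            bestIndex = i;
        }
    }
    if (bestIndex >= 0 && Math.abs((long) desiredNumber - array[bestIndex]) > tolerance) {
        return -1; // nothing within the allowed tolerance
    }
    return bestIndex;
}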
//This will work
public int nearest(int of, List<Integer> in)
{
    int min = Integer.MAX_VALUE;
    int closest = of;

    for (int v : in)
    {
        final int diff = Math.abs(v - of);
        if (diff < min)
        {
            min = diff;
            closest = v;
        }
    }
    return closest;
}
If the array is sorted, then do a modified binary search. Basically if you do not find the number, then at the end of search return the lower bound.
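A sketch of such a modified binary search, assuming a non-empty sorted array (names are mine):

public static int nearestInSorted(int[] sorted, int key) {
    int lo = 0, hi = sorted.length - 1;
    while (lo < hi) {
        int mid = (lo + hi) >>> 1;
        if (sorted[mid] < key) {
            lo = mid + 1;   // the nearest value cannot be left of mid
        } else {
            hi = mid;       // sorted[mid] >= key, so it or something to its left is nearest
        }
    }
    // lo is now the lower bound; the answer is sorted[lo] or its left neighbour
    if (lo > 0 && (long) key - sorted[lo - 1] <= (long) sorted[lo] - key) {
        return sorted[lo - 1];
    }
    return sorted[lo];
}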
Pseudocode to return list of closest integers.
myList = new ArrayList();
if (array.length == 0) return myList;

myList.add(array[0]);
int closestDifference = abs(array[0] - numberToFind);

for (int i = 1; i < array.length; i++) {
    int currentDifference = abs(array[i] - numberToFind);
    if (currentDifference < closestDifference) {
        myList.clear();
        myList.add(array[i]);
        closestDifference = currentDifference;
    } else {
        if (currentDifference == closestDifference) {
            if (myList.get(0) != array[i] && myList.size() < 2) {
                myList.add(array[i]);
            }
        }
    }
}
return myList;
Use Array.indexOf() to find out whether the element exists or not. If it does not, iterate over the array and maintain a variable which holds the absolute value of the difference between the desired number and the i-th element. Return the element with the least absolute difference.
Overall complexity is O(2n), which can be further reduced to a single iteration over the array (that'd be O(n)). It won't make much difference though.
Only thing missing is the semantics of closer.
What do you do if you're looking for six and your array has both four and eight?
Which one is closest?
int d = Math.abs(desiredNumber - array[i]);
if (d < bestDistanceFoundYet) {
// For the moment, this value is the nearest to the desired number...
nearest = array[i];
}
In this way you find the last number that is closer than bestDistanceFoundYet to the desired number, because bestDistanceFoundYet stays constant and nearest just keeps the last value that passed the if (d < ...) test.
If you want to find the closest number at whatever distance from the desired number (the size of d doesn't matter), you also have to memorize the shortest distance found so far.
So in the if you can test:
if (d < d_last_memorized) { // the actual distance is shorter than the previous one
    // For the moment, this value is the nearest to the desired number...
    nearest = array[i];
    d_last_memorized = d; // this is the shortest delta found so far
}
A few things to point out:
1 - You can convert the array to a list using
Arrays.asList(yourIntegerArray);
2 - Using a list, you can just use indexOf().
3 - Consider a scenario where you have a list of some length, you want the number closest to 3, you've already found that 2 is in the array, and you know that 3 is not. Without checking the other numbers, you can safely conclude that 2 is the best, because it's impossible to be closer. I'm not sure how indexOf() works, however, so this may not actually speed you up.
4 - Expanding on 3, let's say that indexOf() takes no more time than getting the value at an index. Then if you want the number closest to 3 in an array and you already have found 1, and have many more numbers to check, then it'll be faster to just check whether 2 or 4 is in the array.
5 - Expanding on 3 and 4, I think it might be possible to apply this to floats and doubles, although it would require that you use a step size smaller than 1... calculating how small seems beyond the scope of the question, though.
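Here is a rough sketch of the idea from points 3 and 4, expanding outward from the desired number and using a HashSet for the membership test. The method name is made up, and this only pays off when the nearest value happens to be close to the target:

public static int nearestByExpansion(int[] array, int desiredNumber) {
    HashSet<Integer> values = new HashSet<Integer>();
    long maxDistance = 0; // upper bound on how far we may need to look
    for (int v : array) {
        values.add(v);
        maxDistance = Math.max(maxDistance, Math.abs((long) desiredNumber - v));
    }
    // walk outwards from the desired number: 0, then +/-1, +/-2, ...
    for (long d = 0; d <= maxDistance; d++) {
        long below = (long) desiredNumber - d;
        long above = (long) desiredNumber + d;
        if (below >= Integer.MIN_VALUE && values.contains((int) below)) return (int) below;
        if (above <= Integer.MAX_VALUE && values.contains((int) above)) return (int) above;
    }
    throw new IllegalArgumentException("array must not be empty");
}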
// paulmurray's answer to your question is really the best :
// The least square solution is way more elegant,
// here is a test code where numbertoLookFor
// is zero, if you want to try ...
import java.util.*;

public class main {
    public static void main(String[] args)
    {
        int[] somenumbers = {-2, 3, 6, 1, 5, 5, -1};
        ArrayList<Integer> l = new ArrayList<Integer>(10);
        for (int i = 0; i < somenumbers.length; i++)
        {
            l.add(somenumbers[i]);
        }

        Collections.sort(l,
            new java.util.Comparator<Integer>()
            {
                public int compare(Integer n1, Integer n2)
                {
                    return n1 * n1 - n2 * n2;
                }
            }
        );

        Integer first = l.get(0);
        System.out.println("nearest number is " + first);
    }
}
Integer[] somenumbers = getAnArrayOfSomenumbers();
int numbertoLookFor = getTheNumberToLookFor();
boolean arrayContainsNumber =
    new HashSet<Integer>(Arrays.asList(somenumbers))
        .contains(numbertoLookFor);
It's fast, too.
Oh - you wanted to find the nearest number? In that case:
Integer[] somenumbers = getAnArrayOfSomenumbers();
int numbertoLookFor = getTheNumberToLookFor();

List<Integer> l = new ArrayList<Integer>(
    Arrays.asList(somenumbers)
);
Collections.sort(l);

while (l.size() > 1) {
    if (numbertoLookFor <= l.get((l.size() / 2) - 1)) {
        l = l.subList(0, l.size() / 2);
    }
    else {
        l = l.subList(l.size() / 2, l.size());
    }
}
System.out.println("nearest number is " + l.get(0));
Oh - hang on: you were after a least squares solution?
Collections.sort(l, new Comparator<Integer>() {
    public int compare(Integer o1, Integer o2) {
        return (o1 - numbertoLookFor) * (o1 - numbertoLookFor) -
               (o2 - numbertoLookFor) * (o2 - numbertoLookFor);
    }
});
System.out.println("nearest number is " + l.get(0));