More efficient way to count single int from array? [closed] - java

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I am trying to squeeze every bit of performance out of my Java Othello program, and I have a point where I need to count the number of times a given number appears in an array. For instance, for array[]{1,1,2,1,0,1}, count(1) would return 4. Below was an attempt I made at speed by counting all numbers at once, but this was slower:
public byte count(int color) {
    byte count[] = new byte[3];
    for (byte i = 0; i < 64; i++)
        ++count[state[i]];
    return count[color];
}
So far this is the most efficient code I have tested:
public byte count(int color) {
    byte count = 0;
    for (byte i = 0; i < 64; i++)
        if (this.get(i) == color)
            count++;
    return count;
}
Does anyone think they could squeeze some more speed out of this? I only need the count of the number specified, nothing more.

Use int, not byte - internally, Java converts the byte to an int, then increments it, then converts it back to a byte; using an int obviates the need for the type conversions.
You can also try using an AtomicInteger, its getAndIncrement method may be faster than the ++ operator.
You can also unroll your loop; this will reduce the number of times that i < 64 is evaluated. Try using an AtomicInteger for i, and use getAndIncrement instead of ++
for (int i = 0; i < 64;) {
    if (this.get(i++) == color) ...
    if (this.get(i++) == color) ...
    if (this.get(i++) == color) ...
    if (this.get(i++) == color) ...
}
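A filled-in sketch of that unrolled version with an int counter, assuming the same get(i) accessor and 64-cell board as in the question (an illustration of the two suggestions combined, not measured code):
public int count(int color) {
    int count = 0;                      // int avoids the byte<->int conversions
    for (int i = 0; i < 64; ) {         // 4x unrolled: the i < 64 test runs 16 times instead of 64
        if (this.get(i++) == color) count++;
        if (this.get(i++) == color) count++;
        if (this.get(i++) == color) count++;
        if (this.get(i++) == color) count++;
    }
    return count;
}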
Changing the for loop to a do-while loop may be slightly faster - the for loop has a conditional jump and an unconditional jump, but a do-while loop only has a conditional jump.
You can do this in parallel (thread1 counts elements 0-15, thread2 counts elements 16-31, etc), but the cost of creating the threads probably isn't worth it.

Try making count an int instead of a byte. Some architectures have trouble handling single bytes, so a byte is smaller in memory but more awkward to do calculations with.

1) this.get(i) in version 2 seems suspicious; if we are dealing with an array, array[i] is supposed to be more efficient (see the sketch after this list).
2) I would replace
byte count = 0;
for (byte i = 0; i < 64; i++)
...
with
int count = 0;
for (int i = 0; i < 64; i++)
...
otherwise Java will need to promote byte operands to int to do arithmetic and then truncate results to byte.
3) Use http://code.google.com/p/caliper/ to get good benchmarks
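Putting 1) and 2) together, a minimal sketch, assuming the board is stored in a field called state as in the question's first version:
public int count(int color) {
    int count = 0;
    for (int i = 0; i < 64; i++) {
        if (state[i] == color) {   // direct array access instead of this.get(i)
            count++;
        }
    }
    return count;
}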

You can use the collections framework for this purpose, but your array should be of type Integer because collections don't support primitive types.
public int count(int color) {
    List<Integer> asList = Arrays.asList(your_Array);
    return Collections.frequency(asList, color);
}
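If the board is a primitive int[], you would have to box the values first. A quick sketch using the question's example data (the boxing step is my addition, not part of the answer above; it uses java.util.Arrays and java.util.Collections):
int[] values = {1, 1, 2, 1, 0, 1};
Integer[] boxed = Arrays.stream(values).boxed().toArray(Integer[]::new);
int count = Collections.frequency(Arrays.asList(boxed), 1); // returns 4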

Related

Get max length of row and column in java two dimensional array

What is the best and most efficient way to get the maximum i, which is the number of rows, and j, which is the number of columns, in a two-dimensional array?
Hopefully the time complexity can be lower than O(n) in every case - no explicit loop, yet still finding the maximum j.
For example, if I have an array like this one
[
[18,18,19,19,20,22,22,24,25,26],
[1,2,3],
[0,0,0,0]
]
Then I want to get i = 3 and j = 10 here as a result.
Can anyone help me?
You can avoid writing the loop yourself, but you can't avoid having a runtime of at least O(n), since "someone" needs to loop the source array.
Here is a possible way to do that in Java 8:
Arrays.stream(arr).map(row -> row.length).max(Integer::compare).get();
This returns the maximum length of a "row" in your 2d array:
10
Another version which avoids using the Comparator and therefore might be a bit easier to read:
Arrays.stream(arr).mapToInt(row -> row.length).max().getAsInt();
arr is supposed to be your source array.
Edit: the older version used .max(Integer::max), which is wrong and causes wrong results. See this answer for an explanation.
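For example, applied to the array from the question (using orElse(0) so an empty outer array yields 0 instead of throwing; this is just an illustrative variation of the line above):
int[][] arr = {
    {18, 18, 19, 19, 20, 22, 22, 24, 25, 26},
    {1, 2, 3},
    {0, 0, 0, 0}
};
int i = arr.length;                                                      // 3 rows
int j = Arrays.stream(arr).mapToInt(row -> row.length).max().orElse(0); // longest row: 10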
Assuming your array does not contain null values, you could write something like this:
private static final Comparator<int[]> lengthComparator = new Comparator<int[]>() {
    @Override
    public int compare(int[] o1, int[] o2) {
        return o1.length - o2.length;
    }
};

@Test
public void soArrayMaxLength() {
    int[][] array = new int[][] {
        {18, 18, 19, 19, 20, 22, 22, 24, 25, 26},
        {1, 2, 3},
        {0, 0, 0, 0}
    };
    int i = array.length;
    Optional<int[]> longestArray =
        Arrays.stream(array)
              .max(lengthComparator);
    int j = longestArray.isPresent() ? longestArray.get().length : 0;
    System.out.println(String.format("i=%d j=%d", i, j));
}
If you happen to create a parallel stream from the array instead, you could speed this up even further.
Another option is to sort the array by length; but quicksort has an average complexity of O(n*log(n)), so this isn't any faster:
int i = array.length;
Arrays.parallelSort(array, lengthComparator);
int j = array[i-1].length;
System.out.println(String.format("i=%d j=%d", i, j));
Your i is the number of rows, which is simply the length of the 2-D array (assuming you are OK with including empty/null rows in this count).
The max row length j, however, would require iterating over all the rows to find the row i having the maximum arr[i].length.
There will always be a loop [1], even though the looping will be implicit in solutions that use Java 8 streams.
The complexity of getting the max number of columns is O(N) where N is the number of rows.
Implicit looping using streams probably will be less efficient than explicit looping using for.
Here's a neat solution using a for loop:
int max = 0;
for (int i = 0; i < array.length; i++) {
    max = Math.max(max, array[i].length);
}
This works in the edge-case where array.length == 0, but if array or any array[i] is null you will get a NullPointerException. (You could modify the code to allow for that, but if the nulls are not expected, an NPE is probably a better outcome.)
[1] In theory, if you unrolled the loops for all cases of array.length from 0 to Integer.MAX_VALUE, you would not need a loop. However, the code would not compile on any known Java compiler because it would exceed JVM limits on bytecode segments, etcetera. And the performance would be terrible for various reasons.
You could try it this way: loop over the outer array and find the maximum length among the inner arrays:
byte[][] arrs = new byte[3][]; // the rows must be filled in first; they are null until initialized
int maxLength = 0;
for (byte[] array : arrs) {
    if (maxLength < array.length) {
        maxLength = array.length;
    }
}

Efficient way of altering data in an array with threads

I've been trying to figure out the most efficient way for many threads to alter a very big byte array at the bit level. For ease of explanation I'll base the question around a multithreaded Sieve of Eratosthenes. The code should not be expected to be fully complete, as I'll omit certain parts that aren't directly related, and the sieve won't be fully optimised, as that's not the question. The sieve works by saving which values are primes in a byte array, where each byte holds 7 numbers (we can't use the first bit because everything is signed).
Let's say our goal is to find all the primes below 1 000 000 000 (1 billion). As a result we would need a byte array of length 1 000 000 000 / 7 + 1, or 142 857 143 (about 143 million).
class Prime {
    int max = 1000000000;
    byte[] b = new byte[(max / 7) + 1];

    Prime() {
        for (int i = 0; i < b.length; i++) {
            b[i] = (byte) 127; // setting all values to 1 at the start
        }
        findPrimes();
    }

    /*
     * Calling remove will set the bit associated with the number
     * to 0, signaling that it isn't a prime.
     */
    void remove(int i) {
        int j = i / 7; // which array index to access
        b[j] = (byte) (b[j] & ~(1 << (i % 7)));
    }

    void findPrimes() {
        remove(1); // 1 is not a prime and we want to remove it from the start
        int prime = 2;
        while (prime * prime < max) {
            for (int i = prime * 2; i < max; i = prime + i) {
                remove(i);
            }
            prime = nextPrime(prime); // returns the next prime from the list
        }
    }

    ... // omitting code not relevant to the question
}
Now we have a basic outline where something runs through all the numbers in the multiplication table of a given prime and calls remove to set the bit that corresponds to each number to 0 once we've found out it isn't a prime.
Now, to up the ante, we create threads that do the checking for us. We split the work so that each thread takes a part of the removing from the table. For example, if we have 4 threads and we are running through the multiplication table for the prime 2, we would assign thread 1 everything in the 8 times table with a starting offset of 2, that is 4, 10, 18, ...; the second thread gets an offset of 4, so it goes through 6, 14, 22, ...; and so on. They then call remove on the ones they want.
Now to the real question. As most can see, while the prime is less than 7 we will have multiple threads accessing the same array index. While running through 2, for example, threads 1, 2 and 3 will all try to access b[0] to alter the byte, which causes a race condition that we don't want.
The question therefore is: what's the best way of optimising access to the byte array?
So far the thoughts I've had for it are:
Putting synchronized on the remove method. This would obviously be very easy to implement, but it is a horrible idea as it would remove any gain from having threads.
Creating a mutex array equal in size to the byte array. To enter an index one would need the mutex for that same index. This would be fairly fast, but would require another very big array in memory, which might not be the best way to do it.
Limiting the numbers stored per byte to the prime we start running on. So if we start on 2 we would have 2 numbers per byte. This would however increase our array length to 500 000 000 (500 million).
Are there other ways of doing this in a fast and optimal way without overusing the memory?
(This is my first question here, so I tried to be as detailed and thorough as possible, but I would accept any comments on how I can improve the question - too much detail, needs more detail, etc.)
You can use an array of atomic integers for this. Unfortunately there isn't a getAndAnd, which would be ideal for your remove() function, but you can CAS in a loop:
java.util.concurrent.atomic.AtomicIntegerArray aia;
....
void remove(int i) {
    int j = i / 32; // which array index to access
    boolean updated;
    do {
        int oldVal = aia.get(j);
        int newVal = oldVal & ~(1 << (i % 32));
        updated = aia.weakCompareAndSet(j, oldVal, newVal);
    } while (!updated);
}
Basically you keep trying to adjust the slot to remove that bit, but you only succeed if nobody else modifies it out from under you. Safe, and likely to be very efficient. weakCompareAndSet is basically an abstracted Load-link/Store conditional instruction.
BTW, there's no reason not to use the sign bit.
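To illustrate that last point, a sketch of what the original byte-based remove might look like if all 8 bits of each byte are used, sign bit included (this assumes b is allocated as new byte[(max / 8) + 1] and initialised to (byte) 0xFF instead of 127):
void remove(int i) {
    int j = i / 8;                          // which byte holds number i
    b[j] = (byte) (b[j] & ~(1 << (i % 8))); // clear bit i % 8; bit 7 is the sign bit, which is fine here
}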
I think you could avoid synchronizing threads...
For example, this task:
for(int i = prime*2; i < max; i = prime + i) {
remove(i);
}
it could be partitioned into small tasks:
for (int t = 0; t < threadPool; t++) {
    int totalPos = max / 8;                    // dividing the virtual array into bytes
    int partitionSize = totalPos / threadPool; // dividing the bytes among the thread pool
    removeAll(prime, partitionSize * t * 8, (t + 1) * partitionSize * 8);
}
....
// no collisions!
void removeAll(int prime, int initial, int max) {
    int k = initial / prime;
    if (k < 2) k = 2;
    for (int i = k * prime; i < max; i = i + prime) {
        remove(i);
    }
}
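One way those partitioned tasks might actually be driven (a sketch only; the removeAllParallel name, the executor and the thread-pool size are my assumptions, not part of the answer above, and it needs java.util and java.util.concurrent imports):
void removeAllParallel(final int prime, int threadPool, ExecutorService executor)
        throws InterruptedException {
    int totalPos = max / 8;
    int partitionSize = totalPos / threadPool;
    List<Callable<Void>> tasks = new ArrayList<>();
    for (int t = 0; t < threadPool; t++) {
        final int from = partitionSize * t * 8;
        final int to = (t + 1) * partitionSize * 8;
        tasks.add(() -> { removeAll(prime, from, to); return null; }); // each task covers a disjoint byte range
    }
    executor.invokeAll(tasks); // blocks until every partition of this prime is done
}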

Is this a correct BubbleSort Algorithm? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 8 years ago.
I wrote the following code to sort the elements in the array values using a BubbleSort. Is this correct, or is there anything missing? My test cases pass, but maybe the test cases themselves are missing something.
public void sort(ValuePair[] values) {
    ValuePair value = null;
    for (int i = 0; i < values.length; i++) {
        for (int j = 1 + i; j < values.length; j++) {
            if (values[i].getValue() > values[j].getValue()) {
                value = values[j];
                values[j] = values[i];
                values[i] = value;
            }
        }
    }
}
Your code is correct in that it will sort the array. However, it always performs all N*(N-1)/2 comparisons, even when the array is already sorted, and it is not the typical algorithm used to implement a bubble sort. The typical algorithm uses a repeat loop with a test for "sorted"; this is somewhat more efficient because it terminates as soon as the array is sorted (consider the case where you start with a sorted array).
Read the Wikipedia article on bubble sort; it demonstrates this very well.
A somewhat improved pseudocode version of Bubble Sort goes something like this:
procedure bubbleSort( A : list of sortable items )
    n = length(A)
    repeat
        swapped = false
        for i = 1 to n-1 inclusive do
            if A[i-1] > A[i] then
                swap(A[i-1], A[i])
                swapped = true
            end if
        end for
        n = n - 1
    until not swapped
end procedure
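Translated to Java for the ValuePair[] case in the question, a sketch of the same idea might look like this:
public void sort(ValuePair[] values) {
    int n = values.length;
    boolean swapped;
    do {
        swapped = false;
        for (int i = 1; i < n; i++) {
            if (values[i - 1].getValue() > values[i].getValue()) {
                ValuePair tmp = values[i - 1]; // swap adjacent out-of-order elements
                values[i - 1] = values[i];
                values[i] = tmp;
                swapped = true;
            }
        }
        n--; // after each pass the largest remaining element is in its final place
    } while (swapped);
}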
The lesson here is that while your algorithm and the Wikipedia algorithm both have the same big-O characteristics, a small change in the way they are implemented can make a significant difference in their actual performance.

Improvement of Algorithm: Counting set bits in Byte-Arrays

We store knowledge in byte arrays as bits. Counting the number of set bits is pretty slow. Any suggestion to improve the algorithm is welcome:
public static int countSetBits(byte[] array) {
    int setBits = 0;
    if (array != null) {
        for (int byteIndex = 0; byteIndex < array.length; byteIndex++) {
            for (int bitIndex = 0; bitIndex < 7; bitIndex++) {
                if (getBit(bitIndex, array[byteIndex])) {
                    setBits++;
                }
            }
        }
    }
    return setBits;
}

public static boolean getBit(int index, final byte b) {
    byte t = setBit(index, (byte) 0);
    return (b & t) > 0;
}

public static byte setBit(int index, final byte b) {
    return (byte) ((1 << index) | b);
}
Counting the bits of a byte array of length 156'564 takes 300 ms - that's too much!
Try Integer.bitCount to obtain the number of bits set in each byte. It will be more efficient if you can switch from a byte array to an int array. If that is not possible, you could also construct a look-up table for all 256 byte values to quickly look up the count rather than iterating over individual bits.
And if it's always the whole array's count you're interested in, you could wrap the array in a class that stores the count in a separate integer whenever the array changes. (edit: Or, indeed, as noted in comments, use java.util.BitSet.)
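A minimal sketch of the Integer.bitCount approach applied directly to the existing byte[] (the & 0xFF mask keeps the sign bit from getting in the way):
public static int countSetBits(byte[] array) {
    int setBits = 0;
    if (array != null) {
        for (byte b : array) {
            setBits += Integer.bitCount(b & 0xFF); // counts all 8 bits of this byte
        }
    }
    return setBits;
}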
I would use the same global loop but instead of looping inside each byte I would simply use a (precomputed) array of size 256 mapping bytes to their bit count. That would probably be very efficient.
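A sketch of that precomputed 256-entry table (building the table with Integer.bitCount is my assumption; any other method of filling it would do):
private static final int[] BIT_COUNTS = new int[256];
static {
    for (int v = 0; v < 256; v++) {
        BIT_COUNTS[v] = Integer.bitCount(v); // bit count of every possible unsigned byte value
    }
}

public static int countSetBits(byte[] array) {
    int setBits = 0;
    for (byte b : array) {
        setBits += BIT_COUNTS[b & 0xFF]; // & 0xFF maps the signed byte to an index 0..255
    }
    return setBits;
}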
If you need even more speed, then you should separately maintain the count and increment it and decrement it when setting bits (but that would mean a big additional burden on those operations so I'm not sure it's applicable for you).
Another solution would be based on the BitSet implementation: it uses an array of long (not bytes), and here's how it counts:
int sum = 0;
for (int i = 0; i < wordsInUse; i++)
    sum += Long.bitCount(words[i]);
return sum;
I would use:
byte[] yourByteArray = ...
BitSet bitset = BitSet.valueOf(yourByteArray); // java.util.BitSet
int setBits = bitset.cardinality();
I don't know if it's faster, but I think it will be faster than what you have. Let me know?
Your method would look like
public static int countSetBits(byte[] array) {
    return BitSet.valueOf(array).cardinality();
}
You say:
We store knowledge in byte arrays as bits.
I would recommend to use a BitSet for that. It gives you convenient methods, and you seem to be interested in bits, not bytes, so it is a much more appropriate data type compared to a byte[]. (Internally it uses a long[]).
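For instance, a minimal sketch of working directly with a BitSet instead of a raw byte[] (all methods shown are from java.util.BitSet; the knowledge name is illustrative):
BitSet knowledge = new BitSet();          // grows as needed; all bits start out clear
knowledge.set(42);                        // set bit 42
boolean isSet = knowledge.get(42);        // read a single bit
int setBits = knowledge.cardinality();    // count of set bits
byte[] asBytes = knowledge.toByteArray(); // back to a byte[] if one is still needed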
By far the fastest way is counting the set bits "in parallel" within a word; the method is called Hamming weight and is implemented in Integer.bitCount(int i), as far as I know.
As per my understanding,
1 Byte = 8 Bits
So if the byte array size = n, then isn't the total number of bits = n*8?
Please correct me if my understanding is wrong.
Thanks,
Vinod

Arrays manipulation with respect to range of values

I have this requirement that I need to set the values in a byte array of size 20MB.
I'm looking for a Java API that does the following. I've gone through Apache Commons' ArrayUtils but couldn't find anything useful.
The operation should be something of this type. Say the values range from 0 to 100.
I'd like to manipulate the array such that values less than 15 are changed to 15 and values greater than 70 are changed to 70.
Basically, I'm looking for an operation that saves me doing this myself: iterate through the array; if a value is below 15, set it to 15; otherwise, if it is above 70, set it to 70.
Any help is appreciated.
Even if there's some third-party library which has this functionality, it's just going to be doing exactly the same operation - looping over an array. Fundamentally you need something like:
for (int i = 0; i < array.length; i++)
{
    array[i] = clamp(array[i], 15, 70);
}
...
public static byte clamp(byte value, byte min, byte max)
{
    return value < min ? min
         : value > max ? max
         : value;
}
You could implement this in native code if you really wanted, but I suspect you won't find an existing implementation. It's more likely that there are libraries which perform the sort of image manipulation you're interested in as image manipulation rather than as an array operation.
You could use Guava's Lists.transform method to update the values. However, this would result in a new array rather than updating the values in the existing array.
List<Byte> list = Bytes.asList(myArray);
List<Byte> trans = Lists.transform(list, new Function<Byte, Byte>(){...});
byte[] bytes = Bytes.toArray(trans);
However, given what you are trying to do, I would suggest just looping over the values.
I'd recommend that you write the simple loop, and profile it in the context of your application. Only if you can demonstrate that this code is the overall bottleneck it would make sense to try and make it faster.
I'd try something like this:
final int n = array.length;
for (int i = 0; i < n; i++) {
    int val = array[i];
    if (val < 15) {
        array[i] = 15;
    } else if (val > 70) {
        array[i] = 70;
    }
}
My final point is that this type of code is likely to be limited by memory bandwidth, so it seems unlikely that a native C solution would be a lot faster anyway.
Instead of checking the ranges as Jon Skeet proposes, you could create a lookup table for each of the 256 possible values a byte could have, i.e. something like
{15,15,15,15,15,15,15,15,15,15,15,15,15,15,15,16,17,18,...,69,70,70,70,70,...}
for (int i = 0; i < array.length; i++)
{
    array[i] = lookup[array[i]];
}
In C: Less branching, much faster. In Java: Unfortunately not faster, even a bit slower, maybe because Java's array range checks eat up the speed gained; and since Java's bytes are always signed, it's a bit more complicated than shown above.
In C, you could even do that for 16bit halfwords, making it faster again. (Probably by factor 2)
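For completeness, a sketch of how the signed-byte complication might be handled in Java - the lookup table below is a hypothetical one built for the question's 15/70 clamp, and the & 0xFF mask is what turns the signed byte into a 0..255 index:
byte[] lookup = new byte[256];
for (int v = 0; v < 256; v++) {
    lookup[v] = (byte) Math.min(Math.max(v, 15), 70); // clamp each possible unsigned byte value
}

for (int i = 0; i < array.length; i++) {
    array[i] = lookup[array[i] & 0xFF]; // & 0xFF: signed byte -> index 0..255
}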
EDIT: To my own shame, I must admit that proper testing revealed that the lookup table isn't faster in C. My first results were probably skewed by compiler optimisations. Anyway, at least on my machine,
if (array[i]<15) array[i]=15;
else if (array[i]>70) array[i]=70;
is noticeably faster than using the ternary operator.
