This is an odd question. I have an integer array in Java, where each int represents a color. They will either be 0xFFFFFFFF or 0x0. What would be the FASTEST way to find if this array contains ANY values equal to 0xFFFFFFFF?
This is my current code:
int length = w * h;
for (int i = 0; i < length; i++) {
    if (pixels[i] == 0xFFFFFFFF) {
        return true;
    }
}
I have no clue if there is a faster way to do this or not. I imagine you vets could have a trick or two though.
EDIT: Seeing as it is just a dumb array of pixels from Bitmap.getPixels(), there's no way it would be sorted or transformed to another storage structure. Thanks for the input, everyone, it seems like looping through is the best way in this case.
No, there is no faster way unless the array of integers is already sorted, which I doubt given it's an array of colours.
Scanning through an unsorted array takes linear time, O(n). That's what you're doing, and you exit the method as soon as a match is found, which is good too.
Without switching to some other data structure, no, there is no better way to find whether the array contains that value. You have to look at all the array elements to see if it's there, since if you don't check some particular location you might miss the one copy of that pixel color.
That said, there are alternative ways that you could solve this problem. Here are a few thoughts on how to speed this up:
If every value is guaranteed to be either white or black, you could store two extra boolean values alongside the array representing whether there are white or black pixels. That way, once you've run the scan once, you could just read the booleans back. You could also store a count of the number of white and black pixels along with the array, and then whenever you write a pixel update the count by decrementing the number of pixels of the original color and incrementing the number of pixels of the new color. This would then give you the ability to check if a pixel of a given color exists in O(1) by just seeing if the correct counter is nonzero.
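As a rough sketch of that counter idea (the class and method names here are mine, purely illustrative):

class PixelStats {
    private final int[] pixels;
    private int whiteCount;                        // number of 0xFFFFFFFF entries

    PixelStats(int[] pixels) {
        this.pixels = pixels;
        for (int p : pixels) {
            if (p == 0xFFFFFFFF) whiteCount++;     // one initial O(n) scan
        }
    }

    void setPixel(int i, int color) {
        if (pixels[i] == 0xFFFFFFFF) whiteCount--; // old color leaves the count
        if (color == 0xFFFFFFFF) whiteCount++;     // new color enters the count
        pixels[i] = color;
    }

    boolean hasWhite() {
        return whiteCount > 0;                     // O(1) existence check
    }
}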
Alternatively, if you happen to know something about the image (perhaps where the white and black pixels ought to be), you could consider doing the iteration in a different order. For example, if the pixels you're looking for tend to be clustered in the center of the image, rewriting the loop to check there first might be a good idea since if there are any pixels of that type you'll find them more rapidly. This still has the same worst-case behavior, but for "realistic" images might be much faster.
If you have multiple threads available and the array is really huge (millions of elements), you could consider having multiple threads each search a part of the array for the value. This would only be feasible if you had a reason to suspect that most of the image was not white.
Since in most realistic images you might assume that the image is a mixture of colors and you're just looking for something of one color, then you might want to consider storing the image as a sparse array, where you store a list of the pixels that happen to be of one color (say, white) and then assume everything else is black. If you expect most images to be a solid color with a few outliers, this might be a very good representation. Additionally, it would give you constant-time lookup of whether any black or white pixels exist - just check if the list of set pixels is empty or consists of the entire image.
If the order doesn't matter, you could also store the elements in some container like a hash table, which could give you O(1) lookup of whether or not the element is there. You could also sort the array and then just check the endpoints.
As a microoptimization, you could consider always appending to the real image two values - one white pixel and one black pixel - so that you could always iterate until you find the value. This eliminates one of the comparisons from the loop (the check to see if you're in-bounds) and is recommended by some authors for very large arrays.
If you assume that most images are a nice mixture of white and black and are okay with getting the wrong answer a small fraction of the time, you could consider probing a few random locations and checking if any of them are the right color. If so, then clearly a pixel of the correct color exists and you're done. Otherwise, run the full linear scan. For images that are a nice blend of colors, this could save you an enormous amount of time, since in the successful case you probe only a small number of locations (say, O(log n) of them) instead of doing a huge linear scan, which is exponentially fewer reads.
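A hedged sketch of that probe-then-scan idea (the probe count of 64 is an arbitrary choice of mine):

import java.util.Random;

static boolean containsWhite(int[] pixels) {
    if (pixels.length == 0) return false;
    Random rnd = new Random();
    // Optimistic phase: sample a few random pixels.
    for (int k = 0; k < 64; k++) {
        if (pixels[rnd.nextInt(pixels.length)] == 0xFFFFFFFF) return true;
    }
    // Pessimistic phase: the definitive linear scan.
    for (int p : pixels) {
        if (p == 0xFFFFFFFF) return true;
    }
    return false;
}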
If every value is either white or black, you could also consider storing the image in a bitvector. This compresses the array by a factor of 32 (one bit per pixel instead of a 32-bit int). You could then iterate across the compressed array and check whether any word is nonzero to see if any of the pixels are white. This also saves a huge amount of space, and I'd actually suggest doing this since it makes a lot of other operations easy as well.
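For instance, a minimal sketch of the bitvector layout (the packing scheme is my assumption: one bit per pixel, 1 = white, allocated as long[] bits = new long[(w * h + 63) / 64]):

// Word i/64 holds the bit for pixel i at position i%64.
static void setWhite(long[] bits, int pixelIndex) {
    bits[pixelIndex >>> 6] |= 1L << (pixelIndex & 63);
}

static boolean anyWhite(long[] bits) {
    for (long word : bits) {
        if (word != 0) return true;   // any nonzero word contains a white pixel
    }
    return false;
}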
Hope this helps!
It doesn't matter at the bytecode level, but at the native-code level,
if (pixels[i] != 0)
is likely to be a bit faster, given that you're sure only these two values can appear.
If your array is really big, it might be worth it to divide and conquer. That is, assign segments of the data to multiple threads (probably t threads where t is the number of available processor cores). With a sufficiently large data set, the parallelism may amortize the thread startup cost.
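If Java 8 streams happen to be available to you, one way to sketch that parallel scan (whether it beats the plain loop depends on array size and core count):

import java.util.Arrays;

boolean anyWhite = Arrays.stream(pixels).parallel().anyMatch(p -> p == 0xFFFFFFFF);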
Here is a simple optimization that helps on large arrays: put the requested value at the end of the array and thus eliminate the array bounds check. (templatetypedef has already mentioned this optimization.) This can save about 25% of the loop's running time on large arrays:
int tmp = a[n - 1];
a[n - 1] = 0xFFFFFFFF;           // plant the sentinel (a = pixel array, n = a.length)
int pos = 0;
while (a[pos] != 0xFFFFFFFF) {   // the sentinel guarantees termination, so no bounds check
    pos++;
}
a[n - 1] = tmp;                  // restore the original last element
if (a[pos] == 0xFFFFFFFF) {
    return pos;                  // real match (pos < n - 1, or the last element was the value)
}
return -1;
A C# implementation with a running-time analysis is available at this address.
The only scope for improving the performance is the comparison. I feel a bitwise operation might be a bit faster than the equality comparison.
You could do this
int length = w * h;
for (int i = 0; i < length; i++) {
    if ((pixels[i] & 0xFFFFFFFF) != 0) {   // the & result must be compared explicitly in Java
        return true;
    }
}
Can't you check when you insert the color into the array? If so, you could store the index of the array's element which contains the 0xFFFFFFFF color. Since you want "ANY" entry that has such value, this should do the trick :D
If not, your answer has the complexity of O(n) which is the best it could be, since the array isn't (and cannot be, as you say) ordered.
Using the built-in for-each can be a tad faster than the indexed for, as it may eliminate a bounds check:
for (int pix : pixels) {
    if (pix != 0)
        return true;
}
Arrays.asList(...).contains(...) (note: this only behaves as expected for an Integer[]; calling Arrays.asList on an int[] produces a one-element List<int[]>)
Context
I am implementing a seam carving algorithm.
I am representing the pixels in a picture as a 1D array
private int[] picture;
Each int represents the RGB of the pixel.
To access the pixels I use helper methods such as:
private int pixelToIndex(int x, int y) {return (y * width()) + x;}
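(The inverse helpers indexToX and indexToY used further down aren't shown in the question; presumably they look something like this:)

private int indexToX(int index) { return index % width(); }
private int indexToY(int index) { return index / width(); }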
The alternative would be to store in a 2D array:
private int[][] picture;
The seam carving algorithm has two parts.
Firstly, it does some image processing where it finds the horizontal or vertical connected seam with lowest energy. Here the pixel accesses jump around a bit across rows.
Secondly it removes this connected seam.
For vertical seams I mark the pixel to be removed with -1 and create a new picture array skipping the removed pixels like so:
int i = 0, j = 0;
while (i < temp.length) {
    if (picture[j] != -1) {
        temp[i++] = picture[j];
    }
    j++;
}
picture = temp;
For horizontal seams, given a specific column, I shift all the pixels after the deleted pixel of that column up by one row, like so:
for (int i = 0; i < temp.length; i++) {
    int row = indexToY(i);
    int col = indexToX(i);
    int deletedCell = seam[col];
    if (row >= deletedCell) temp[i] = picture[i + width()];
    else temp[i] = picture[i];
}
picture = temp;
The question
Obviously the 1D array uses less physical memory because of the overhead for each subarray, but given the way I am iterating over the matrix, would the 2D array be cached more effectively by the CPU and thus be more efficient?
How would the arrays differ in the way they are loaded into the CPU cache and RAM? Would part of the 1D array go into the L1 cache? Would it depend on the size of the array?
An array of ints is represented just as that: an array of int values. An array of arrays ... adds certain overhead. So, short answer: when dealing with really large amounts of data, plain 1-dimensional arrays are your friend.
On the other hand: only start optimizing after understanding the bottlenecks. You know, it doesn't help much to optimize your in-memory data structure when your application spends most of its time waiting for IO, for example. And if your attempts to write "high performance" code yield complicated, hard-to-read, and thus hard-to-maintain code ... you might have focused on the wrong area.
Besides: concrete performance numbers are affected by many different variables. So you want to do profiling first; and see what happens with different hardware, different data sets, and so on.
And another side note: sometimes, for the real number crunching, it can also be a viable option to implement something in C++ and call it via JNI. It really depends on the nature of your problem, how often things will be used, response times expected by users, and so on.
Java has arrays of arrays for multi-dimensional arrays. In your case int[][] is an array of int[] (and of course int[] is an array of int). So a matrix is represented as a set of rows, with a pointer to each row. This means an NxM matrix occupies NxM ints for the data plus an array of N row pointers.
Since you can represent any matrix as an array you'll get less memory consumption storing it that way.
On the other hand, address manipulation when representing a 2D matrix as a 1D array is not that complex.
If you have an NxM matrix and an array of size NxM representing it, you can access element Matrix[x,y] as Array[x*M + y] (where M is the number of columns).
Array[i] is compact and has a high probability of being in the L1 cache, or even in a register.
Matrix[x,y] requires an extra memory read (to fetch the row pointer) plus an addition.
Array[x*M + y] requires one multiplication and one addition.
So I'll put my two cents on the array, but either way it has to be tested (don't forget to allow for JIT compiler warm-up time).
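For illustration, a minimal sketch of that flattened layout (the names are mine; note the multiplier is the number of columns):

int rows = 4, cols = 5;
int[] flat = new int[rows * cols];   // one contiguous block, cache-friendly
flat[2 * cols + 3] = 42;             // conceptually matrix[2][3] = 42
int v = flat[2 * cols + 3];          // one multiply, one add, one read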
I am facing a problem where for a number of words, I make a call to a HashMultimap (Guava) to retrieve a set of integers. The resulting sets have, say, 10, 200 and 600 items respectively. I need to compute the intersection of these three (or four, or five...) sets, and I need to repeat this whole process many times (I have many sets of words). However, what I am experiencing is that on average these set intersections take so long to compute (from 0 to 300 ms) that my program takes a very long time to complete if I look at hundreds of thousands of sets of words.
Is there any substantially quicker method to achieve this, especially given I'm dealing with (sortable) integers?
Thanks a lot!
If you are able to represent your sets as arrays of bits (bitmaps), you can intersect them with AND operations. You could even implement this to run in parallel.
As an example (using jlordo's question): if set1 is {1,2,4} and set2 is {1,2,5}
Then your first set would be represented as: 00010110 (bits set for 1, 2, and 4).
Your second set would be represented as: 00100110 (bits set for 1, 2, and 5).
If you AND them together, you get: 00000110 (bits set for 1 and 2)
Of course, if you had a larger range of integers, then you will need more bytes. The beauty of bitmap indexes is that they take just one bit per possible element, thus occupying a relatively small space.
In Java, for example, you could use the BitSet data structure (not sure if it can do operations in parallel, though).
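A minimal sketch of the BitSet version of the example above:

import java.util.BitSet;

BitSet set1 = new BitSet();
set1.set(1); set1.set(2); set1.set(4);      // {1, 2, 4}

BitSet set2 = new BitSet();
set2.set(1); set2.set(2); set2.set(5);      // {1, 2, 5}

BitSet intersection = (BitSet) set1.clone();
intersection.and(set2);                      // in-place AND: leaves {1, 2}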
One problem with a bitmap-based solution is that even if the sets themselves are very small but contain very large (or even unbounded) numbers, checking the bitmaps would be very wasteful.
A different approach would be, for example, to sort the two sets, merge them, and check for duplicates. This can be done in O(n log n) time with O(n) extra space, given set sizes of O(n).
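A rough sketch of that sort-and-merge intersection for int arrays (a two-pointer variant; it sorts the inputs in place and assumes they contain no duplicates, as befits sets):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

static List<Integer> intersect(int[] a, int[] b) {
    Arrays.sort(a);                            // note: sorts the inputs in place
    Arrays.sort(b);
    List<Integer> result = new ArrayList<>();
    int i = 0, j = 0;
    while (i < a.length && j < b.length) {
        if (a[i] < b[j]) i++;
        else if (a[i] > b[j]) j++;
        else { result.add(a[i]); i++; j++; }   // present in both sets
    }
    return result;
}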
You should choose the solution that matches your problem description (input range, expected set sizes, etc.).
The post http://www.censhare.com/en/aktuelles/censhare-labs/yet-another-compressed-bitset describes an implementation of an ordered primitive long set with set operations (union, minus and intersection). In my experience it's quite efficient for both dense and sparse value populations.
I have a direct buffer holding integers that are already sorted (e.g. 1,1,3,3,3,3,7,7,...). Most values will occur multiple times. I want to find the first position of the values I search for.
Is there search functionality that works directly on buffers built into Java? (I couldn't find anything.)
If not, is there any decent library providing such functionality?
If not, what search algorithm would you recommend implementing, given that:
I will typically have millions of entries in my buffer
Speed is very important
It must return the first occurrence of the searched number
I'd rather not have it modify the data as I will need the original data afterwards
EDIT: Thanks to all the posters suggesting Arrays.binarySearch(), but, as far as I know, direct buffers do not generally have a backing array. That's why I was looking for an implementation that directly works on the buffer.
Also, each value can occur up to a thousand times, therefore a linear search after finding a landing point might not be very efficient. The comparator suggestion of dasblinkenlight might work though.
The best approach would be to code your own implementation of binary search for the buffers. This avoids the potential performance hits of creating views or copying large arrays, and stays compact at the same time.
The code sample at the link returns the rightmost match; you need to replace > with >= on the nums[guess] > check line to get the leftmost match. This saves you a potentially costly backward linear search, or the use of a "backward" Comparator, which would require wrapping your ints in Integer objects.
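A minimal sketch of such a leftmost-match binary search working directly on an IntBuffer (assuming the buffer is sorted ascending):

import java.nio.IntBuffer;

static int firstIndexOf(IntBuffer buf, int key) {
    int lo = 0, hi = buf.limit();        // search the half-open range [lo, hi)
    while (lo < hi) {
        int mid = (lo + hi) >>> 1;       // unsigned shift avoids overflow on huge buffers
        if (buf.get(mid) < key) lo = mid + 1;
        else hi = mid;                   // on equality keep moving left
    }
    return (lo < buf.limit() && buf.get(lo) == key) ? lo : -1;
}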
Use the binary search algorithm:
ByteBuffer buffer = createByteBuffer();
IntBuffer intBuffer = buffer.asIntBuffer();
If the buffer is backed by an accessible int array (which, as your edit notes, is not the case for direct buffers), you can use:
int[] array = intBuffer.array();
int index = java.util.Arrays.binarySearch(array, 7);
I don't know about a built-in functionality for buffers (Arrays.binarySearch(...) would require you to convert the buffer to an array) but as for 3.: since the buffer is already sorted a binary search might be useful. If you found the value you could then check the previous values to get the start of that sequence.
You'll probably have to write your own binary search: one that always moves left when the value checked is equal to the one being searched for.
So effectively, instead of x you search for x - ε. The algorithm will always take exactly log n (or log n + 1) steps, as it will always "fail", but it gives you the index of the first element bigger than x - ε. All you need to do is check whether that element is x: if it is, you've found your match; if it isn't, there's no x in your buffer.
I have a function named resize, which takes a source array and resizes it to a new width and height. The method I'm using is, I think, inefficient; I heard there's a better way to do it. Anyway, the code below works when scale is an int. However, there's a second function called half, which uses resize to shrink an image in half. So I made scale a double and used a typecast to convert it back to an int. This method is not working, and I don't know what the error is (the teacher uses his own grading and tests on these functions, and it's not passing). Can you spot the error, or is there a more efficient way to write a resize function?
public static int[][] resize(int[][] source, int newWidth, int newHeight) {
    int[][] newImage = new int[newWidth][newHeight];
    double scale = newWidth / (source.length);
    for (int i = 0; i < newWidth / scale; i++)
        for (int j = 0; j < newHeight / scale; j++)
            for (int s1 = 0; s1 < scale; s1++)
                for (int s2 = 0; s2 < scale; s2++)
                    newImage[(int) (i * scale + s1)][(int) (j * scale + s2)] = source[i][j];
    return newImage;
}
/**
 * Half the size of the image. This method should be just one line! Just
 * delegate the work to resize()!
 */
public static int[][] half(int[][] source) {
    int[][] newImage = new int[source.length / 2][source[0].length / 2];
    newImage = resize(source, source.length / 2, source[0].length / 2);
    return newImage;
}
So one scheme for changing the size of an image is to resample it (technically this is really the only way, every variation is really just a different kind of resampling function).
Cutting an image in half is super easy, you want to read every other pixel in each direction, and then load that pixel into the new half sized array. The hard part is making sure your bookkeeping is strong.
static int[][] halfImage(int[][] orig) {
    int[][] hi = new int[orig.length / 2][orig[0].length / 2];
    // Bound the loops by the half-size image so odd dimensions don't overrun hi.
    for (int r = 0, newr = 0; newr < hi.length; r += 2, newr++) {
        for (int c = 0, newc = 0; newc < hi[0].length; c += 2, newc++) {
            hi[newr][newc] = orig[r][c];
        }
    }
    return hi;
}
In the code above I'm indexing into the original image reading every other pixel in every other row starting at the 0th row and 0th column (assuming images are row major, here). Thus, r tells us which row in the original image we're looking at, and c tells us which column in the original image we're looking at. orig[r][c] gives us the "current" pixel.
Similarly, newr and newc index into the "half-image" matrix designated hi. For each increment in newr or newc we increment r and c by 2, respectively. By doing this, we skip every other pixel as we iterate through the image.
Writing a generalized resize routine that doesn't operate on nice fractional quantities (like 1/2, 1/4, 1/8, etc.) is really pretty hard. You'd need a way to determine the value of a sub-pixel -- a point between pixels -- for more complicated factors, like 0.13243, for example. A simple starting point is naive linear interpolation: when you need the value between two pixels, take the surrounding pixels, construct a line between their values, then read the sub-pixel point off that line. More sophisticated versions might use sinc-based interpolation, or one of many others from the widely published literature.
Blowing up the size of the image involves something a little different than we've done here (and if you do in fact have to write a generalized resize function you might consider splitting your function to handle upscaling and downscaling differently). You need to somehow create more values than you have originally -- those interpolation functions work for that too. A trivial method might simply be to repeat a value between points until you have enough, and slight variations on this as well, where you might take so many values from the left and so many from the right for a particular position.
What I'd encourage you to think about -- and since this is homework I'll stay away from the implementation -- is treating the scaling factor as something that causes you to make observations on one image, and writes on the new image. When the scaling factor is less than 1 you generally sample from the original image to populate the new image and ignore some of the original image's pixels. When the scaling factor is greater than 1, you generally write more often to the new image and might need to read the same value several times from the old image. (I'm doing a poor job highlighting the difference here, hopefully you see the dualism I'm getting at.)
What you have is pretty understandable. With four nested loops it looks like an O(n^4) algorithm at first glance, although the loop bounds actually cancel out so that each output pixel is written once, making the total work proportional to newWidth * newHeight.
You can improve it slightly by hoisting i*scale and j*scale out of the two inner loops - they are invariant there. The optimizer might be doing this for you already, however. There are also some other similar optimizations.
Regarding the error, run it twice, once with an input array that's got an even length (6x6) and another that's odd (7x7). And 6x7 and 7x6 while you're at it.
Based on your other question, it seems like you may be having trouble with mixing of types - with numeric conversions. One way to do this, which can make your code more debuggable and more readable to others not familiar with the problem space, would be to split the problematic line into multiple lines. Each minor operation would be one line, until you reach the final value. For example,
newImage[(int)(i*scale+s1)][(int)(j*scale+s2)] =source[i][j];
would become
double x = i * scale;
x += s1;
double y = j * scale;
y += s2;
newImage[(int) x][(int) y] = source[i][j];
Now, you can run the code in a debugger and look at the values of each item after each operation is performed. When a value doesn't match what you think it should be, look at it and figure out why.
Now, back to the suspected problem: I expect that you need to use doubles somewhere, not ints - in your other question you talked about scaling factors. Is the factor less than 1? If so, when it's converted to an int, it'll be 0, and you'll get the wrong result.
I need to store a 2d matrix containing zip codes and the distance in km between each one of them. My client has an application that calculates the distances which are then stored in an Excel file. Currently, there are 952 places. So the matrix would have 952x952 = 906304 entries.
I tried to map this into a HashMap<Integer, Float>. The Integer key is derived from the hash codes of the two place Strings, e.g. "A" and "B". The float value is the distance in km between them.
While filling in the data I run into OutOfMemoryErrors after 205k entries. Do you have a tip for storing this in a clever way? I don't even know whether it's clever to keep the whole bunch in memory. My options are SQL and MS Access...
The problem is that I need to access the data very quickly and possibly very often which is why I chose the HashMap because it runs in O(1) for the look up.
Thanks for your replies and suggestions!
Marco
A 2d array would be more memory efficient.
You can use a small HashMap to map the 952 places to numbers between 0 and 951.
Then, just do:
float[][] distances= new float[952][952];
To look things up, just use two hash lookups to convert the two places into two integers, and use them as indexes into the 2d array.
By doing it this way, you avoid the boxing of floats, and also the memory overhead of the large hashmap.
However, 906304 really isn't that many entries; you may just need to increase the -Xmx maximum heap size.
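Putting those pieces together, a rough sketch (the class and method names are mine):

import java.util.HashMap;
import java.util.Map;

class DistanceTable {
    private final Map<String, Integer> index = new HashMap<>(); // place -> 0..951
    private final float[][] distances;

    DistanceTable(String[] places) {                 // e.g. the 952 place names
        for (int i = 0; i < places.length; i++) index.put(places[i], i);
        distances = new float[places.length][places.length];
    }

    void put(String a, String b, float km) {
        distances[index.get(a)][index.get(b)] = km;
    }

    float get(String a, String b) {
        return distances[index.get(a)][index.get(b)]; // two hash lookups + one array read
    }
}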
I would have thought that you could calculate the distances on the fly. Presumably someone has already done this, so you simply need to find out what algorithm they used, and the input data; e.g. longitude/latitude of the notional centres of each ZIP code.
EDIT: There are two commonly used algorithms for finding the (approximate) geodesic distance between two points given by longitude/latitude pairs.
The Vincenty formula is based on an ellipsoid approximation. It is more accurate, but more complicated to implement.
The Haversine formula is based on a spherical approximation. It is less accurate (by about 0.3%), but simpler to implement.
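For reference, a minimal sketch of the Haversine formula (the standard spherical approximation, using the usual mean Earth radius of 6371 km):

static double haversineKm(double lat1, double lon1, double lat2, double lon2) {
    final double R = 6371.0;                          // mean Earth radius in km
    double dLat = Math.toRadians(lat2 - lat1);
    double dLon = Math.toRadians(lon2 - lon1);
    double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
             + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
             * Math.sin(dLon / 2) * Math.sin(dLon / 2);
    return R * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
}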
Can you simply boost the memory available to the JVM ?
java -Xmx512m ...
By default the maximum memory configuration is 64 MB. Some more tuning tips here. If you can do this then you can keep the data in-process and maximise the performance (i.e. you don't need to calculate on the fly).
I upvoted Chi's and Benjamin's answers, because they're telling you what you need to do, but while I'm here, I'd like to stress that using the hashcode of the two strings directly will get you into trouble. You're likely to run into the problem of hash collisions.
This would not be a problem if you concatenated the two strings (being careful to use a delimiter that cannot appear in the place designators) and let HashMap do its magic, but the method you suggested, using the hash codes of the two strings as a key, is going to get you into trouble.
You will simply need more memory. When starting your Java process, kick it off like so:
java -Xmx256M MyClass
The -Xmx defines the max heap size, so this says the process can use up to 256 MB of memory for the heap. If you still run out, keep bumping that number up until you hit the physical limit.
Lately I've handled similar requirements for my master's thesis.
I ended up with a Matrix class that uses a double[], not a double[][], to avoid the double-dereference cost (data[i] fetches a row array, then a second indexed read fetches the double) while allowing the VM to allocate one big, contiguous chunk of memory:
public class Matrix {

    private final double data[];
    private final int rows;
    private final int columns;

    public Matrix(int rows, int columns, double[][] initializer) {
        this.rows = rows;
        this.columns = columns;
        this.data = new double[rows * columns];
        int k = 0;
        for (int i = 0; i < initializer.length; i++) {
            System.arraycopy(initializer[i], 0, data, k, initializer[i].length);
            k += initializer[i].length;
        }
    }

    public Matrix set(int i, int j, double value) {
        data[j + i * columns] = value;
        return this;
    }

    public double get(int i, int j) {
        return data[j + i * columns];
    }
}
This class should use less memory than a HashMap since it uses a primitive array (no boxing needed): it needs only 906304 * 8 bytes ≈ 7 MB (for doubles) or 906304 * 4 bytes ≈ 3.5 MB (for floats). My 2 cents.
NB
I've omitted some sanity checks for simplicity's sake
Stephen C. has a good point: if the distances are as-the-crow-flies, then you could probably save memory by doing some calculations on the fly. All you'd need is space for the longitude and latitude of the 952 zip codes, and then you could use the Vincenty formula to do your calculation when you need to. This would make your memory usage O(n) in zip codes.
Of course, that solution makes some assumptions that may turn out to be false in your particular case, i.e. that you have longitude and latitude data for your zipcodes and that you're concerned with as-the-crow-flies distances and not something more complicated like driving directions.
If those assumptions are true though, trading a few computes for a whole bunch of memory might help you scale in the future if you ever need to handle a bigger dataset.
The above suggestions regarding heap size will be helpful. However, I am not sure if you gave an accurate description of the size of your matrix.
Suppose you have 4 locations. Then you need to assess the distances between A->B, A->C, A->D, B->C, B->D, C->D. This suggests six entries in your HashMap (4 choose 2).
That would lead me to believe the actual optimal size of your HashMap is (952 choose 2)=452,676; NOT 952x952=906,304.
This is all assuming, of course, that you only store one-way relationships (i.e. from A->B, but not from B->A, since that is redundant), which I would recommend since you are already experiencing problems with memory space.
Edit: Should have said that the size of your matrix is not optimal, rather than saying the description was not accurate.
Create a new class with two slots for location names. Have it always put the alphabetically first name in the first slot. Give it proper equals and hashCode methods. Give it a compareTo (e.g. order alphabetically by names). Throw them all in an array. Sort it.
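A minimal sketch of such a class (the name PlacePair is purely illustrative):

final class PlacePair implements Comparable<PlacePair> {
    final String first, second;                  // first is alphabetically smaller

    PlacePair(String a, String b) {
        if (a.compareTo(b) <= 0) { first = a; second = b; }
        else                     { first = b; second = a; }
    }

    @Override public boolean equals(Object o) {
        if (!(o instanceof PlacePair)) return false;
        PlacePair p = (PlacePair) o;
        return first.equals(p.first) && second.equals(p.second);
    }

    @Override public int hashCode() {
        return 31 * first.hashCode() + second.hashCode();
    }

    @Override public int compareTo(PlacePair p) {
        int c = first.compareTo(p.first);
        return c != 0 ? c : second.compareTo(p.second);
    }
}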
Also, hash1 == hash2 does not imply object1 = object2. Don't ever use hash codes as keys this way. It's a hack.