How can I process BufferedImage faster - java

I'm making a basic image editor to improve my image processing skills. I have 12 filters (for now).
Each filter has a clickable JLabel that shows a preview image.
I update the images of all of them whenever the filters are applied, using this function:
public static void buttonImagesUpdater() {
    for (int i = 0; i < effects.size(); i++) {
        effects.get(i).getButton().setImage(new ImageIcon(effects.get(i).process(image)));
    }
}
All filters have a process function like this:
public BufferedImage process(BufferedImage base) {
    BufferedImage product = new BufferedImage(base.getWidth(), base.getHeight(), base.getType());
    for (int indisY = 0; indisY < base.getHeight(); indisY++) {
        for (int indisX = 0; indisX < base.getWidth(); indisX++) {
            Color currentColor = new Color(base.getRGB(indisX, indisY));
            int greyTone = (int) (currentColor.getRed() * 0.315)
                    + (int) (currentColor.getGreen() * 0.215)
                    + (int) (currentColor.getBlue() * 0.111);
            product.setRGB(indisX, indisY, new Color(greyTone, greyTone, greyTone).getRGB());
        }
    }
    return product;
}
The program runs very slowly. When I click an effect's button, it finishes about 45 seconds later on a 5000x3000 image. How can I fix this performance problem?

You have got to remember that 3000 * 5000 is 15,000,000, so you're creating 15,000,000 Color objects and calling setRGB 15,000,000 times. If I were you, I would look into using a ForkJoinPool for this.
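A rough sketch of that idea: divide the image into horizontal bands and let the pool process them in parallel. The class name and threshold are illustrative; the per-pixel math mirrors the question's filter but uses bit operations instead of allocating Color objects.

import java.awt.image.BufferedImage;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

class GrayscaleTask extends RecursiveAction {
    private static final int THRESHOLD = 64; // rows per leaf task; tune as needed
    private final BufferedImage src, dst;
    private final int fromRow, toRow;

    GrayscaleTask(BufferedImage src, BufferedImage dst, int fromRow, int toRow) {
        this.src = src;
        this.dst = dst;
        this.fromRow = fromRow;
        this.toRow = toRow;
    }

    @Override
    protected void compute() {
        if (toRow - fromRow <= THRESHOLD) {
            // Small enough band: process it directly.
            for (int y = fromRow; y < toRow; y++) {
                for (int x = 0; x < src.getWidth(); x++) {
                    int rgb = src.getRGB(x, y);
                    int grey = (int) (((rgb >> 16) & 0xFF) * 0.315
                            + ((rgb >> 8) & 0xFF) * 0.215
                            + (rgb & 0xFF) * 0.111);
                    dst.setRGB(x, y, (grey << 16) | (grey << 8) | grey);
                }
            }
        } else {
            // Split the band in two and recurse.
            int mid = (fromRow + toRow) >>> 1;
            invokeAll(new GrayscaleTask(src, dst, fromRow, mid),
                      new GrayscaleTask(src, dst, mid, toRow));
        }
    }
}
// Usage: new ForkJoinPool().invoke(new GrayscaleTask(base, product, 0, base.getHeight()));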

I agree with @Jason - the problem is that you're creating (and destroying) 15 million Color objects.
However, I don't think that just using multiple threads will get you enough of a performance increase, because you will still be putting a lot of pressure on memory and the garbage collector: you'll still be creating and destroying 15 million objects, just several in parallel.
I think that you can both avoid creating Color objects entirely and make fewer calls inside the loop, by using the result of the BufferedImage class's getRGB() method directly instead of creating a Color object. Further, you can use the overload of getRGB() that returns an array of ints to get, say, a row of pixels (or more) at a time, reducing the number of calls you have to make inside the loop. You can similarly use the version of setRGB() that takes an array of pixel values.
The trick is to convert the int color value to a gray value (or whatever else you need to do) without separating the R, G, and B values, or to find a way to separate R, G, and B that is more efficient than creating, using, and destroying a Color object.
For a lead on getting R, G, and B values from the int returned by getRGB(), note that the documentation for Color.getRGB() says:
"Returns the RGB value representing the color in the default sRGB ColorModel. (Bits 24-31 are alpha, 16-23 are red, 8-15 are green, 0-7 are blue)."
Once you have that working, you can think about parallelizing it.
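Putting those pieces together, a minimal sketch of the single-threaded version might look like this (method name is illustrative; the weights are taken from the question's filter):

import java.awt.image.BufferedImage;

public static BufferedImage toGray(BufferedImage base) {
    int w = base.getWidth(), h = base.getHeight();
    // One bulk call instead of w*h getRGB calls.
    int[] pixels = base.getRGB(0, 0, w, h, null, 0, w);
    for (int i = 0; i < pixels.length; i++) {
        int rgb = pixels[i];
        int r = (rgb >> 16) & 0xFF; // bits 16-23
        int g = (rgb >> 8) & 0xFF;  // bits 8-15
        int b = rgb & 0xFF;         // bits 0-7
        int grey = (int) (r * 0.315 + g * 0.215 + b * 0.111);
        // Repack without ever constructing a Color object.
        pixels[i] = (0xFF << 24) | (grey << 16) | (grey << 8) | grey;
    }
    BufferedImage product = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
    product.setRGB(0, 0, w, h, pixels, 0, w); // one bulk write
    return product;
}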

You might try this to see if things speed up a little.
This uses a DataBuffer from an Image Raster.
And uses a map to retain previously converted colors. That may help over time, depending on the type of image.
And it works with doubles, since the data buffer supports various types.
I also multiplied your values by powers of 2 to shift them into the proper position. The get* methods of Color return values between 0 and 255 inclusive. The RGB values occupy the lower 24 bits of an int (alpha is in the leftmost byte).
All I see is a dark image, but I tested this with other parameters and I know it works. The long pole in the tent seems to be reading and writing the images. I was using a 6637x3787 image and could read, alter, and write it in 12 seconds. For more advanced processing you may want to check out AffineTransformOp.
static Map<Color, Double> colorMap = new HashMap<>();

public BufferedImage process(BufferedImage base) {
    DataBuffer db = base.getRaster().getDataBuffer();
    for (int i = 0; i < db.getSize(); i++) {
        Color currentColor = new Color(db.getElem(i));
        // Cache the converted value so repeated colors are computed only once.
        double greyTone = colorMap.computeIfAbsent(currentColor, v ->
                currentColor.getRed() * .315 * 256 * 256
                + currentColor.getGreen() * .215 * 256
                + currentColor.getBlue() * .115);
        db.setElemDouble(i, greyTone);
    }
    return base;
}

Related

Java Library To Animate Movement Between Coordinates

I am working on an application that deals with moving objects from point A to point B in 2D space. The job of the application is to animate that transition in a given number of steps (frames).
What I am currently doing is dividing the distance by the number of steps, which creates a very linear and boring movement in a straight line:
int frames = 25;
int fromX = 10;
int toX = 20;
double step = (toX - fromX) / (double) frames; // cast avoids integer division
List<Double> values = new ArrayList<>();
double next = fromX;
for (int i = 0; i < frames; i++) {
    values.add(next);
    next += step;
}
As a first improvement, since my poor users have to look at this misery, I would like it to be an accelerated and decelerated movement: starting slow, picking up speed, then slowing down again until arrival at the destination.
For that particular case I could probably figure out the math somehow, but in the end I want to be able to provide more complex animations that would go beyond my capabilities as a mathematician ;) I have many of the capabilities of e.g. PowerPoint or iMovie in mind.
My question is: Is there a library that would allow me to generate these sequences of coordinates? I found a few things, but they were often tied to some Graphics object etc., which I am not using. For me it's all about lists of Doubles.
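For illustration, the kind of ease-in-out sequence described above can be produced without any library by running a cubic smoothstep over normalized time. A minimal sketch, all names illustrative (assumes frames >= 2):

import java.util.ArrayList;
import java.util.List;

static List<Double> easeInOut(double from, double to, int frames) {
    List<Double> values = new ArrayList<>();
    for (int i = 0; i < frames; i++) {
        double t = i / (double) (frames - 1); // normalized time in [0, 1]
        double eased = t * t * (3 - 2 * t);   // smoothstep: slow-fast-slow
        values.add(from + (to - from) * eased);
    }
    return values;
}

Any other curve mapping 0..1 to 0..1 with the desired shape can be swapped in for the smoothstep line.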

How to efficiently remove duplicate collision pairs in spatial hash grid?

I'm working on a 2D game for Android, so performance is a real issue and a must. In this game a lot of collisions may occur between any objects, and I don't want to check by brute force, in O(n^2), whether any game object collides with another one. In order to reduce the number of possible collision checks I decided to use spatial hashing as the broad-phase algorithm, because it seems quite simple and efficient: divide the scene into rows and columns and check collisions only between objects residing in the same grid cell.
Here's the basic concept I quickly sketched:
public class SpatialHashGridElement {
    HashSet<GameObject> gameObjects = new HashSet<GameObject>();
}

static final int SPATIAL_HASH_GRID_ROWS = 4;
static final int SPATIAL_HASH_GRID_COLUMNS = 5;
// Elements assumed to be filled with new SpatialHashGridElement() instances elsewhere.
static SpatialHashGridElement[] spatialHashGrid = new SpatialHashGridElement[SPATIAL_HASH_GRID_ROWS * SPATIAL_HASH_GRID_COLUMNS];

void updateGrid() {
    float spatialHashGridElementWidth = screenWidth / SPATIAL_HASH_GRID_COLUMNS;
    float spatialHashGridElementHeight = screenHeight / SPATIAL_HASH_GRID_ROWS;
    for (SpatialHashGridElement e : spatialHashGrid)
        e.gameObjects.clear();

    for (GameObject go : displayList) {
        // One vertex = 3 floats (x, y, z); hash each vertex into a cell.
        for (int i = 0; i < go.vertices.length / 3; i++) {
            int row = (int) Math.abs((go.vertices[i * 3 + 1] / spatialHashGridElementHeight) % SPATIAL_HASH_GRID_ROWS);
            int col = (int) Math.abs((go.vertices[i * 3 + 0] / spatialHashGridElementWidth) % SPATIAL_HASH_GRID_COLUMNS);
            // HashSet.add is already a no-op for duplicates, so no contains() check is needed.
            spatialHashGrid[row * SPATIAL_HASH_GRID_COLUMNS + col].gameObjects.add(go);
        }
    }
}
The code probably isn't of the highest quality, so if you spot anything to improve, please don't hesitate to tell me. The most worrying problem at the moment is that the same collision pairs might be checked in two grid cells. Worst-case example (assuming none of the objects spans more than 2 cells):
Here we have 2 gameObjects colliding (red and blue). Each of them resides in 4 cells, so each of those cells will contain the same pair to check.
I can't come up with an efficient approach that removes the possibility of duplicate pairs without filtering the grid after creating it in updateGrid(). Is there some brilliant way to detect that a collision pair has already been inserted, even during the updateGrid function? I would be very grateful for any tips!
I'm trying to explain my idea using some Java-flavored pseudo-code:
public class GameObject {
    // ...
    private final Set<GameObject> collidedSinceLastTick = new HashSet<>();

    public boolean collidesWith(GameObject other) {
        if (collidedSinceLastTick.contains(other)) {
            return true; // or even false, see below
        }
        boolean collided = false;
        // TODO: your costly collision logic here
        if (collided) {
            collidedSinceLastTick.add(other);
            // maybe return false if other actions depend on a GameObject
            // just colliding once per tick
        }
        return collided;
    }
    // ...
}
HashSet and .hashCode() can both be tuned in some cases. Maybe you could even remove displayList and "hold" everything in spatialHashGrid to reduce the memory footprint a little. Of course, only do that if you don't need special access to displayList (in XML's Document Object Model, objects can be accessed by a path through the tree, and "hot spots" can be accessed by an ID that has to be assigned explicitly). For serializing (saving game state or whatever), iterating through spatialHashGrid should not be an issue performance-wise. It is a bit slower than serializing the gameObject set because you may have to suppress duplicates, although Java serialization with default settings does not save the same object twice anyway: it saves just a reference after the first occurrence of an object.
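For the broad phase itself, another option is to collect candidate pairs into a per-frame set with an order-independent key, so a pair that appears in several cells is only tested once. A minimal sketch, assuming GameObject uses default identity equals/hashCode:

import java.util.HashSet;
import java.util.Set;

final class Pair {
    final GameObject a, b;

    Pair(GameObject a, GameObject b) {
        this.a = a;
        this.b = b;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Pair)) return false;
        Pair p = (Pair) o;
        // Order-independent: (x, y) equals (y, x).
        return (a == p.a && b == p.b) || (a == p.b && b == p.a);
    }

    @Override
    public int hashCode() {
        // Symmetric so both orderings hash the same.
        return System.identityHashCode(a) ^ System.identityHashCode(b);
    }
}

// Per frame:
// Set<Pair> candidates = new HashSet<>();
// for each cell: for each distinct (go1, go2) in that cell: candidates.add(new Pair(go1, go2));
// then run the narrow-phase collision test once per entry in candidates.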

What is my best option to process big 2D arrays in a Java app?

I'm developing an image processing app in Java (Swing) that involves lots of calculations.
It crashes when big images are loaded:
java.lang.OutOfMemoryError: Java heap space, due to things like:
double matrizAdj[][] = new double[18658][18658];
So I've decided to experiment with a light, as-fast-as-possible database to deal with this problem: use a table as if it were a 2D array, loop through it, and insert the resulting values into another table.
I'm also thinking about using JNI, but I'm not familiar with C/C++ and don't have the time needed to learn.
Currently my problem is not processing speed, only heap overload.
I would like to hear what my best option is to solve this.
EDIT :
A little explanation: First I get all white pixels from a binarized image into a list. Let's say I got 18k pixels. Then I perform a lot of operations with that list, like variance, standard deviation, covariance, and so on. At the end I have to multiply two 2D arrays ([2][18000] & [18000][2]), resulting in a double[18000][18000] that is causing me trouble. After that, other operations are done with this 2D array, resulting in more than one big 2D array.
I can't deal with requiring large amounts of RAM to use this app.
Well, for trivia's sake, the matrix you're showing consumes roughly 2.6 GB of RAM. So that's a benchmark of how much memory you need should you decide to pursue that tack.
If it's efficient for you, you could store the rows of the matrix in blobs within a database. In this case you'd have 18658 rows, each with a serialized double[18658] stored in it.
I wouldn't suggest that though.
A better tack would be to use the image file directly: look at NIO and byte buffers, and use mmap to map the data into your program space.
Then you can use things like DoubleBuffer to access the data. This lets the VM page in as much of the original file as necessary, and it keeps the data off the Java heap (it's stored in process RAM associated with the JVM). The big benefit is that it keeps these monster data structures away from the garbage collector.
You'll still need physical RAM on the machine, of course, but it's not Java heap RAM.
This would likely be the most efficient way to access this data for your process.
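A minimal sketch of that mapping idea, assuming the matrix has already been written to a file of raw doubles in row-major order (file name and dimension are illustrative). Note that a single mapping is limited to 2 GB, so an 18658x18658 matrix (~2.6 GB) would have to be mapped in more than one chunk:

import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.DoubleBuffer;
import java.nio.channels.FileChannel;

static double readElement(int n, int row, int col) throws IOException {
    try (RandomAccessFile raf = new RandomAccessFile("matrix.dat", "r");
         FileChannel ch = raf.getChannel()) {
        // Map the file region and view it as doubles; data stays off the Java heap.
        DoubleBuffer matrix = ch.map(FileChannel.MapMode.READ_ONLY, 0, (long) n * n * 8)
                                .asDoubleBuffer();
        return matrix.get(row * n + col); // row-major layout assumed
    }
}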
Since you stated you "can't deal with requiring large amounts of RAM to use this app", your only option is to store the big array off RAM, disk being the most obvious choice (using a relational database is just unnecessary overhead).
You can use a little utility class that provides persistent 2-dimensional double array functionality. Here is my solution using RandomAccessFile. This solution also has the advantage that you can keep the array and reuse it when you restart the application!
Note: the presented solution is not thread-safe. Synchronization is needed if you want to access it from multiple threads concurrently.
Persistent 2-dimensional double array:
public class FileDoubleMatrix implements Closeable {

    private final int rows;
    private final int cols;
    private final long rowSize;
    private final RandomAccessFile raf;

    public FileDoubleMatrix(File f, int rows, int cols) throws IOException {
        if (rows < 0 || cols < 0)
            throw new IllegalArgumentException("Rows and cols cannot be negative!");
        this.rows = rows;
        this.cols = cols;
        rowSize = cols * 8L; // 8 bytes per double; long arithmetic avoids int overflow
        raf = new RandomAccessFile(f, "rw");
        raf.setLength(rowSize * rows);
    }

    /**
     * Absolute get method.
     */
    public double get(int row, int col) throws IOException {
        pos(row, col);
        return get();
    }

    /**
     * Absolute set method.
     */
    public void set(int row, int col, double value) throws IOException {
        pos(row, col);
        set(value);
    }

    public void pos(int row, int col) throws IOException {
        if (row < 0 || col < 0 || row >= rows || col >= cols)
            throw new IllegalArgumentException("Invalid row or col!");
        raf.seek(row * rowSize + col * 8L);
    }

    /**
     * Relative get method. Useful if you want to go through the whole array or
     * through a continuous part; use {@link #pos(int, int)} to position.
     */
    public double get() throws IOException {
        return raf.readDouble();
    }

    /**
     * Relative set method. Useful if you want to go through the whole array or
     * through a continuous part; use {@link #pos(int, int)} to position.
     */
    public void set(double value) throws IOException {
        raf.writeDouble(value);
    }

    public int getRows() { return rows; }

    public int getCols() { return cols; }

    @Override
    public void close() throws IOException {
        raf.close();
    }
}
The presented FileDoubleMatrix supports relative get() and set() methods, which are very useful if you process your whole array or a continuous part of it (e.g. you iterate over it). Use the relative methods when you can, for faster operations.
Example using the FileDoubleMatrix:
final int rows = 10;
final int cols = 10;

try (FileDoubleMatrix arr = new FileDoubleMatrix(new File("array.dat"), rows, cols)) {
    System.out.println("BEFORE:");
    for (int row = 0; row < rows; row++) {
        for (int col = 0; col < cols; col++) {
            System.out.print(arr.get(row, col) + " ");
        }
        System.out.println();
    }

    // Process the array; here we add an increasing value to each element
    for (int row = 0; row < rows; row++)
        for (int col = 0; col < cols; col++)
            arr.set(row, col, arr.get(row, col) + (row * cols + col));

    System.out.println("\nAFTER:");
    for (int row = 0; row < rows; row++) {
        for (int col = 0; col < cols; col++)
            System.out.print(arr.get(row, col) + " ");
        System.out.println();
    }
} catch (IOException e) {
    e.printStackTrace();
}
More about the relative get and set methods:
The absolute get and set methods require the position (row and column) of the element to be returned or set. The relative get and set methods do not require the position; they return or set the current element. The current element is, in fact, the pointer of the underlying file. The position can be set with the pos() method.
Whenever a relative get() or set() method is called, after returning it implicitly moves the pointer to the next element, in row-major order (moving to the next element in the row, and when the end of the row is reached, moving to the first element of the next row, etc.).
For example here is how we can zero the whole array using the relative set method:
// Fill the whole array with zeros using the relative set.
// First position to the beginning:
arr.pos(0, 0);
// Then execute a "set zero" operation as many times as the array has elements:
for (int i = rows * cols; i > 0; i--)
    arr.set(0);
The relative get and set methods automatically move the pointer to the next element.
It should be obvious that in my implementation the absolute get and set methods also move the pointer, which must not be forgotten when relative and absolute get/set methods are mixed.
Another example: let's store the sum of each row in the last element of the row, but also include the (old) last element in the sum! For this we will use a mixture of relative and absolute get/set methods:
// Start with the first row:
arr.pos(0, 0);
for (int row = 0; row < rows; row++) {
    double sum = 0;
    for (int col = 0; col < cols; col++)
        sum += arr.get(); // relative get to accumulate the row sum
    // Now set the sum at the end of the row. For this we have to position
    // back, so we use the absolute set.
    arr.set(row, cols - 1, sum);
    // The absolute set also moves the pointer, and since this is the end
    // of the row, it moves to the first element of the next row.
}
And that's all. Using the relative get/set methods we don't have to pass the "matrix indices" when processing continuous parts of the array, and also the implementation does not have to move the internal pointer which is more than handy when processing millions of elements as in your example.
I would recommend the following things, in order.
1. Investigate why your app is running out of memory. Are you creating arrays or other objects bigger than what you need? I hope you have done that already, but it is worth mentioning because it should not be ignored.
2. If you think there is nothing wrong in step 1, check that you are not running with too-low memory settings or a 32-bit JVM.
3. If there is no issue with step 2: it is not always true that a lightweight database will give you the best performance. If you don't need to search the temporary data, you may not gain much from implementing a lightweight database. But if your application does a lot of searching/querying of the temporary data, it may be a different case. If you don't need searching, a custom file format may be fast and efficient.
I hope this helps you solve the issue at hand :)
The simplest fix would be simply to give your program more memory. For example, if you specify -Xmx11g on your Java command line, the JVM will be able to allocate up to 11 GB of heap space: enough to hold several copies of your array, which is around 2.6 GB in size, in memory at a time.
If speed is really not an issue, you can do this even if you don't have enough physical memory, by allocating enough virtual memory and letting the OS swap the memory to disk.
I personally also think this is the best solution. Memory on this scale is cheaper than programmer time.
I would suggest a different approach.
Since most image processing operations go over all of the pixels in some order exactly once, it's usually possible to perform them on one piece of the image at a time. What I mean is that there's usually no random access to the pixels of the image. If I'm not mistaken, all of the operations you mention in your question fit this description.
Therefore, I would suggest loading the image lazily, a piece at a time. Then implement methods that retrieve the next chunk of pixels once the previous one is processed, and feed these chunks to the algorithms you use.
In order to support that, I would suggest converting the images to an uncompressed format that you could easily create a lazy reader for.
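A hedged sketch of that chunked reading using ImageIO's source regions: decode one horizontal strip at a time instead of the whole image. How much decoding this actually avoids depends on the format and reader plugin, which is why an easily-seekable uncompressed format helps:

import java.awt.Rectangle;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;
import javax.imageio.ImageReadParam;
import javax.imageio.ImageReader;
import javax.imageio.stream.ImageInputStream;

static void processInStrips(File file, int stripHeight) throws IOException {
    try (ImageInputStream iis = ImageIO.createImageInputStream(file)) {
        ImageReader reader = ImageIO.getImageReaders(iis).next();
        reader.setInput(iis);
        int width = reader.getWidth(0), height = reader.getHeight(0);
        for (int y = 0; y < height; y += stripHeight) {
            int h = Math.min(stripHeight, height - y);
            ImageReadParam param = reader.getDefaultReadParam();
            param.setSourceRegion(new Rectangle(0, y, width, h)); // decode only this strip
            BufferedImage strip = reader.read(0, param);
            // feed 'strip' to the statistics pass here
        }
        reader.dispose();
    }
}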
Not sure I would bother with a database for this; just open a temporary file, spill parts of your matrix there as needed, and delete the file when you're done. Whatever solution you choose depends somewhat on your matrix library being able to use it. If you're using a third-party library, you're probably limited to whatever options (if any) it provides. However, if you've implemented your own matrix operations, I would definitely just go with a temporary file that I manage myself. That will be the fastest and most lightweight option.
You can use a split-and-reduce technique: split your image into small fragments, or use a sliding window technique:
http://forums.ni.com/t5/Machine-Vision/sliding-window-technique/td-p/2586621
cheers,

for-loop very slow on Android device

I just ran into an issue while trying to write a bitmap-manipulating algorithm for an Android device.
I have a 1680x128 pixel Bitmap and need to apply a filter on it. But this very simple piece of code actually took almost 15-20 seconds to run on my Android device (an Xperia Ray with a 1 GHz processor).
So I tried to find the bottleneck, reduced as many lines of code as possible, and ended up with the loop itself, which took almost the same time to run.
for (int j = 0; j < 128; j++) {
    for (int i = 0; i < 1680; i++) {
        Double test = Math.random();
    }
}
Is it normal for such a device to take so much time for a simple for-loop with no difficult operations?
I'm very new to programming on mobile devices, so please excuse me if this question is stupid.
UPDATE: Got it faster now with some simpler operations.
But back to my main problem:
public static void filterImage(Bitmap img, FilterStrategy filter) {
    img.prepareToDraw();
    int height = img.getHeight();
    int width = img.getWidth();
    RGB rgb;
    for (int j = 0; j < height; j++) {
        for (int i = 0; i < width; i++) {
            rgb = new RGB(img.getPixel(i, j));
            if (filter.isBlack(rgb)) {
                img.setPixel(i, j, 0);
            } else {
                img.setPixel(i, j, 0xffffffff);
            }
        }
    }
}
The code above is what I really need to run faster on the device (nearly immediately).
Do you see any optimization potential in it?
RGB is just a class that extracts the red, green, and blue values, and the filter simply returns true if all three color components are below 100 or some other specified value.
The loop around img.getPixel(i, j) / img.setPixel(...) alone already takes 20 or more seconds. Is this such an expensive operation?
It may be because too many objects of type Double are being created; this increases heap pressure and the device starts freezing.
A way around it:
double[] arr = new double[1680]; // sized for the inner loop; primitives avoid boxing
for (int j = 0; j < 128; j++) {
    for (int i = 0; i < 1680; i++) {
        arr[i] = Math.random();
    }
}
First of all, Stephen C makes a good argument: try to avoid creating a bunch of RGB objects.
Second of all, you can make a huge improvement by replacing your relatively expensive calls to getPixel with a single call to getPixels.
I did some quick testing and managed to cut the runtime to about 10% of the original. Try it out. This is the code I used:
int[] pixels = new int[height * width];
img.getPixels(pixels, 0, width, 0, 0, width, height);
for (int pixel : pixels) {
    // check the pixel
}
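For completeness, a hedged way to finish that loop for the question's black/white filter, writing everything back with one setPixels call. The threshold mirrors the question; note the original code wrote 0 (transparent black) where this writes opaque black:

for (int i = 0; i < pixels.length; i++) {
    int p = pixels[i];
    boolean black = ((p >> 16) & 0xFF) < 100
            && ((p >> 8) & 0xFF) < 100
            && (p & 0xFF) < 100;
    pixels[i] = black ? 0xFF000000 : 0xFFFFFFFF;
}
img.setPixels(pixels, 0, width, 0, 0, width, height); // one bulk write-back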
There is a disclaimer in the docs for Math.random() that might be affecting performance; try creating an instance yourself rather than using the static version. The performance disclaimer is in the last two sentences:
Returns a pseudo-random double n, where n >= 0.0 && n < 1.0. This method reuses a single instance of Random. This method is thread-safe because access to the Random is synchronized, but this harms scalability. Applications may find a performance benefit from allocating a Random for each of their threads.
Try creating your own Random as a static field of your class to avoid synchronized access:
private static Random random = new Random();
Then use it as follows:
double r = random.nextDouble();
Also consider using float (random.nextFloat()) if you do not need double precision.
"RGB is only a class that calculates the red, green and blue value and the filter simply returns true if all three color parts are below 100 or any other specified value."
One problem is that you are creating height * width instances of the RGB class, simply to test whether a single pixel is black. Replace that method with a static method call that takes the pixel to be tested as an argument, as sketched below.
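A sketch of what that static method might look like (the name and threshold parameter are illustrative; the threshold mirrors the "all parts below 100" rule):

static boolean isBlack(int pixel, int threshold) {
    int r = (pixel >> 16) & 0xFF;
    int g = (pixel >> 8) & 0xFF;
    int b = pixel & 0xFF;
    // No RGB instance needed: test the packed pixel directly.
    return r < threshold && g < threshold && b < threshold;
}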
More generally, if you don't know why some piece of code is slow, profile it. In this case, the profiler would tell you that a significant amount of time is spent in the RGB constructor, and the memory profiler would tell you that large numbers of RGB objects are being created and garbage collected.

Procedural World Generation

I am generating my world (random, infinite, 2D) in sections that are x by y; when I reach the end of x, a new section is formed. If section one has hills, how can I make those hills continue into section two? Is there some way to make this happen?
So it would look something like this:
1221
1 = generated land
2 = non-generated land that will fill in between the two 1s
I get this now:
Is there any way to make this flow better?
This seems like just an algorithm issue. Your generation mechanism needs a start point. On the initial call it would be, say, 0; on subsequent calls it would be the finishing position of the previous "chunk".
If I were doing this, I'd probably make the height of the next point the previous height plus or minus, say, 0-3, using some sort of distribution: e.g. 10% of the time it's +/- 3, 25% of the time it is +/- 2, 25% of the time it is 0, and 40% of the time it is +/- 1.
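A small sketch of that distribution (percentages as given above; names illustrative):

import java.util.Random;

static int nextDelta(Random rnd) {
    int roll = rnd.nextInt(100); // uniform 0..99
    int magnitude;
    if (roll < 10)      magnitude = 3; // 10%: +/- 3
    else if (roll < 35) magnitude = 2; // 25%: +/- 2
    else if (roll < 60) magnitude = 0; // 25%: flat
    else                magnitude = 1; // 40%: +/- 1
    return rnd.nextBoolean() ? magnitude : -magnitude;
}
// nextHeight = previousHeight + nextDelta(rnd);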
If I understood your problem correctly, here is a solution:
If you generate the delta (difference) between the hills and cap it at a fixed value (so changes are never too big), then you can carry over the height of the last hill from the previous section when generating the new one, and apply the first randomly generated delta (of the new section) to the carried-over hill height.
If you're generating these "hills" sequentially, I would create an accessor method that provides the continuation of said hill with a value to begin the next section. It seems that you are already creating a random height for the hill, constrained by some value, when drawing a hill in a single section. Extend that functionality with this new accessor method.
My take on a possible implementation:
public class DrawHillSection {
    private int index;
    private int[] x = new int[50];

    public void drawHillSection() {
        for (int i = 0; i < 50; i++) {
            if (i == 0) {
                // Seed the first column with the last height of the previous section.
                x[0] = getPreviousHillSectionHeight(index - 1);
            } else {
                // ...
                // Your current implementation to create a random
                // height with some delta-y limit.
                // ...
            }
        }
    }

    public int getPreviousHillSectionHeight(int index) {
        return x[49];
    }
}
