Faster way to redraw a bitmap in android? - java

I'm processing an image for OCR, and I need to change the color of a bunch of pixels in a bitmap. Is there a faster method than setPixel and getPixel? Currently I'm just doing a for loop with a method that checks the input vs a bunch of RGB values and returns true if there's a match. Thank you very much!
Bitmap img = myImage.copy(Bitmap.Config.ARGB_8888, true);
for (int w = 0; w < img.getWidth(); w++) {
    for (int h = 0; h < img.getHeight(); h++) {
        int value = img.getPixel(w, h);
        if (filter(value)) {
            img.setPixel(w, h, Color.rgb(0, 0, 0));
        } else {
            img.setPixel(w, h, Color.rgb(255, 255, 255));
        }
    }
}

Just wanted to follow up and post code in case anyone finds this via search.
My code in the question was taking 3.4 s on average; the code below, using Gabe's advice, takes under 200 ms. I also added a length variable because I read that not reading array.length every iteration could improve performance, but in testing there doesn't seem to be much of a benefit.
int width = img.getWidth();
int height = img.getHeight();
int length = width * height;
int[] pixels = new int[length];
img.getPixels(pixels, 0, width, 0, 0, width, height);
for (int i = 0; i < length; i++) {
    if (filter(pixels[i])) {
        pixels[i] = Color.rgb(0, 0, 0);
    } else {
        pixels[i] = Color.rgb(255, 255, 255);
    }
}
img.setPixels(pixels, 0, width, 0, 0, width, height);

You could always try breaking the image up into quadrants (or a grid) and assigning each region to a separate thread; a rough sketch of that idea follows.
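A sketch of that suggestion, my own illustration rather than tested answer code: it reuses the int[] bulk approach from above and the asker's hypothetical filter(int) method, and splits the array into horizontal bands (one per thread) instead of a literal 2x2 grid.
// Needs java.util.ArrayList, java.util.List and java.util.concurrent.*
int width = img.getWidth();
int height = img.getHeight();
int[] pixels = new int[width * height];
img.getPixels(pixels, 0, width, 0, 0, width, height);

int threads = Runtime.getRuntime().availableProcessors();
ExecutorService pool = Executors.newFixedThreadPool(threads);
List<Future<?>> tasks = new ArrayList<>();
int bandRows = (height + threads - 1) / threads;           // rows per band, rounded up
for (int t = 0; t < threads; t++) {
    final int start = t * bandRows * width;
    final int end = Math.min(pixels.length, (t + 1) * bandRows * width);
    tasks.add(pool.submit(() -> {
        for (int i = start; i < end; i++) {
            // filter(int) is the asker's own matching method
            pixels[i] = filter(pixels[i]) ? Color.rgb(0, 0, 0) : Color.rgb(255, 255, 255);
        }
    }));
}
for (Future<?> f : tasks) {
    try {
        f.get();                                            // wait for each band to finish
    } catch (InterruptedException | ExecutionException e) {
        throw new RuntimeException(e);
    }
}
pool.shutdown();
img.setPixels(pixels, 0, width, 0, 0, width, height);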

I normally handle images with OpenCV via JNI. It is much faster than processing them in Java. If you are not familiar with JNI, see this link: What is JNI Graphics or how to use it?
You can also divide the image and process the parts with multiple threads!
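For reference, a hedged sketch using OpenCV's Android Java bindings rather than hand-written JNI (my own illustration, assuming OpenCV has already been initialised and that Otsu thresholding is an acceptable stand-in for the custom filter check):
// Needs org.opencv.android.Utils, org.opencv.core.Mat, org.opencv.imgproc.Imgproc
Mat src = new Mat();
Utils.bitmapToMat(img, src);                        // Bitmap -> RGBA Mat
Mat gray = new Mat();
Imgproc.cvtColor(src, gray, Imgproc.COLOR_RGBA2GRAY);
Mat bw = new Mat();
Imgproc.threshold(gray, bw, 0, 255, Imgproc.THRESH_BINARY | Imgproc.THRESH_OTSU);
Utils.matToBitmap(bw, img);                         // write the binarised result back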

Related

Fast way to convert BufferedImage to byte array, and add alpha

I'm using the following to add an overlay to a video, but it doesn't perform well because it processes every pixel one at a time (which is A LOT of work at full HD, 50 frames/sec!).
private static byte[] convert(BufferedImage image, int opacity)
{
    int imgWidth = image.getWidth();
    int imgHeight = image.getHeight();
    byte[] imgBuffer = new byte[imgWidth * imgHeight * 4];
    int imgLoc = 0;
    for (int y = 0; y < imgHeight; y++)
    {
        for (int x = 0; x < imgWidth; x++)
        {
            int argb = image.getRGB(x, y);
            imgBuffer[(imgLoc * 4) + 0] = (byte) ((argb >> 0) & 0x0FF);
            imgBuffer[(imgLoc * 4) + 1] = (byte) ((argb >> 8) & 0x0FF);
            imgBuffer[(imgLoc * 4) + 2] = (byte) ((argb >> 16) & 0x0FF);
            int alpha = ((argb >> 24) & 0x0FF);
            if (opacity < 100)
                alpha = (alpha * opacity) / 100;
            imgBuffer[(imgLoc * 4) + 3] = (byte) alpha;
            imgLoc++;
        }
    }
    return imgBuffer;
}
How can I re-write this code to perform better? I've tried many different things, e.g.:
image.getRGB(0, 0, image.getWidth(), image.getHeight(), null, 0, image.getWidth());
((DataBufferByte) image.getData().getDataBuffer()).getData()
But none of these seem to work; the overlay doesn't show up, while the slower method at least shows the overlay.
PS: I know the ARGB is converted to ABGR in the function above; I'll fix that later. The overlay should still show, albeit in different colors
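For what it's worth, a sketch of the bulk-read route (my own addition, not answer code from the thread): BufferedImage.getRGB(0, 0, w, h, null, 0, w) returns one packed int per pixel in row-major order, so the per-pixel getRGB(x, y) calls can be replaced by a single bulk read while keeping the same byte layout and opacity handling as the loop above.
private static byte[] convertBulk(BufferedImage image, int opacity)
{
    int w = image.getWidth();
    int h = image.getHeight();
    int[] argb = image.getRGB(0, 0, w, h, null, 0, w);    // one bulk read, row-major
    byte[] buffer = new byte[w * h * 4];
    for (int i = 0; i < argb.length; i++)
    {
        int p = argb[i];
        buffer[(i * 4) + 0] = (byte) (p & 0xFF);          // B
        buffer[(i * 4) + 1] = (byte) ((p >> 8) & 0xFF);   // G
        buffer[(i * 4) + 2] = (byte) ((p >> 16) & 0xFF);  // R
        int alpha = (p >>> 24) & 0xFF;
        if (opacity < 100)
            alpha = (alpha * opacity) / 100;
        buffer[(i * 4) + 3] = (byte) alpha;
    }
    return buffer;
}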

LibGDX - get currently rendered screen as a matrix of RGB values

In the middle of a game, I'd like to have access to the pixels being currently displayed on the screen as a matrix (or really several matrices) of RGB values. Is there an easy command to access this?
You can use the code from the official LibGDX wiki (https://github.com/libgdx/libgdx/wiki/Taking-a-Screenshot):
byte[] pixels = ScreenUtils.getFrameBufferPixels(0, 0, Gdx.graphics.getBackBufferWidth(), Gdx.graphics.getBackBufferHeight(), true);
Pixmap pixmap = new Pixmap(Gdx.graphics.getBackBufferWidth(), Gdx.graphics.getBackBufferHeight(), Pixmap.Format.RGBA8888);
BufferUtils.copy(pixels, 0, pixmap.getPixels(), pixels.length);
//Your logic here
pixmap.dispose();
Then you can get the desired pixel by using the Pixmap method
getPixel(int x, int y)
or iterate over all pixels with a loop like the following:
for (int w = 0; w < pixmap.getWidth(); w++) {
    for (int h = 0; h < pixmap.getHeight(); h++) {
        pixmap.getPixel(w, h);
    }
}
Remember that the Pixmap needs to be disposed. A list of objects that need to be disposed can be found here.
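Since the question asks for matrices of RGB values, here is a minimal sketch (my own addition, building on the pixmap from the snippet above) that unpacks each RGBA8888 pixel into separate red, green and blue matrices:
int w = pixmap.getWidth();
int h = pixmap.getHeight();
int[][] red = new int[h][w], green = new int[h][w], blue = new int[h][w];
for (int y = 0; y < h; y++) {
    for (int x = 0; x < w; x++) {
        int rgba = pixmap.getPixel(x, y);   // packed RGBA8888
        red[y][x]   = (rgba >>> 24) & 0xFF;
        green[y][x] = (rgba >>> 16) & 0xFF;
        blue[y][x]  = (rgba >>> 8) & 0xFF;
    }
}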

What order does PixelGrabber put pixels into the array in java?

What order does PixelGrabber put pixels into the array in java? Does it take the pixels along the width of the image first? Or along the height of the image first?
public static int[] convertImgToPixels(Image img, int width, int height) {
    int[] pixel = new int[width * height];
    PixelGrabber pixels = new PixelGrabber(img, 0, 0, width, height, pixel, 0, width);
    try {
        pixels.grabPixels();
    } catch (InterruptedException e) {
        throw new IllegalStateException("Interrupted Waiting for Pixels");
    }
    if ((pixels.getStatus() & ImageObserver.ABORT) != 0) {
        throw new IllegalStateException("Image Fetch Aborted");
    }
    return pixel;
}
The code example provided by the documentation has the following for loops:
for (int j = 0; j < h; j++) {
    for (int i = 0; i < w; i++) {
        handlesinglepixel(x + i, y + j, pixels[j * w + i]);
    }
}
The access pixels[j * w + i] shows that it goes along the row first, then down the columns; in other words, it grabs the pixels along the width first.
I'm pretty sure it uses row-major order, but the easiest way to check is to actually grab the pixels, set a sequence of them to a particular colour (for easy identification) and then save them out to an image. If the pixel strip appears vertical then the order is column-major; otherwise it is row-major. You can use code like this:
public static Image getImageFromArray(int[] pixels, int width, int height) {
    BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
    // write through the image itself: getData() only returns a copy of the raster,
    // and setRGB() accepts the packed ARGB ints directly
    image.setRGB(0, 0, width, height, pixels, 0, width);
    return image;
}
to convert the int[] to an image.
Also, I use ((DataBufferInt) img.getRaster().getDataBuffer()).getData() to quickly grab the pixels of the image. Any modifications to that int[] are reflected in the image and vice versa. And that is row-major for sure.
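To illustrate that direct-raster route, a small sketch of my own (assuming a TYPE_INT_ARGB BufferedImage named img and an import of java.awt.image.DataBufferInt):
// The backing int[] is row-major, so pixel (x, y) lives at index y * width + x.
int width = img.getWidth();
int[] data = ((DataBufferInt) img.getRaster().getDataBuffer()).getData();
int x = 5, y = 3;
int argb = data[y * width + x];     // read
data[y * width + x] = 0xFFFF0000;   // write opaque red; the image updates immediately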

Java: basic plotting, drawing a point/dot/pixel

I just need a panel I can draw inside, pixel by pixel.
ps: I don't need lines/circles or other primitives.
pps: the graphics library doesn't really matter; it can be AWT, Swing, Qt... anything. I just want something that is usually represented by a BufferedImage or something like that, where you set the colours of single pixels and then render it to the screen.
An example of one way to do it:
// Create the new image needed
img = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
for (int rc = 0; rc < height; rc++) {
    for (int cc = 0; cc < width; cc++) {
        // Set the pixel colour of the image n.b. x = cc, y = rc
        img.setRGB(cc, rc, Color.BLACK.getRGB());
    } //for cols
} //for rows
and then, from within the overridden paintComponent(Graphics g):
((Graphics2D)g).drawImage(img, <args>)
represented by Bufferedimage .. or something like that where you set colors of single pixels and then render it to the screen.
I suggest a BufferedImage for that, displayed in a JLabel, as seen in this answer. Of course, once we have an instance of BufferedImage, we can setRGB(..).
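A minimal sketch of that suggestion (my own illustration, not code from the linked answer): draw single pixels into a BufferedImage and show it in a JLabel.
// Needs javax.swing.*, java.awt.Color and java.awt.image.BufferedImage
BufferedImage canvas = new BufferedImage(200, 200, BufferedImage.TYPE_INT_RGB);
canvas.setRGB(10, 20, Color.RED.getRGB());      // set a single pixel
JLabel view = new JLabel(new ImageIcon(canvas));
JFrame frame = new JFrame("Pixels");
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.add(view);
frame.pack();
frame.setVisible(true);
// after further setRGB(..) calls, just view.repaint();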
If you honestly need to render pixel by pixel, I have done this at length for a hotspot-visualization piece of software I wrote for a research lab.
What you want is BufferedImage.setRGB(..). If you are drawing pixel by pixel, I assume you have implemented an algorithm that computes the RGB value for each pixel (much like we did with the heat maps). This is what we used in an old IE-compatible applet back in the day. It worked like a charm and was relatively fast given what it was doing.
Unfortunately, any time you manipulate the RGB values of a BufferedImage directly, it loses its caching in the backing video memory.
Since Java 7, though, I have heard that the underlying Java 2D implementation will try to re-cache the image into video memory once the manipulations stop and the image is rendered over and over again. For example, while you are rendering the heat map it is not accelerated, but once it is rendered, as you drag the window around and work with the app, the backing image data can become re-accelerated.
If you want to do something quickly, you can just use the Graphics methods setColor and drawLine. For example:
public void paintComponent(Graphics g) {
    super.paintComponent(g);
    // Set the colour of pixel (x=1, y=2) to black
    g.setColor(Color.BLACK);
    g.drawLine(1, 2, 1, 2);
}
I have used this technique and it wasn't terribly slow. I haven't compared it to using BufferedImage objects.
A little late here, but you could always do it the way Java game programmers do, with a Screen class:
public class Screen {
    private int width, height;
    public int[] pixels;

    public Screen(int width, int height) {
        this.width = width;
        this.height = height;
        pixels = new int[width * height];
    }

    public void render() {
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                pixels[x + y * width] = 0xFFFFFF; // make every pixel white
            }
        }
    }

    public void clear() {
        for (int i = 0; i < pixels.length; i++) {
            pixels[i] = 0; // make every pixel black
        }
    }
}
And then in your main class:
private Screen screen;
private BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
private int[] pixels = ((DataBufferInt) image.getRaster().getDataBuffer()).getData();

public void render() {
    BufferStrategy bs = getBufferStrategy();
    if (bs == null) {
        createBufferStrategy(3);
        return;
    }
    screen.clear();
    screen.render();
    for (int i = 0; i < pixels.length; i++) {
        pixels[i] = screen.pixels[i];
    }
    Graphics g = bs.getDrawGraphics();
    g.drawImage(image, 0, 0, getWidth(), getHeight(), null);
    g.dispose();
    bs.show();
}
That should work, I think.
Here is a solution that is fast and yet compatible with Graphics2D, in the sense that it doesn't draw from a detached pixel array.
fun drawLand(area: Rectangle): BufferedImage {
    val width = area.width
    val height = area.height
    val image = BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB)
    val g2 = image.createGraphics()
    g2.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON)
    g2.setRenderingHint(RenderingHints.KEY_RENDERING, RenderingHints.VALUE_RENDER_QUALITY)
    g2.background = Color.PINK
    g2.clearRect(area.x, area.y, area.width, area.height)
    val squares: Sequence<Square> = repo.getSquares(area)
    val pixels: IntArray = (image.raster.dataBuffer as DataBufferInt).data
    for (square in squares) {
        val color = square.color or OPAQUE // 0xFF000000.toInt()
        // the backing array is row-major, so the scanline stride is the image width
        val base = square.location.y * width + square.location.x
        pixels[base] = color
    }
    g2.dispose()
    return image
}
Legend:
In the background, the image is backed by an array of Java primitives, the exact type of which depends on the BufferedImage.TYPE_* constant.
Here I opted for TYPE_INT_ARGB, so each pixel is conveniently an Int. Then you can go row by row and tamper with the underlying pixels.

Taking a picture as input, making it greyscale and then outputting it

I'm attempting to take a picture as input, then manipulate said picture (I specifically want to make it greyscale) and then output the new image. This is a snippet of the code that I'm editing in order to do so, but I'm getting stuck. Any ideas of what I can change or do next? Greatly appreciated!
public boolean recieveFrame(Image frame) {
    int width = frame.width();
    int height = frame.height();
    for (int i = 0; i < width; i++) {
        for (int j = 0; j < height; j++) {
            Color c1 = frame.get(i, j);
            double greyScale = (double) ((Color.red * .3) + (Color.green * .59) + (Color.blue * .11));
            Color newGrey = Color.greyScale(greyScale);
            frame.set(i, j, newGrey);
        }
    }
    boolean shouldStop = displayImage(frame);
    return shouldStop;
}
I'm going to try to stick as close as possible to what you already have. So, I'll assume that you are looking for how to do pixel-level processing on an Image, rather than just looking for a technique that happens to work for converting to greyscale.
The first step is that you need the image to be a BufferedImage. This is what you get by default from ImageIO, but if you have some other type of image, you can create a BufferedImage and paint the other image into it first:
BufferedImage buffer = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
Graphics2D g = buffer.createGraphics();
g.drawImage(image, 0, 0);
g.dispose();
Then, you can operate on the pixels like this:
public void makeGrey(BufferedImage image) {
    for (int x = 0; x < image.getWidth(); ++x) {
        for (int y = 0; y < image.getHeight(); ++y) {
            Color c1 = new Color(image.getRGB(x, y));
            int grey = (int) (c1.getRed() * 0.3
                            + c1.getGreen() * 0.59
                            + c1.getBlue() * 0.11
                            + 0.5);
            Color newGrey = new Color(grey, grey, grey);
            image.setRGB(x, y, newGrey.getRGB());
        }
    }
}
Note that this code is horribly slow. A much faster option is to extract all the pixels from the BufferedImage into an int[], operate on that, and then set it back into the image. This uses the other versions of the setRGB()/getRGB() methods that you'll find in the javadoc.
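A rough sketch of that bulk approach (my own addition, not part of the original answer), assuming a TYPE_INT_RGB or TYPE_INT_ARGB image:
public void makeGreyFast(BufferedImage image) {
    int w = image.getWidth();
    int h = image.getHeight();
    // one bulk read: packed ARGB ints in row-major order
    int[] pixels = image.getRGB(0, 0, w, h, null, 0, w);
    for (int i = 0; i < pixels.length; i++) {
        int p = pixels[i];
        int r = (p >> 16) & 0xFF;
        int g = (p >> 8) & 0xFF;
        int b = p & 0xFF;
        int grey = (int) (r * 0.3 + g * 0.59 + b * 0.11 + 0.5);
        pixels[i] = (p & 0xFF000000) | (grey << 16) | (grey << 8) | grey; // keep the alpha bits
    }
    // one bulk write
    image.setRGB(0, 0, w, h, pixels, 0, w);
}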
