I'm trying to capture a small section of the screen and read each pixel so I can compare it with the other pixels. The code to capture the screen image is:
Rectangle captureSize = new Rectangle(x, y, width, height); // Rectangle takes (x, y, width, height)
BufferedImage image = robot.createScreenCapture(captureSize);
And to read it pixel by pixel I used:
for (int y = 0; y < image.getHeight(); y = y + 1) {
    for (int x = 0; x < image.getWidth(); x = x + 1) {
        color = image.getRGB(x, y);
        // Some methods etc
    }
}
However, when I ran it I was shocked: createScreenCapture took about 40 ms, and calling getRGB on every pixel took around 350 ms, which is far too slow for a 60 fps application. By the way, my image is 800x400 pixels. I didn't try the
rgbArray = image.getRGB(startX, startY, w, h, rgbArray, offset, scansize);
method because I don't know how efficient it is, and reordering my code would be a bit difficult. Any help would be appreciated.
Use
rgbArray = image.getRGB(startX, startY, w, h, rgbArray, offset, scansize);
Reading the pixel values from the array is much faster than making a method call for each pixel, and the single getRGB call that fills the array is not slow.
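A minimal sketch of that approach, assuming the same image and loop structure as in the question (variable names are illustrative):
int w = image.getWidth();
int h = image.getHeight();
int[] rgbArray = new int[w * h];
// One bulk call instead of one getRGB call per pixel
image.getRGB(0, 0, w, h, rgbArray, 0, w); // offset = 0, scansize = w
for (int y = 0; y < h; y = y + 1) {
    for (int x = 0; x < w; x = x + 1) {
        int color = rgbArray[y * w + x];
        // Some methods etc
    }
}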
Related
Is there any reliable way of cropping surrounding white space from a PDF page or a BufferedImage in Java, ideally using only open source (Apache or MIT licensed) code?
For example, in a PDF document processed page by page, the algorithm would be
1. Detect the rectangle surrounding the non-whitespace content (text, tables, images) of each page.
2. Compare the rectangles and choose the largest one (so that all pages/images have a uniform size).
3. Crop everything outside the largest rectangle on each page (everything cropped away should be whitespace).
The main requirement is to reliably implement (3). Operations directly on PDF pages (e.g., using PDFBox) or on their BufferedImage counterparts are equally fine.
I have posted a "brute force" answer to that, any improvements most welcome. :-)
Here is a brute force answer to the question.
public static BufferedImage trim(BufferedImage image, int rgb) {
    // Track the bounding box of all pixels that differ from the background colour
    int x1 = Integer.MAX_VALUE;
    int y1 = Integer.MAX_VALUE;
    int x2 = 0;
    int y2 = 0;
    for (int x = 0; x < image.getWidth(); ++x) {
        for (int y = 0; y < image.getHeight(); ++y) {
            if (image.getRGB(x, y) != rgb) {
                x1 = Math.min(x1, x);
                y1 = Math.min(y1, y);
                x2 = Math.max(x2, x);
                y2 = Math.max(y2, y);
            }
        }
    }
    // +1 so the last non-background row/column is kept
    WritableRaster raster = image.getRaster().createWritableChild(x1, y1, x2 - x1 + 1, y2 - y1 + 1, 0, 0, null);
    return new BufferedImage(image.getColorModel(), raster, image.getColorModel().isAlphaPremultiplied(), null);
}
The solution above is slow because it goes over all pixels. It also assumes that the edges to be trimmed have a uniform color, the value of which is represented by the rgb parameter (-1 for white). Moreover, it "magnifies" the non-trimmed content, since it actually crops the center (non-trimmed) part of the image.
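As a usage illustration (the file names are placeholders, and the trim method above is assumed to be in scope):
BufferedImage page = ImageIO.read(new File("page.png"));     // hypothetical rendered page
BufferedImage cropped = trim(page, 0xFFFFFFFF);              // 0xFFFFFFFF == -1, i.e. opaque white
ImageIO.write(cropped, "png", new File("page-trimmed.png"));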
The border needs to be made out of the closest pixels of the given image. I saw some code online and came up with the following. What am I doing wrong? I'm new to Java, and I am not allowed to use any methods.
/**
* TODO Method to be done. It contains some code that has to be changed
*
* @param enlargeFactorPercentage the border in percentage
* @param dimAvg the radius in pixels to get the average colour
* of each pixel for the border
*
* @return a new image extended with borders
*/
public static BufferedImage addBorders(BufferedImage image, int enlargeFactorPercentage, int dimAvg) {
// TODO method to be done
int height = image.getHeight();
int width = image.getWidth();
System.out.println("Image height = " + height);
System.out.println("Image width = " + width);
// create new image
BufferedImage bi = new BufferedImage(width, height, image.getType());
// copy image
for (int y = 0; y < height; y++) {
for (int x = 0; x < width; x++) {
int pixelRGB = image.getRGB(x, y);
bi.setRGB(x, y, pixelRGB);
}
}
// draw top and bottom borders
// draw left and right borders
// draw corners
for (int y = 0; y < height; y++) {
for (int x = 0; x < width; x++) {
int pixelRGB = image.getRGB(x, y);
for (enlargeFactorPercentage = 0; enlargeFactorPercentage < 10; enlargeFactorPercentage++){
bi.setRGB(width, enlargeFactorPercentage, pixelRGB * dimAvg);
bi.setRGB(enlargeFactorPercentage, height, pixelRGB * dimAvg);
}
}
}
return bi;
}
I am not allowed to use any methods.
What does that mean? How can you write code if you can't use methods from the API?
int enlargeFactorPercentage
What is that for? To me, enlarge means to make bigger. So if you have a factor of 10 and your image is (100, 100), then the new image would be (110, 110), which means the border would be 5 pixels?
Your code is creating the BufferedImage the same size as the original image. So does that mean you make the border 5 pixels and chop off 5 pixels from the original image?
Without proper requirements we can't help.
@return a new image extended with borders
Since you also have a comment that says "extended", I'm going to assume your requirement is to return the larger image.
So the solution I would use (sketched after this list) is to:
create the BufferedImage at the increased size
get the Graphics2D object from the BufferedImage
fill the entire BufferedImage with the color you want for the border using the Graphics2D.fillRect(...) method
paint the original image onto the enlarged BufferedImage using the Graphics2D.drawImage(...) method.
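A rough sketch of those steps, assuming the percentage enlarges the whole image and the extra space is split evenly between the two sides (the method and variable names are illustrative, and the dimAvg averaging from the question is ignored in favour of a plain border colour):
public static BufferedImage addBorder(BufferedImage image, int enlargeFactorPercentage, Color borderColor) {
    // Work out the enlarged size; half of the extra space goes on each side
    int extraW = image.getWidth() * enlargeFactorPercentage / 100;
    int extraH = image.getHeight() * enlargeFactorPercentage / 100;
    BufferedImage enlarged = new BufferedImage(
            image.getWidth() + extraW, image.getHeight() + extraH, BufferedImage.TYPE_INT_RGB);
    Graphics2D g = enlarged.createGraphics();
    // Fill the whole canvas with the border colour first...
    g.setColor(borderColor);
    g.fillRect(0, 0, enlarged.getWidth(), enlarged.getHeight());
    // ...then paint the original image centred on top of it
    g.drawImage(image, extraW / 2, extraH / 2, null);
    g.dispose();
    return enlarged;
}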
Hello and welcome to stackoverflow!
Not sure what you mean by "not allowed to use methods". Without methods you cannot even run a program, because the "thing" with public static void main(String[] args) is a method (the main method) and you need it, because it is the program's starting point...
But to answer your question:
You have to load your image; one possibility is to use ImageIO. Then you get a Graphics2D object from it and call drawRect() to draw a border rectangle:
BufferedImage bi = // load image, e.g. with ImageIO.read(...)
Graphics2D g = (Graphics2D) bi.getGraphics();
g.drawRect(0, 0, bi.getWidth() - 1, bi.getHeight() - 1);
This short code is just a hint. Try it out and read the documentation for BufferedImage and for Graphics2D.
Edit: Please note that this is not quite correct. With the code above you draw over the outer line of pixels of the image. If you don't want to cover any pixels, you have to enlarge the image and draw with bi.getWidth() + 2 and bi.getHeight() + 2; the +2 is because you need one extra pixel on each side of the image.
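A rough illustration of that edit (the input is assumed to already be loaded into a BufferedImage called src; the border colour is arbitrary):
// Create a canvas one pixel larger on each side, draw the border, then the original image inside it
BufferedImage framed = new BufferedImage(src.getWidth() + 2, src.getHeight() + 2, BufferedImage.TYPE_INT_RGB);
Graphics2D g = framed.createGraphics();
g.setColor(Color.BLACK);
g.drawRect(0, 0, framed.getWidth() - 1, framed.getHeight() - 1); // 1-pixel border on the very edge
g.drawImage(src, 1, 1, null);                                    // original pixels untouched inside the border
g.dispose();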
In the middle of a game, I'd like to have access to the pixels being currently displayed on the screen as a matrix (or really several matrices) of RGB values. Is there an easy command to access this?
You can use the code from the [official LibGDX wiki](https://github.com/libgdx/libgdx/wiki/Taking-a-Screenshot):
byte[] pixels = ScreenUtils.getFrameBufferPixels(0, 0, Gdx.graphics.getBackBufferWidth(), Gdx.graphics.getBackBufferHeight(), true);
Pixmap pixmap = new Pixmap(Gdx.graphics.getBackBufferWidth(), Gdx.graphics.getBackBufferHeight(), Pixmap.Format.RGBA8888);
BufferUtils.copy(pixels, 0, pixmap.getPixels(), pixels.length);
//Your logic here
pixmap.dispose();
Then you can get the desired pixel by using the Pixmap method:
getPixel(int x, int y)
or just iterate over all pixels using a loop like the following:
for (int w = 0; w < pixmap.getWidth(); w++) {
    for (int h = 0; h < pixmap.getHeight(); h++) {
        int pixel = pixmap.getPixel(w, h);
        // Your logic here
    }
}
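The int returned by getPixel is packed in RGBA8888 order, so the individual channels can be unpacked with shifts, roughly like this:
int pixel = pixmap.getPixel(w, h);
int r = (pixel >>> 24) & 0xFF; // red
int g = (pixel >>> 16) & 0xFF; // green
int b = (pixel >>> 8) & 0xFF;  // blue
int a = pixel & 0xFF;          // alpha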
Remember that the Pixmap needs to be disposed. A list of objects that need to be disposed can be found here.
I'm looking for a way to find the dimensions of the visible part of an image. The image I'm displaying in my ImageView is in .png format; it has a portion that is "visible" and the rest is an invisible background.
Example image:
(The box in it isn't visible in the real image; it's just there to illustrate the point.)
So in this image there is only a small red wedge shape which is visible, but the full .png is really a rectangle of larger dimensions, thus I can't use something like bitmap.getWidth();
So:
How can I find out if a particular pixel in an image is "invisible" or not? Note: I know I can use bitmap.getPixel(x, y); to get a pixel, but I don't know what to do with it once I have it; is a test for 0 sufficient?
Is there a better way of finding the max width/height of the "visible" portion other than iterating through every pixel looking for the visible "end points"?
How can I find out if a particular pixel in an image is "invisible" or not? Note: I know I can use bitmap.getPixel(x, y); to get a pixel, but I don't know what to do with it once I have it; is a test for 0 sufficient?
Use image.getPixel(x, y) != Color.TRANSPARENT to check whether the pixel is visible or not.
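Note that Color.TRANSPARENT is simply 0, so this check only catches pixels whose whole ARGB value is zero; if the image may contain coloured but fully transparent pixels, testing the alpha channel directly is a safer variant (a small sketch):
int pixel = bitmap.getPixel(x, y);
boolean visible = Color.alpha(pixel) != 0; // alpha of 0 means fully transparent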
Is there a better way of finding the max width/height of the "visible" portion other than iterating through every pixel looking for the visible "end points"?
There are no built-in functions for this. You can use the function below to crop the image to the bounding rectangle of its non-transparent pixels.
public static Bitmap removeTransparentPixels(Bitmap image) {
    // x1/y1 track the smallest visible coordinates, width/height the largest ones
    int x1 = image.getWidth();
    int y1 = image.getHeight();
    int width = 0, height = 0;
    for (int x = 0; x < image.getWidth(); x++) {
        for (int y = 0; y < image.getHeight(); y++) {
            if (image.getPixel(x, y) != Color.TRANSPARENT) {
                if (x < x1) {
                    x1 = x;
                }
                if (x > width) {
                    width = x;
                }
                if (y < y1) {
                    y1 = y;
                }
                if (y > height) {
                    height = y;
                }
            }
        }
    }
    // Convert the max coordinates into a size; +1 so the last visible row/column is kept
    width = width - x1 + 1;
    height = height - y1 + 1;
    return Bitmap.createBitmap(image, x1, y1, width, height);
}
Make sure that your image doesn't contain only transparent pixels. If it does, the statement Bitmap.createBitmap(image, x1, y1, width, height); will throw an exception.
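A hypothetical defensive wrapper (the helper name is made up) could catch that case and fall back to the original bitmap:
public static Bitmap cropOrReturnOriginal(Bitmap source) {
    try {
        return removeTransparentPixels(source);
    } catch (IllegalArgumentException e) {
        // createBitmap rejects a non-positive width/height, which is what happens
        // when every pixel in the source bitmap is transparent
        return source;
    }
}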
If keeping the processing time down is a concern, you would probably need to take advantage of an image-processing library such as OpenCV or ImageMagick, which could likely get that information back to you quickly via specific function calls.
As far as I know, there is no built-in way to determine this through a standard call in the image libraries that exist within Android. You would probably need to do some sort of heuristic check for transparent pixels, as you suggested in your original thought.
I'm using LWJGL and Slick2D for a game I'm making. I can't seem to get it to draw the way I want, so I came up with the idea of writing my own drawing method. It takes an image, an x, and a y, goes through each pixel in the image, gets its color, and draws that pixel at the parameter x plus the pixel's x coordinate (and likewise for y). If the pixel's alpha channel isn't 255 it isn't drawn, though I'll fix that later. The problem is that whenever I run the code I get "Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: -2044". I'm really confused and hoping someone can figure out why this is happening.
private void DrawImage(Image image, int xP, int yP)
{
//xP And yP Are The Position Parameters
//Begin Drawing Individual Pixels
glBegin(GL_POINTS);
//Going Across The X And The Y Coords Of The Image
for (int x = 1; x <= image.getWidth(); x++)
{
for (int y = 1; y <= image.getHeight(); y++)
{
//Define A Color Object
Color color = null;
//Set The Color Object And Check If The Color Is Completly Solid Before Rendering
if ((color = image.getColor(x, y)).a == 255)
{
//Bind The Color
color.bind();
//Draw The Color At The Coord Parameters And The X/Y Coord Of The Individual Pixel
glVertex2i(xP + x - 1, yP + y - 1);
}
}
}
glEnd();
}
My answer assumes that the texture is backed by an array of data.
I have a feeling it is the getColor() method. Your for loops run up to and including the width and height values, but arrays usually start at index 0, and width and height are element counts. So when x or y reaches the width or height, the lookup goes past the end of the texture array and throws an exception.
Try removing the <= and replacing it with <
EXAMPLE:
for (int x = 1; x < image.getWidth(); x++)
It may also help you to start off with zero so you can get the entire image.
EXAMPLE
for (int x = 0; x < image.getWidth(); x++)
Here is a link on arrays.
This way, when you ask for the color at whatever position, it will never ask for a color reaching beyond what is in the texture array. Hopefully I made sense.
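Putting both suggestions together, the loops in DrawImage would look something like this (the alpha check from the original code is left out here for brevity; only the indexing changes):
// Iterate from 0 (inclusive) to width/height (exclusive) so getColor never
// reads past the end of the texture data
for (int x = 0; x < image.getWidth(); x++)
{
    for (int y = 0; y < image.getHeight(); y++)
    {
        Color color = image.getColor(x, y);
        color.bind();
        glVertex2i(xP + x, yP + y); // no -1 offset needed once x and y start at 0
    }
}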