I just need a panel inside of which I'd be able to draw. I want to be able to draw pixel by pixel.
PS: I don't need lines/circles or other primitives.
PPS: The graphics library does not really matter; it can be AWT, Swing, Qt... anything. I just want something that is usually represented by a BufferedImage or something like that, where you set the colors of single pixels and then render it to the screen.
An example of one way to do it:
// Create the new image needed
img = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB );
for ( int rc = 0; rc < height; rc++ ) {
for ( int cc = 0; cc < width; cc++ ) {
// Set the pixel colour of the image n.b. x = cc, y = rc
img.setRGB(cc, rc, Color.BLACK.getRGB() );
}//for cols
}//for rows
and then, from within the overridden paintComponent(Graphics g):
((Graphics2D)g).drawImage(img, <args>)
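Put together, a minimal self-contained sketch of that approach might look like the following (the PixelPanel class name, the 320x240 size, and the setPixel helper are my own additions, not from the question):
import java.awt.Color;
import java.awt.Dimension;
import java.awt.Graphics;
import java.awt.image.BufferedImage;
import javax.swing.JFrame;
import javax.swing.JPanel;
import javax.swing.SwingUtilities;

public class PixelPanel extends JPanel {

    private final BufferedImage img;

    public PixelPanel(int width, int height) {
        img = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        setPreferredSize(new Dimension(width, height));
        // Fill the image pixel by pixel (here: all black)
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                img.setRGB(x, y, Color.BLACK.getRGB());
            }
        }
    }

    // Change a single pixel and ask Swing to repaint the panel
    public void setPixel(int x, int y, int rgb) {
        img.setRGB(x, y, rgb);
        repaint();
    }

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        g.drawImage(img, 0, 0, null);
    }

    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Pixel panel");
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.add(new PixelPanel(320, 240));
            frame.pack();
            frame.setVisible(true);
        });
    }
}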
"represented by BufferedImage .. or something like that where you set colors of single pixels and then render it to the screen"
I suggest a BufferedImage for that, displayed in a JLabel - as seen in this answer. Of course, once we have an instance of BufferedImage, we can setRGB(..).
If you honestly need to render pixel by pixel, I have done this at length for a hotspot visualization piece of software I wrote for a research lab.
What you want is BufferedImage.setRGB(..). If you are drawing pixel by pixel, I assume you have implemented an algorithm that produces the RGB value for each pixel (much like we did with the heat maps). This is what we used in an old IE-compatible Applet back in the day. It worked like a charm and was relatively fast given what it was doing.
Unfortunately, any time you manipulate the RGB values of a BufferedImage directly, the image is no longer cached in the backing video memory.
Since Java 7, though, I have heard that the underlying Java 2D implementation will attempt to re-cache the image into video memory once the manipulations stop and rendering is done over and over again. For example, while you are rendering the heat map it is not accelerated, but once it is rendered, as you drag the window around and work with the app, the backing image data can become re-accelerated.
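As a hedged sketch of that idea (computeHeatValue, img, width, and height are placeholders of mine, standing in for whatever per-pixel algorithm and image you have), the bulk setRGB overload lets you push a whole row of packed 0xAARRGGBB values at once instead of calling setRGB per pixel:
int[] row = new int[width];
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        row[x] = computeHeatValue(x, y); // packed 0xAARRGGBB int, placeholder algorithm
    }
    img.setRGB(0, y, width, 1, row, 0, width); // write one scanline at a time
}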
If you want to do something quickly, you can just use the Graphics methods setColor and drawLine. For example:
public void paintComponent(Graphics g) {
super.paintComponent(g);
// Set the colour of pixel (x=1, y=2) to black
g.setColor(Color.BLACK);
g.drawLine(1, 2, 1, 2);
}
I have used this technique and it wasn't terribly slow. I haven't compared it to using BufferedImage objects.
A little late here, but you could always do it the way Java game programmers do, with a Screen class:
public class Screen {
private int width, height;
public int[] pixels;
public Screen(int width, int height) {
this.width = width;
this.height = height;
pixels = new int[width * height];
}
public void render() {
for(int y = 0; y < height; y++) {
for(int x = 0; x < width; x++) {
pixels[x + y * width] = 0xFFFFFF; //make every pixel white
}
}
}
public void clear() {
for(int i = 0; i < pixels.length; i++) {
pixels[i] = 0; //make every pixel black
}
}
}
And then in your main class:
private Screen screen;
private BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
private int[] pixels = ((DataBufferInt) image.getRaster().getDataBuffer()).getData();
public void render() {
BufferStrategy bs = getBufferStrategy();
if (bs == null) {
createBufferStrategy(3);
return;
}
screen.clear();
screen.render();
for(int i = 0; i < pixels.length; i++) {
pixels[i] = screen.pixels[i];
}
Graphics g = bs.getDrawGraphics();
g.drawImage(image, 0, 0, getWidth(), getHeight(), null);
g.dispose();
bs.show();
}
That should work, I think.
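For context, a hedged sketch of how that render() method is typically wired up; the Game class name, the fixed 320x240 size, and the endless loop are my own assumptions, reusing the Screen class above and extending java.awt.Canvas (which is where getBufferStrategy()/createBufferStrategy() come from):
import java.awt.Canvas;
import java.awt.Dimension;
import java.awt.Graphics;
import java.awt.image.BufferStrategy;
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferInt;
import javax.swing.JFrame;

public class Game extends Canvas implements Runnable {
    private static final int WIDTH = 320, HEIGHT = 240;

    private final Screen screen = new Screen(WIDTH, HEIGHT);
    private final BufferedImage image =
            new BufferedImage(WIDTH, HEIGHT, BufferedImage.TYPE_INT_RGB);
    private final int[] pixels =
            ((DataBufferInt) image.getRaster().getDataBuffer()).getData();

    public void render() {
        BufferStrategy bs = getBufferStrategy();
        if (bs == null) {
            createBufferStrategy(3);
            return;
        }
        screen.clear();
        screen.render();
        // equivalent to the manual copy loop shown above
        System.arraycopy(screen.pixels, 0, pixels, 0, pixels.length);
        Graphics g = bs.getDrawGraphics();
        g.drawImage(image, 0, 0, getWidth(), getHeight(), null);
        g.dispose();
        bs.show();
    }

    @Override
    public void run() {
        while (true) {
            render(); // no frame limiting here; this is only a sketch
        }
    }

    public static void main(String[] args) {
        Game game = new Game();
        game.setPreferredSize(new Dimension(WIDTH, HEIGHT));
        JFrame frame = new JFrame("Pixels");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.add(game);
        frame.pack();
        frame.setVisible(true);
        new Thread(game).start();
    }
}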
I'm offering a solution which is fast and yet compatible with Graphics2D, in the sense that it doesn't draw from a detached pixel array but writes straight into the image's own backing array.
fun drawLand(area: Rectangle): BufferedImage {
val height = area.height
val image = BufferedImage(area.width, height, BufferedImage.TYPE_INT_ARGB)
val g2 = image.createGraphics()
g2.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON)
g2.setRenderingHint(RenderingHints.KEY_RENDERING, RenderingHints.VALUE_RENDER_QUALITY)
g2.background = Color.PINK
g2.clearRect(area.x, area.y, area.width, area.height)
val squares: Sequence<Square> = repo.getSquares(area)
val pixels: IntArray = (image.raster.dataBuffer as DataBufferInt).data
for (square in squares) {
val color = square.color or OPAQUE // 0xFF000000.toInt()
        val base = square.location.y * image.width + square.location.x // row-major: y * width + x
pixels[base] = color
}
g2.dispose()
return image
}
Legend:
Under the hood, the image is backed by an array of some Java primitive type; which one depends on the BufferedImage.TYPE_* constant used.
Here I opted for TYPE_INT_ARGB, so each pixel is conveniently a single Int. You can then go row by row and write the underlying pixels directly.
Related
The border needs to be made out of the closest pixels of the given image. I saw some code online and came up with the following. What am I doing wrong? I'm new to Java, and I am not allowed to use any methods.
/**
 * TODO Method to be done. It contains some code that has to be changed
 *
 * @param enlargeFactorPercentage the border in percentage
 * @param dimAvg the radius in pixels to get the average colour
 *               of each pixel for the border
 *
 * @return a new image extended with borders
 */
public static BufferedImage addBorders(BufferedImage image, int enlargeFactorPercentage, int dimAvg) {
// TODO method to be done
int height = image.getHeight();
int width = image.getWidth();
System.out.println("Image height = " + height);
System.out.println("Image width = " + width);
// create new image
BufferedImage bi = new BufferedImage(width, height, image.getType());
// copy image
for (int y = 0; y < height; y++) {
for (int x = 0; x < width; x++) {
int pixelRGB = image.getRGB(x, y);
bi.setRGB(x, y, pixelRGB);
}
}
// draw top and bottom borders
// draw left and right borders
// draw corners
for (int y = 0; y < height; y++) {
for (int x = 0; x < width; x++) {
int pixelRGB = image.getRGB(x, y);
for (enlargeFactorPercentage = 0; enlargeFactorPercentage < 10; enlargeFactorPercentage++){
bi.setRGB(width, enlargeFactorPercentage, pixelRGB * dimAvg);
bi.setRGB(enlargeFactorPercentage, height, pixelRGB * dimAvg);
}
}
}
return bi;
}
I am not allowed to use any methods.
What does that mean? How can you write code if you can't use methods from the API?
int enlargeFactorPercentage
What is that for? To me, enlarge means to make bigger. So if you have a factor of 10 and your image is (100, 100), then the new image would be (110, 110), which means the border would be 5 pixels?
Your code is creating the BufferedImage the same size as the original image. So does that mean you make the border 5 pixels and chop off 5 pixels from the original image?
Without proper requirements we can't help.
@return a new image extended with borders
Since you also have a comment that says "extended", I'm going to assume your requirement is to return the larger image.
So the solution I would use (sketched below) is to:
create the BufferedImage at the increased size
get the Graphics2D object from the BufferedImage
fill the entire BufferedImage with the color you want for the border using the Graphics2D.fillRect(...) method
paint the original image onto the enlarged BufferedImage using the Graphics2D.drawImage(...) method.
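A hedged sketch of those four steps (the method shape, the black border colour, and the way the border size is derived from enlargeFactorPercentage are my own assumptions, to be adapted to the actual requirements; with a factor of 10 and a 100x100 image this gives a 110x110 result, as discussed above):
// needs: java.awt.Color, java.awt.Graphics2D, java.awt.image.BufferedImage
public static BufferedImage addBorders(BufferedImage image, int enlargeFactorPercentage) {
    int borderX = image.getWidth() * enlargeFactorPercentage / 100 / 2;
    int borderY = image.getHeight() * enlargeFactorPercentage / 100 / 2;
    // 1. create the BufferedImage at the increased size
    BufferedImage enlarged = new BufferedImage(
            image.getWidth() + 2 * borderX, image.getHeight() + 2 * borderY,
            BufferedImage.TYPE_INT_RGB);
    // 2. get the Graphics2D object from the BufferedImage
    Graphics2D g2 = enlarged.createGraphics();
    // 3. fill the entire image with the border colour (black here)
    g2.setColor(Color.BLACK);
    g2.fillRect(0, 0, enlarged.getWidth(), enlarged.getHeight());
    // 4. paint the original image onto the enlarged BufferedImage
    g2.drawImage(image, borderX, borderY, null);
    g2.dispose();
    return enlarged;
}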
Hello and welcome to Stack Overflow!
Not sure what you mean by "not allowed to use any methods". Without methods you cannot even run a program, because the "thing" with public static void main(String[] args) is itself a method (the main method) and you need it: it is the program's starting point...
But to answer your question:
You have to load your image. A possibility would be to use ImageIO. Then you create a 2D graphics object, and then you can call drawRect() to draw a border rectangle:
BufferedImage bi = // load image
Graphics2D g = bi.createGraphics();
g.drawRect(0, 0, bi.getWidth() - 1, bi.getHeight() - 1);
This short code is just a hint. Try it out and read the documentation for BufferedImage (see here) and for Graphics2D.
Edit: Please notice that this is not quite correct. With the code above you overdraw the outer pixel line of the image. If you don't want to cut any pixels off, you have to scale the image up and draw with bi.getWidth() + 2 and bi.getHeight() + 2: +2 because you need one extra pixel on each side of the image.
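A hedged sketch of that "+2" variant (the bordered variable name and the red border colour are my own choices):
// needs: java.awt.Color, java.awt.Graphics2D, java.awt.image.BufferedImage
BufferedImage bordered = new BufferedImage(
        bi.getWidth() + 2, bi.getHeight() + 2, BufferedImage.TYPE_INT_RGB);
Graphics2D g2 = bordered.createGraphics();
g2.drawImage(bi, 1, 1, null); // original image shifted in by one pixel
g2.setColor(Color.RED);       // assumed border colour
g2.drawRect(0, 0, bordered.getWidth() - 1, bordered.getHeight() - 1);
g2.dispose();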
What order does PixelGrabber put pixels into the array in Java? Does it take the pixels along the width of the image first, or along the height of the image first?
public static int[] convertImgToPixels(Image img, int width, int height) {
int[] pixel = new int[width * height];
PixelGrabber pixels = new PixelGrabber(img, 0, 0, width, height, pixel, 0, width);
try {
pixels.grabPixels();
} catch (InterruptedException e) {
throw new IllegalStateException("Interrupted Waiting for Pixels");
}
if ((pixels.getStatus() & ImageObserver.ABORT) != 0) {
throw new IllegalStateException("Image Fetch Aborted");
}
return pixel;
}
The code example provided by the documentation has the following for loops:
for (int j = 0; j < h; j++) {
for (int i = 0; i < w; i++) {
handlesinglepixel(x+i, y+j, pixels[j * w + i]);
}
}
The access pixels[j * w + i] shows that it walks along each row first, then moves down the columns: it grabs the pixels along the width first.
I'm pretty sure it uses row-major order, but the easiest way to be certain is to actually grab the pixels, set a sequence of them to a particular color (for easy identification) and then save them out to an image. If the pixel strip appears vertical then the order is column major; otherwise it is row major. You can use code like the following
public static Image getImageFromArray(int[] pixels, int width, int height) {
    BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
    // setRGB expects packed ARGB ints in row-major order; note that
    // image.getData() would only return a copy of the raster, so writing
    // into that copy would never change the image
    image.setRGB(0, 0, width, height, pixels, 0, width);
    return image;
}
to convert the int[] to an image.
Also, I use ((DataBufferInt) img.getRaster().getDataBuffer()).getData() to quickly grab the pixels of the image. Any modifications to that int[] will be reflected in the image and vice versa. And that is row major for sure.
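A hedged sketch of that check, building on the convertImgToPixels() method above (the file name and the red marker colour are mine; it needs javax.imageio.ImageIO and java.io.File, and assumes it runs inside a method that declares throws IOException): paint what should be the first row red, convert back to an image, and save it. If the red strip comes out as the top row, the order is row major.
int[] pixel = convertImgToPixels(img, width, height);
for (int i = 0; i < width; i++) {
    pixel[i] = 0xFFFF0000; // opaque red: should be the first row if row major
}
BufferedImage check = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
check.setRGB(0, 0, width, height, pixel, 0, width);
ImageIO.write(check, "png", new File("pixel-order-check.png"));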
I'm wondering how a person could change the alpha transparency of a color, if given the hex color code. For example if given
Color.red.getRGB()
how could I change its alpha to 0x80?
To put this in context, I'm working on a static method to tint a BufferedImage by creating a graphics device from the given image, rendering a half-transparent mask with it, disposing of the graphics, and returning the image. It works, but you have to define the alpha yourself in the given hex color code. I want to give a Color object and a double between 0 and 1.0 to determine the intensity of the tinting. Here's my code so far:
public static Image tintImage(Image loadImg, int color) {
Image gImage = loadImg;
Graphics2D g = gImage.image.createGraphics();
Image image = new Image(new BufferedImage(loadImg.width, loadImg.height, BufferedImage.TYPE_INT_ARGB));
for(int x = 0; x < loadImg.width; x++) {
for(int y = 0; y < loadImg.height; y++) {
if(loadImg.image.getRGB(x, y) >> 24 != 0x00) {
image.image.setRGB(x, y, color);
}
}
}
g.drawImage(image.image, 0, 0, null);
g.dispose();
return gImage;
}
You can construct a new Color from the old one with the lower alpha.
Color cNew = new Color(cOld.getRed(), cOld.getGreen(), cOld.getBlue(), 0x80);
Using the Color(int r, int g, int b, int a) constructor.
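To tie that back to the 0 to 1.0 intensity mentioned in the question, a small hedged helper (the method name is mine) could map the double onto the alpha channel:
// Hypothetical helper: apply a 0.0 - 1.0 tint intensity as the colour's alpha
public static Color withIntensity(Color base, double intensity) {
    int alpha = (int) Math.round(intensity * 255); // 0.5 -> 128 (0x80)
    return new Color(base.getRed(), base.getGreen(), base.getBlue(), alpha);
}
The result of withIntensity(Color.RED, 0.5).getRGB() could then be passed as the colour argument to the tintImage() method above, since Color.getRGB() packs the alpha into bits 24-31.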
I have been working with the Polygon class and trying to set the pixel values inside the polygon to transparent, or remove them altogether if that is possible. However, I have hit a bit of a wall, as I am storing the values as RGB ints and don't know how I would be able to make a pixel transparent/removed via this method.
Additionally, I would also like to do the same thing the other way round: keep the pixels inside the polygon and delete those outside, if possible, so that I am left with only the pixels contained within the polygon. I have searched around for this before, but to no avail.
I did attempt to create an SSCCE for this to make it easier to work with and view for anyone taking the time to help. However, as it's part of a much larger programme that I am working on, creating one is proving to take some time; once I have one working to better demonstrate this problem I will edit this post.
Thank you to anyone taking the time to help me with this problem.
Below is the code I am currently using to save the pixels that are contained within an already specified polygon. It is extremely similar to the way I set the pixels outside the polygon to transparent, only with the if-statement branches swapped around to remove a segment of the image, and with a return of newImage rather than the image-saving code; that version works perfectly. However, when I do it this way, to save the pixels contained in the polygon, it doesn't save for some reason.
public void saveSegment(int tabNum, BufferedImage img) {
segmentation = new GUI.Segmentation();
Polygon p = new Polygon();
Color pixel;
p = createPolygon(segmentation);
int height = img.getHeight();
int width = img.getWidth();
newImage = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
//loop through the image to fill the 2d array up with the segmented pixels
for(int y = 0; y < height; y++) {
for(int x = 0; x < width; x++) {
//If the pixel is inside polygon
if(p.contains(x, y) == true) {
pixel = new Color(img.getRGB(x, y));
//set pixel equal to the RGB value of the pixel being looked at
int r = pixel.getRed(); // red component 0...255
int g = pixel.getGreen(); // green component 0...255
int b = pixel.getBlue(); // blue component 0...255
int a = pixel.getAlpha(); // alpha (transparency) component 0...255
int col = (a << 24) | (r << 16) | (g << 8) | b;
newImage.setRGB(x, y, col);
}
else {
pixel = new Color(img.getRGB(x, y));
int a = 0; // alpha (transparency) component 0...255
int col = (a << 24);
newImage.setRGB(x, y, col);
}
}
}
try {
//then save as image once all in correct order
ImageIO.write(newImage, "bmp", new File("saved-Segment.bmp"));
JOptionPane.showMessageDialog(null, "New image saved successfully");
} catch (IOException e) {
e.printStackTrace();
}
}
An easier way is to use Java2D's clipping capability:
BufferedImage cutHole(BufferedImage image, Polygon holeShape) {
BufferedImage newImage = new BufferedImage(
image.getWidth(), image.getHeight(), image.getType());
Graphics2D g = newImage.createGraphics();
Rectangle entireImage =
new Rectangle(image.getWidth(), image.getHeight());
Area clip = new Area(entireImage);
clip.subtract(new Area(holeShape));
g.clip(clip);
g.drawImage(image, 0, 0, null);
g.dispose();
return newImage;
}
BufferedImage clipToPolygon(BufferedImage image, Polygon polygon) {
BufferedImage newImage = new BufferedImage(
image.getWidth(), image.getHeight(), image.getType());
Graphics2D g = newImage.createGraphics();
g.clip(polygon);
g.drawImage(image, 0, 0, null);
g.dispose();
return newImage;
}
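A hedged usage example (the triangle coordinates, the original variable, and the output file name are arbitrary choices of mine; ImageIO.write declares IOException). Note that for the removed pixels to actually come out transparent, the image type has to have an alpha channel (e.g. TYPE_INT_ARGB); with TYPE_INT_RGB they would simply be black.
Polygon triangle = new Polygon(
        new int[] {50, 150, 100},  // x coordinates
        new int[] {150, 150, 50},  // y coordinates
        3);
BufferedImage withHole = cutHole(original, triangle);
BufferedImage onlyInside = clipToPolygon(original, triangle);
ImageIO.write(onlyInside, "png", new File("clipped.png"));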
I'm attempting to take a picture as input, then manipulate said picture (I specifically want to make it greyscale) and then output the new image. This is a snippet of the code that I'm editing in order to do so, but I'm getting stuck. Any ideas of what I can change or do next? Greatly appreciated!
public boolean recieveFrame (Image frame) {
int width = frame.width();
int height = frame.height();
for (int i = 0; i < width; i++) {
for (int j = 0; j < height; j++) {
Color c1 = frame.get(i, j);
double greyScale = (double) ((Color.red *.3) + (Color.green *.59) + (Color.blue * .11));
Color newGrey = Color.greyScale(greyScale);
frame.set(i, j, newGrey);
}
}
boolean shouldStop = displayImage(frame);
return shouldStop;
}
I'm going to try to stick as close as possible to what you already have. So, I'll assume that you are looking for how to do pixel-level processing on an Image, rather than just looking for a technique that happens to work for converting to greyscale.
The first step is that you need the image to be a BufferedImage. This is what you get by default from ImageIO, but if you have some other type of image, you can create a BufferedImage and paint the other image into it first:
BufferedImage buffer = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
Graphics2D g = buffer.createGraphics();
g.drawImage(image, 0, 0, null);
g.dispose();
Then, you can operate on the pixels like this:
public void makeGrey(BufferedImage image) {
for(int x = 0; x < image.getWidth(); ++x) {
for(int y = 0; y < image.getHeight(); ++y) {
Color c1 = new Color(image.getRGB(x, y));
int grey = (int)(c1.getRed() * 0.3
+ c1.getGreen() * 0.59
+ c1.getBlue() * .11
+ .5);
Color newGrey = new Color(grey, grey, grey);
image.setRGB(x, y, newGrey.getRGB());
}
}
}
Note that this code is horribly slow. A much faster option is to extract all the pixels from the BufferedImage into an int[], operate on that, and then set it back into the image. This uses the other versions of the setRGB()/getRGB() methods that you'll find in the javadoc.
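A hedged sketch of that faster variant (the method name is mine), using the bulk getRGB()/setRGB() overloads:
public static void makeGreyFast(BufferedImage image) {
    int w = image.getWidth();
    int h = image.getHeight();
    // pull all pixels out as packed 0xAARRGGBB ints in one call
    int[] pixels = image.getRGB(0, 0, w, h, null, 0, w);
    for (int i = 0; i < pixels.length; i++) {
        int rgb = pixels[i];
        int r = (rgb >> 16) & 0xFF;
        int g = (rgb >> 8) & 0xFF;
        int b = rgb & 0xFF;
        int grey = (int) (r * 0.3 + g * 0.59 + b * 0.11 + 0.5);
        // keep the original alpha bits, replace the colour with grey
        pixels[i] = (rgb & 0xFF000000) | (grey << 16) | (grey << 8) | grey;
    }
    // push them all back in one call
    image.setRGB(0, 0, w, h, pixels, 0, w);
}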