I am clearly missing an important concept here. I have written code using mouse events to draw a boundary (a polygon) on an existing BufferedImage. Here is the relevant section:
public void paintComponent(Graphics g)
{
super.paintComponent(g); //Paint parent's background
//G3 displays the BufferedImage "Drawing" with each paint
Graphics2D G3 = (Graphics2D)g;
G3.drawImage(this.Drawing, 0, 0, null);
// Note: do not dispose of g/G3 here; it was passed in by Swing and is not owned by this method
}
public void updateDrawing()
{
int x0, y0, x1, y1; // Vertex coordinates
Line2D.Float seg;
// grafix is painting the mouse drawing to the BufferedImage "Drawing"
if(this.pts.size() > 0)
{
for(int ip = 0; ip < pts.size(); ip++)
{
x0 = (int)this.pts.get(ip).x;
y0 = (int)this.pts.get(ip).y;
this.grafix.drawRect(x0 - this.sqw/2, y0 - this.sqh/2, this.sqw, this.sqh);
if (ip > 0)
{
x1 = (int)this.pts.get(ip-1).x;
y1 = (int)this.pts.get(ip-1).y;
this.grafix.drawLine(x1, y1, x0, y0);
seg = new Line2D.Float(x1, y1, x0, y0);
this.segments.add(seg);
}
}
}
repaint();
}
The next two routines are called by the mouse events: Left click gets the next point and right click closes the region.
public void getNextPoint(Point2D p)
{
this.isDrawing = true;
Point2D.Float next = new Point2D.Float();
next.x = (float) p.getX();
next.y = (float) p.getY();
this.pts.add(next);
updateDrawing();
}
public void closeBoundary()
{
//Connects the last point to the first point to close the loop
Point2D.Float next = new Point2D.Float(this.pts.get(0).x, this.pts.get(0).y);
this.pts.add(next);
this.isDrawing = false;
updateDrawing();
}
It all works fine and I can save the image with my drawing on it:
[image with drawing]
The list of vertices (pts) and the line segments (segments) are all that describe the region/shape/polygon.
I wish to extract from the original image only that region enclosed within the boundary. That is, I plan to create a new BufferedImage by moving through all of the pixels, testing to see if they fall within the figure and keep them if they do.
So I want to create an Area from the points and segments I've collected while drawing the shape. Everything says: create an Area variable and call getPathIterator(). But on what shape? My Area variable will be empty. How does the path iterator get access to the points in my list?
I've been all over the literature and this website as well.
I'm missing something.
Thank you haraldK for your suggestion. Before I saw your post, I came to a similar conclusion:
Using the ArrayList of vertices from the paint operation, I populated a Path2D.Float object called "contour" by looping through the points list that was created during the "painting" operation. From this "contour" object, I instantiated an Area called "interferogram". Just to check my work, I created a PathIterator, "PI", from the Area and decomposed "interferogram" into segments, sending the results to the console. The code is shown below:
private void mnuitmKeepInsideActionPerformed(java.awt.event.ActionEvent evt)
{
// Keeps the inner area of interest
// Vertices is the "pts" list from Class MouseDrawing (mask)
// It is already a closed path
ArrayList<Point2D.Float> vertices =
new ArrayList<>(this.mask.getVertices());
this.contour = new Path2D.Float(Path2D.WIND_NON_ZERO);
// Read the vertices into the Path2D variable "contour"
this.contour.moveTo((float)vertices.get(0).getX(),
(float)vertices.get(0).getY()); //Starting location
for(int ivertex = 1; ivertex < vertices.size(); ivertex++)
{
this.contour.lineTo((float)vertices.get(ivertex).getX(),
(float)vertices.get(ivertex).getY());
}
this.interferogram = new Area(this.contour);
PathIterator PI = this.interferogram.getPathIterator(null);
//Test print out the segment types and vertices for debug
float[] p = new float[6];
int icount = 0;
while( !PI.isDone())
{
int type = PI.currentSegment(p);
System.out.print(icount);
System.out.print(" Type " + type);
System.out.print(" X " + p[0]);
System.out.println(" Y " + p[1]);
icount++;
PI.next();
}
BufferedImage masked = Mask(this.image_out, this.interferogram);
// Write image to file for debug
String dir;
dir = System.getProperty("user.dir");
dir = dir + "\\00masked.png";
writeImage(masked, dir, "PNG");
}
Next, I applied the mask to the image testing each pixel for inclusion in the area using the code below:
public BufferedImage Mask(BufferedImage BIM, Area area)
{
/** Loop through the pixels in the image and test each one for inclusion
* within the area.
* Change the colors of those outside
**/
Point2D p = new Point2D.Double(0,0);
// Fill colour for pixels outside the area: (255 << 24) is opaque black (0xFF000000), not white
int rgb = (255 << 24);
for (int row = 0; row < BIM.getHeight(); row++)
{
for (int col = 0; col < BIM.getWidth(); col++)
{
p.setLocation(col, row);
if(!area.contains(p))
{
BIM.setRGB(col, row, rgb);
}
}
}
return BIM;
}
public static BufferedImage deepCopy(BufferedImage B2M)
{
ColorModel cm = B2M.getColorModel();
boolean isAlphaPremultiplied = cm.isAlphaPremultiplied();
WritableRaster raster = B2M.copyData(B2M.getRaster()
.createCompatibleWritableRaster());
return new BufferedImage(cm, raster, isAlphaPremultiplied, null);
}
This worked beautifully (I was surprised!) except for one slight detail: the lines of the area appeared around the outside of the masked image.
In order to remedy this, I copied the original (resized) image before the painting operation. Many thanks to user1050755 (Nov 2014) for the routine deepCopy that I found on this website. Applying my mask to the copied image resulted in the portion of the original image I wanted without the mask lines. The result is shown in the attached picture. I am stoked!
[masked image]
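A rough sketch of the overall order of operations described above (the variable names here are placeholders, not the actual fields in my classes):
// Keep an untouched copy of the resized image BEFORE the boundary is drawn on screen,
// then apply the Area mask to that clean copy so no boundary lines end up in the result.
BufferedImage clean = deepCopy(resized);            // copy made before any mouse drawing
Path2D.Float contour = new Path2D.Float(Path2D.WIND_NON_ZERO);
contour.moveTo(pts.get(0).x, pts.get(0).y);         // pts = the closed list of boundary vertices
for (int i = 1; i < pts.size(); i++)
{
    contour.lineTo(pts.get(i).x, pts.get(i).y);
}
Area interferogram = new Area(contour);
BufferedImage masked = Mask(clean, interferogram);  // pixels outside the Area get overwritten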
Related
The border needs to be made out of the closest pixels of the given image. I saw some code online and came up with the following. What am I doing wrong? I'm new to Java, and I am not allowed to use any methods.
/**
* TODO Method to be done. It contains some code that has to be changed
*
* @param enlargeFactorPercentage the border in percentage
* @param dimAvg the radius in pixels to get the average colour
* of each pixel for the border
*
* @return a new image extended with borders
*/
public static BufferedImage addBorders(BufferedImage image, int enlargeFactorPercentage, int dimAvg) {
// TODO method to be done
int height = image.getHeight();
int width = image.getWidth();
System.out.println("Image height = " + height);
System.out.println("Image width = " + width);
// create new image
BufferedImage bi = new BufferedImage(width, height, image.getType());
// copy image
for (int y = 0; y < height; y++) {
for (int x = 0; x < width; x++) {
int pixelRGB = image.getRGB(x, y);
bi.setRGB(x, y, pixelRGB);
}
}
// draw top and bottom borders
// draw left and right borders
// draw corners
for (int y = 0; y < height; y++) {
for (int x = 0; x < width; x++) {
int pixelRGB = image.getRGB(x, y);
for (enlargeFactorPercentage = 0; enlargeFactorPercentage < 10; enlargeFactorPercentage++){
bi.setRGB(width, enlargeFactorPercentage, pixelRGB * dimAvg);
bi.setRGB(enlargeFactorPercentage, height, pixelRGB * dimAvg);
}
}
}
return bi;
}
I am not allowed to use any methods.
What does that mean? How can you write code if you can't use methods from the API?
int enlargeFactorPercentage
What is that for? To me, enlarge means to make bigger. So if you have a factor of 10 and your image is (100, 100), then the new image would be (110, 110), which means the border would be 5 pixels?
Your code is creating the BufferedImage the same size as the original image. So does that mean you make the border 5 pixels and chop off 5 pixels from the original image?
Without proper requirements we can't help.
@return a new image extended with borders
Since you also have a comment that says "extended", I'm going to assume your requirement is to return the larger image.
So the solution I would use (sketched in code after this list) is to:
create the BufferedImage at the increased size
get the Graphics2D object from the BufferImage
fill the entire BufferedImage with the color you want for the border using the Graphics2D.fillRect(….) method
paint the original image onto the enlarged BufferedImage using the Graphics2D.drawImage(…) method.
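A minimal sketch of those steps, assuming the goal is a solid-colour border; the method body, the even split of the border between the two sides, and the borderColor parameter (which replaces dimAvg, since no pixel averaging is done here) are my own choices:
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public static BufferedImage addBorders(BufferedImage image, int enlargeFactorPercentage, Color borderColor) {
    // Split the enlargement evenly between the two sides of each axis
    int borderW = image.getWidth() * enlargeFactorPercentage / 100 / 2;
    int borderH = image.getHeight() * enlargeFactorPercentage / 100 / 2;
    BufferedImage enlarged = new BufferedImage(
            image.getWidth() + 2 * borderW,
            image.getHeight() + 2 * borderH,
            BufferedImage.TYPE_INT_RGB);
    Graphics2D g2d = enlarged.createGraphics();
    g2d.setColor(borderColor);
    g2d.fillRect(0, 0, enlarged.getWidth(), enlarged.getHeight()); // border colour everywhere
    g2d.drawImage(image, borderW, borderH, null);                  // original image drawn on top, centered
    g2d.dispose();
    return enlarged;
}
For a 100×100 image and enlargeFactorPercentage = 10 this produces a 110×110 image with a 5-pixel border on each side, matching the interpretation above.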
Hello and welcome to Stack Overflow!
Not sure what you mean by "not allowed to use any methods". Without methods you cannot even run a program, because public static void main(String[] args) is itself a method (the main method), and you need it as the program's starting point...
But to answer your question:
You have to load your image. A possibility would be to use ImageIO. Then you create a Graphics2D object and call drawRect() to draw a border rectangle:
BufferedImage bi = // load image, e.g. with ImageIO.read(...)
Graphics2D g = bi.createGraphics();
g.setColor(Color.BLACK); // or whatever border colour you want
g.drawRect(0, 0, bi.getWidth() - 1, bi.getHeight() - 1);
g.dispose();
This short code is just a hint. Try it out and read the documentation for BufferedImage and Graphics2D.
Edit: Please note that this is not quite correct. With the code above you overdraw the outermost pixel line of the image. If you don't want to cut any pixels off, you have to scale the image up and draw with bi.getHeight()+2 and bi.getWidth()+2: the +2 is because you need one extra pixel on each side of the image.
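A hedged sketch of what that edit describes (my own naming; assumes bi has already been loaded as above):
// Enlarge the canvas by one pixel on each side so the border does not overdraw the image itself
BufferedImage bordered = new BufferedImage(bi.getWidth() + 2, bi.getHeight() + 2, BufferedImage.TYPE_INT_ARGB);
Graphics2D g2 = bordered.createGraphics();
g2.drawImage(bi, 1, 1, null);                                          // original pixels untouched
g2.setColor(java.awt.Color.BLACK);
g2.drawRect(0, 0, bordered.getWidth() - 1, bordered.getHeight() - 1);  // 1-pixel border around it
g2.dispose();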
I have a picture and use a flashlight-type effect so that only the area the mouse is hovering over is visible. That part of the code works, but now I want to use if/else statements to zoom in on the selected area on a click, and then click again to zoom back out. Any other way to zoom in on a specific area and then back out of that area also helps. Really, any help will be appreciated!
PImage ispy;
void setup () {
size(1024,768);
ispy = loadImage("ispy2.jpeg");
}
void draw () {
loadPixels();
ispy.loadPixels();
for (int x = 0; x < width; x++) {
for (int y = 0; y < height; y++) {
int loc = x+y*width;
float r = red(ispy.pixels[loc]);
float g = green(ispy.pixels[loc]);
float b = blue(ispy.pixels[loc]);
float d = dist(mouseX, mouseY, x, y); //
float factor = map(d, 0, 200, 2, 0);
pixels[loc] = color(r*factor, g*factor, b*factor);
}
}
updatePixels();
}
Here is my interpretation of what you are talking about. We store an isClicked boolean to track whether we should zoom in or not. When we are going to draw the image, we translate() to the mouse, then we scale(), then we translate() back by the same amount we moved, but in the opposite direction. This performs the scale transform around the mouse position.
One thing that I couldn't find a way around was your way of updating the pixels directly from the image for the flashlight effect. What the program does instead is use your method to build a mask image and apply it to a PGraphics object. Another thing I noticed is that when rendering straight to the screen, there is considerable lag from the scaling. Instead, I have moved the drawing into a PGraphics object, which improves the performance.
In the end, to render, the program is drawing everything on the PGraphics object, then applying the mask to that object to get the flashlight effect.
Here is the code that I have:
PImage ispy, distMask;
boolean isClicked = false;
PGraphics renderer;
void createDistanceMask(PImage distMask){ //Takes image and changes its pixels to "alpha" for the PGraphics renderer
distMask.loadPixels();
for (int x = 0; x < width; x++) {
for (int y = 0; y < height; y++) {
int loc = x+(height-y-1)*width;
float d = dist(mouseX, mouseY, x, y); //
int factor = int(map(d, 0, 200, 400, 0)); //Pixel data will be between 0 and 255, must scale down later.
if (factor > 255)
factor = 255;
distMask.pixels[loc] = color(factor,factor,factor);
}
}
distMask.updatePixels();
}
void setup () {
size(1024,768, P2D);
ispy = loadImage("ispy2.jpeg");
distMask = new PImage(width,height);
renderer = createGraphics(width,height,P2D);
mouseX = width/2; //Not necessary, but will have black screen until mouse is moved
mouseY = height/2;
}
void draw () {
background(0);
pushMatrix();
createDistanceMask(distMask);
renderer.beginDraw(); //Starts processing stuff into PGraphics object
renderer.background(0);
if(isClicked){ //This is to get the zoom effect
renderer.translate(mouseX, mouseY);
renderer.scale(2);
renderer.translate(-mouseX, -mouseY);
}
renderer.image(ispy,0,0); //Render Image
renderer.endDraw();
renderer.mask(distMask); //Apply Distance mask for flashlight effect
image(renderer,0,0); //Draw renderer result to screen
popMatrix();
}
void mouseClicked(){
isClicked = !isClicked;
}
In my comment, I asked about having the screen move to the mouse, which is what this is doing. If you want to "freeze" the screen in one position, what you can do is store a lastMouseClickPosition PVector or simply just ints. Then, when translating, translate to the position instead of the PVector.
Here's the code that would change:
PVector lastClickPos = new PVector(); //Make the position
if(isClicked){ //When Rendering
renderer.translate(lastClickPos.x, lastClickPos.y);
renderer.scale(scalingFactor);
renderer.translate(-lastClickPos.x, -lastClickPos.y);
}
void mouseClicked(){ //At the bottom
isClicked = !isClicked;
lastClickPos.set(mouseX, mouseY);
}
I am modifying the code of an image renderer that builds its output image from a Color[] array; my code simply updates that array with additional content right before saving, i.e. at the point where the original image is actually prepared (all pixel positions in that Color[] array are filled with RGB values, ready for the final save).
The reason I am doing this is to be able to insert text describing my render without needing an external graphics program (I want it all in one go, without another external app).
To do that - since I have no access to the original prepared BufferedImage (but I do have access to the Color[] it is created from) - I had to write my own class method that:
convert that original Color[] to my own temporary BufferedImage
update that temp. BufferedImage with my stuff via Graphics2D (adding some text to image)
convert my result (temp. BufferedImage with Graphics2D) back to Color[]
send that final Color[] back to the original image rendering method, which then actually produces the final image that is rendered out and saved as PNG
Now everything works just as I expected, except for one really annoying thing that I cannot get rid of: my updated image looks very bleached/pale (almost no depth or shadows present) compared to the original un-watermarked version...
To me it simply looks like something goes wrong after the image-to-Color[] conversion (using @stacker's solution from Converting Image to Color array), so the colours become pale, and I have no clue why.
Here is the main part of my code that is in question:
BufferedImage sourceImage = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
// Color[] to BufferedImage
for (int k = 0; k < multiArrayList.size(); k++) {
// PREPARE...
int x = (int) multiArrayList.get(k)[0];
int y = (int) multiArrayList.get(k)[1];
int w = (int) multiArrayList.get(k)[2];
int h = (int) multiArrayList.get(k)[3];
Color[] data = (Color[]) multiArrayList.get(k)[4];
int border = BORDERS[k % BORDERS.length];
for (int by = 0; by < h; by++) {
for (int bx = 0; bx < w; bx++) {
if (bx == 0 || bx == w - 1) {
if (5 * by < h || 5 * (h - by - 1) < h) {
sourceImage.setRGB(x + bx, y + by, border);
}
} else if (by == 0 || by == h - 1) {
if (5 * bx < w || 5 * (w - bx - 1) < w) {
sourceImage.setRGB(x + bx, y + by, border);
}
}
}
}
// UPDATE...
for (int j = 0, index = 0; j < h; j++) {
for (int i = 0; i < w; i++, index++) {
sourceImage.setRGB(x + i, y + j, data[index].copy().toNonLinear().toRGB());
}
}
}
Graphics2D g2d = (Graphics2D) sourceImage.getGraphics();
// paints the textual watermark
drawString(g2d, text, centerX, centerY, sourceImage.getWidth());
// when saved to png at this point ALL IS JUST FINE
ImageIO.write(sourceImage, "png", new File(imageSavePath));
g2d.dispose();
// BufferedImage to Color array
int[] dt = ((DataBufferInt) sourceImage.getRaster().getDataBuffer()).getData();
bucketFull = new Color[dt.length];
for (int i = 0; i < dt.length; i++) {
bucketFull[i] = new Color(dt[i]);
}
// update and repaint output image - THIS OUTPUT IS ALREADY BLEACHED/PALE
d.ip(0, 0, width, height, renderThreads.length + 1);
d.iu(0, 0, width, height, bucketFull);
// reset objects
g2d = null;
sourceImage = null;
bucketFull = null;
multiArrayList = new ArrayList<>();
I have tested (by saving to another .png file right after the Graphics2D addition) that before it goes through that second conversion the image looks absolutely fine, 1:1 with the original, including my text on it.
But as I said, when it is sent back for rendering it becomes bleached/pale, and that is the problem I am trying to solve.
BTW, I first thought it might be the Graphics2D addition, so I tried it without that step, but the result was the same bleached/pale version.
Although my process and code are completely different, the output image suffers in exactly the same way as in this (still unsolved) topic: BufferedImage color saturation
Here are my 2 examples - 1st ORIGINAL, 2nd UPDATED (bleached/pale)
As suspected, the problem is that you convert the color values from linear RGB to gamma-corrected/sRGB values when setting the RGB values to the BufferedImage, but the reverse transformation (back to linear RGB) is not done when you put the values back into the Color array.
Either change the line (inside the double for loop):
sourceImage.setRGB(x + i, y + j, data[index].copy().toNonLinear().toRGB());
to
sourceImage.setRGB(x + i, y + j, data[index].toRGB());
(you don't need the copy() any more, as you no longer mutate the values, using toNonLinear()).
This avoids the conversion altogether.
... or you could probably also change the line setting the values back, from:
bucketFull[i] = new Color(dt[i]);
to
bucketFull[i] = new Color(dt[i]).toLinear();
Arguably, this is more "correct" (as AWT treats the values as being in the sRGB color space, regardless), but I believe the first version is faster, and the difference in color is negligible. So I'd probably try the first suggested fix first, and use that unless you experience colors that are off.
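To see why a missing inverse transform reads as "bleached", here is a rough standalone illustration using the standard sRGB encode/decode formulas (this is just the math, not the renderer's own Color class):
public class GammaDemo {
    // Standard sRGB encode/decode for a component value in the range 0..1
    static double linearToSrgb(double c) {
        return c <= 0.0031308 ? 12.92 * c : 1.055 * Math.pow(c, 1 / 2.4) - 0.055;
    }
    static double srgbToLinear(double c) {
        return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
    }
    public static void main(String[] args) {
        double linear = 0.2;
        double encodedOnce = linearToSrgb(linear);          // ~0.48
        // If that encoded value is fed back in as if it were linear (the missing toLinear()),
        // the next pass encodes it again and every mid-tone drifts toward white:
        double encodedTwice = linearToSrgb(encodedOnce);    // ~0.72
        System.out.println(encodedOnce + " -> " + encodedTwice);
    }
}
A mid-grey that should end up around 0.48 in the file instead ends up around 0.72, which is exactly the washed-out look described above.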
I'm making a game in Java; it's an RPG. However, just rendering the map already makes the game slow.
The map is made in TiledMap Editor, so it is an XML file that is read and loaded into an ArrayList. My PC is a 3.0 GHz dual-core with 4 GB RAM and 1 GB of video memory.
The rendering is done as follows:
//method of tileset class
public void loadTileset(){
positions = new int[1 + tilesX * tilesY][2];
int yy = 0;
int xx = 0;
int index = 0;
// save the initial x and y point of each tile in an array named positions
// positions[tileNumber] [0] - X position
// positions[tileNumber] [1] - Y position
for(int i = 1 ; i < positions.length; i++){
if(index == tilesX ){
yy += tileHeight;
xx = 0;
index = 0;
}
positions[i][0] = xx;
positions[i][1] = yy;
xx += tileWidth;
index++;
}
}
//method of map class
public void draw(Graphics2D screen){
//x and y position of each tile on the screen
int x = 0; int y = 0;
for(int j = 0; j < 20 ; j++){
for(int i = initialTile ; i < initialTile + quantTiles ; i++){
int tile = map[j][i];
if(tile != 0){
screen.drawImage(tileSet.getTileImage().getSubimage(tileSet.getTileX(tile), tileSet.getTileY(tile),tileSet.getTileWidth(), tileSet.getTileHeight()),x,y,null);
}
x += tileSet.getTileWidth();
}
x = 0;
y += tileSet.getTileHeight();
}
}
Am I doing something wrong?
Note: I'm new to the forum and to make matters worse I do not understand very much English, so excuse any mistake.
First of all, you should not create the subimages for the tiles during each call. Strictly speaking, you should not call getSubimage at all for images that you want to paint: It will make the image "unmanaged", and this can degrade rendering performance by an order of magnitude. You should only call getSubimage for images that you do not want to render - for example, when you are initially creating individual images for the tiles.
You obviously already have a TileSet class. You could add a bit of functionality to this class so that you can directly access images for the tiles.
Your current code looks like this:
screen.drawImage(
tileSet.getTileImage().getSubimage(
tileSet.getTileX(tile),
tileSet.getTileY(tile),
tileSet.getTileWidth(),
tileSet.getTileHeight()),
x,y,null);
You could change it to look like this:
screen.drawImage(tileSet.getTileImage(tile), x,y,null);
The getTileImage(int tile) method suggested here could then obtain tiles that have been stored internally.
I'll sketch a few lines of code off the top of my head; you'll probably be able to transfer this into your TileSet class:
class TileSet
{
private Map<Integer, BufferedImage> tileImages;
TileSet()
{
....
prepareTileImages();
}
private void prepareTileImages()
{
tileImages = new HashMap<Integer, BufferedImage>();
for (int tile : allPossibleTileValuesThatMayBeInTheMap)
{
// These are the tiles that you originally rendered
// in your "draw"-Method
BufferedImage image =
getTileImage().getSubimage(
getTileX(tile),
getTileY(tile),
getTileWidth(),
getTileHeight());
// Create a new, managed copy of the image,
// and store it in the map
BufferedImage managedImage = convertToARGB(image);
tileImages.put(tile, managedImage);
}
}
private static BufferedImage convertToARGB(BufferedImage image)
{
BufferedImage newImage = new BufferedImage(
image.getWidth(), image.getHeight(),
BufferedImage.TYPE_INT_ARGB);
Graphics2D g = newImage.createGraphics();
g.drawImage(image, 0, 0, null);
g.dispose();
return newImage;
}
// This is the new method: For a given "tile" value
// that you found at map[x][y], this returns the
// appropriate tile:
public BufferedImage getTileImage(int tile)
{
return tileImages.get(tile);
}
}
I'm new to Android programming and now I'm trying to make a simple Sea Battle game for one player. Ships are placed, the player taps the field and sees whether the shot was a hit or not.
Basically, the field looks like this:
The code is:
public void onDraw(Canvas canvas) {
if (getWidth() > getHeight()) {
rebro = getHeight();
} else {
rebro = getWidth(); // the smaller size of screen is "rebro"
}
rebro_piece = rebro / 10; // divide the screen by 10 (to get 10x10 field)
Paint background = new Paint();
background.setColor(getResources().getColor(R.color.game_background));
canvas.drawRect(0, 0, rebro, rebro, background); // draw background
Paint divider = new Paint();
divider.setColor(getResources().getColor(R.color.divider_black));
// drawing divider lines
for (int i=0; i<11; i++) {
canvas.drawLine(0, i*rebro_piece, rebro, i*rebro_piece, divider); // horizontal
canvas.drawLine(i*rebro_piece, 0, i*rebro_piece, rebro, divider); // vertical
}
canvas.drawLine(rebro-1, 0, rebro-1, rebro, divider);
}
That's how I make the "field."
In another class I have a method that collects the x and y numbers of a 10×10 array that represents where the ships are placed. For debugging, I need to draw them on my field. The ship coordinates are retrieved in a loop.
So I wrote a drawShip(int x, int y) method.
On Stack Overflow I found a question about "Why can't I paint outside onDraw()?" and I've changed my method to this:
public void drawShip(int x, int y) {
myX = x; //global
myY = y; //global
needToPaintShip = true; //boolean
invalidate(); // refreshing?
needToPaintShip = false;
}
Here needToPaintShip decides whether the redrawing of canvas is needed or not.
Also I've edited the onDraw(Canvas canvas) method:
if(needToPaintShip == true) {
Paint ship = new Paint();
ship.setColor(getResources().getColor(R.color.ship_color));
Log.d(TAG, "onDraw(): rebro_piece = " + rebro_piece + " , myX = "+ myX + " , myY = " + myY); // I only get the last coordinates!
Rect r = new Rect(myX*(rebro_piece),myY*rebro_piece, myX*(rebro_piece+1), myY*(rebro_piece+1));
canvas.drawRect(r, ship);
}
but the result is awful:
Guys, I'm desperate. How can I fix this and make "ships" be drawn on the field?
Why do you set needToPaintShip = false; after calling invalidate()? Don't you need to draw the ship again in subsequent frames?
Also, it seems like this item:
Rect r = new Rect(myX*(rebro_piece),myY*rebro_piece, myX*(rebro_piece+1), myY*(rebro_piece+1));
should probably be:
Rect r = new Rect(myX*(rebro_piece),myY*rebro_piece, (myX+1)*rebro_piece, (myY+1)*rebro_piece));
As for why the ship always appears in the bottom right corner, that depends on what you pass to drawShip(x,y), which isn't shown. Is it possible that you are passing pixel coordinates instead of something in the range [0-10)?
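A hedged sketch of one way to handle both points, assuming this lives inside the same custom View (uses android.graphics.Point/Rect and java.util.ArrayList; the field and variable names are placeholders):
// Keep the ship cells (grid coordinates in the range [0, 10)) and repaint them on
// every onDraw() pass, instead of toggling a flag that only lasts for a single frame.
private final List<Point> ships = new ArrayList<>();

public void drawShip(int x, int y) {
    ships.add(new Point(x, y)); // x and y are grid indices, not pixel coordinates
    invalidate();               // request a redraw; onDraw() paints every stored ship
}

// inside onDraw(Canvas canvas), after the grid lines are drawn:
Paint shipPaint = new Paint();
shipPaint.setColor(getResources().getColor(R.color.ship_color));
for (Point s : ships) {
    Rect r = new Rect(s.x * rebro_piece, s.y * rebro_piece,
            (s.x + 1) * rebro_piece, (s.y + 1) * rebro_piece);
    canvas.drawRect(r, shipPaint);
}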