Graphics2D multi-composite of an isometric tile map - java

I'm working on a game which uses isometric tile-based maps for planetary surfaces (example). Each tile cell is a small BufferedImage. Day and night cycles are implemented by modifying the pixels of these images (darkening and blue shifting), then a window-light map (another static BufferedImage) is rendered over each building. The proper Z-order is kept by drawing in top-right-to-bottom-left order, and only strips of the tiles are drawn. Unfortunately, this approach practically destroys the acceleration of the images, and rendering becomes very slow when the day-night transition happens at very fast game speed. The current workaround is to cache a number of pre-shaded copies of each tile, at the cost of a huge memory increase.
Can the Java composition modes of Graphics2D be used for this purpose, e.g. draw the normally colored tile, draw a darkened surface over it, then apply the light map? How can I ensure that only the pixels of the normal tile are affected by the recoloring?

The fastest approach seems to me to be something like the following:
1. Render the view with plain images to the target buffer.
2. Render the view with the lightmap images to a second BufferedImage.
3. Draw the second BufferedImage onto the target buffer using a custom CompositeContext.
The custom CompositeContext would first apply the darkening and blue shifting, and then multiply by the lightmap value.
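A rough, untested sketch of what such a composite could look like. The class name, the night-factor convention and the per-channel weights are all made up for illustration, and RGB(A) rasters are assumed; the plain tiles are already in the destination, and the light-map image is drawn over them with this composite installed.

import java.awt.Composite;
import java.awt.CompositeContext;
import java.awt.RenderingHints;
import java.awt.image.ColorModel;
import java.awt.image.Raster;
import java.awt.image.WritableRaster;

public class NightLightComposite implements Composite {

    private final float night; // 0 = full day, 1 = full night (assumed convention)

    public NightLightComposite(float night) {
        this.night = night;
    }

    @Override
    public CompositeContext createContext(ColorModel srcCM, ColorModel dstCM, RenderingHints hints) {
        return new CompositeContext() {
            @Override
            public void compose(Raster src, Raster dstIn, WritableRaster dstOut) {
                int w = Math.min(src.getWidth(), dstIn.getWidth());
                int h = Math.min(src.getHeight(), dstIn.getHeight());
                int[] light = new int[4];
                int[] tile = new int[4];
                for (int y = 0; y < h; y++) {
                    for (int x = 0; x < w; x++) {
                        src.getPixel(x + src.getMinX(), y + src.getMinY(), light);      // light-map pixel
                        dstIn.getPixel(x + dstIn.getMinX(), y + dstIn.getMinY(), tile); // plain tile pixel
                        // Darken red and green more than blue (blue shift),
                        // then multiply by the light-map channel.
                        tile[0] = (int) (tile[0] * (1f - 0.7f * night)) * light[0] / 255;
                        tile[1] = (int) (tile[1] * (1f - 0.6f * night)) * light[1] / 255;
                        tile[2] = (int) (tile[2] * (1f - 0.3f * night)) * light[2] / 255;
                        dstOut.setPixel(x + dstOut.getMinX(), y + dstOut.getMinY(), tile);
                    }
                }
            }

            @Override
            public void dispose() { }
        };
    }
}

Usage would then be roughly g2.setComposite(new NightLightComposite(night)) followed by drawing the light-map image, restoring AlphaComposite.SrcOver afterwards. Keep in mind that custom composites are executed in software, so this trades the shade cache for per-pixel work at draw time.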
Also look into the use of VolatileImage (see this article for example) to leverage hardware acceleration. Furthermore, you can consider using LWJGL (a complete game dev library) or JOGL (Java OpenGL bindings), again to leverage hardware acceleration. If you've never done any 3D programming it may seem daunting or overkill, but it's really quite elegant and straightforward for 2D graphics as well.
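For reference, the usual VolatileImage pattern looks roughly like this (the class and drawScene are placeholders standing in for the tile rendering): the image is created from the component's GraphicsConfiguration, re-validated every frame, and the frame is redone if the contents were lost.

import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.GraphicsConfiguration;
import java.awt.image.VolatileImage;

class AcceleratedBuffer {

    private VolatileImage buffer;

    void paintTo(Graphics screen, GraphicsConfiguration gc, int w, int h) {
        do {
            if (buffer == null || buffer.validate(gc) == VolatileImage.IMAGE_INCOMPATIBLE) {
                buffer = gc.createCompatibleVolatileImage(w, h);
            }
            Graphics2D g2 = buffer.createGraphics();
            drawScene(g2);                   // placeholder for the tile rendering
            g2.dispose();
            screen.drawImage(buffer, 0, 0, null);
        } while (buffer.contentsLost());     // redo the frame if the VRAM copy was lost
    }

    private void drawScene(Graphics2D g2) {
        // project-specific drawing
    }
}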

Related

(Java) Graphics change resolution?

I'm drawing to a Canvas using Graphics through a BufferStrategy with lines such as
g.drawImage(bufferedImage, x, y, null);
I currently have this running undecorated in a JFrame at 1920x1080, matching the resolution of my laptop. I'm curious whether there is any way to alter the resolution of the rendered Graphics, particularly lowering it to increase efficiency/speed, or to fit another differently sized screen. There are many objects being rendered with a camera and the game runs fairly well, but any workable way to alter the resolution would be useful as an option in my settings.
I've researched this and found no good answers. Thank you for your time.
(Resolution changes such as for printing.)
Best to use drawImage with a smaller image and scaled width and height.
Now, you could even render everything into your own BufferedImage (via BufferedImage.createGraphics) and scale it afterwards. Not so nice for text or printing, though.
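A minimal sketch of that off-screen idea, with made-up names and renderScene standing in for your normal drawing code: render at half resolution into a cached BufferedImage, then stretch it over the panel in one drawImage call.

import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import javax.swing.JPanel;

class HalfResPanel extends JPanel {

    private BufferedImage offscreen;

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        int w = Math.max(1, getWidth() / 2);
        int h = Math.max(1, getHeight() / 2);
        if (offscreen == null || offscreen.getWidth() != w || offscreen.getHeight() != h) {
            offscreen = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        }
        Graphics2D og = offscreen.createGraphics();
        og.scale(0.5, 0.5);      // same scene coordinates, half-sized output
        renderScene(og);         // placeholder for your normal drawing code
        og.dispose();
        g.drawImage(offscreen, 0, 0, getWidth(), getHeight(), null); // stretch to full panel size
    }

    private void renderScene(Graphics2D g2) {
        // project-specific drawing
    }
}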
Or use Graphics2D scaling:
For complex rendering:
g.scale(2.0, 2.0);   // everything drawn from here on is doubled in size
... // draw using half-sized images and coordinates
g.scale(0.5, 0.5);   // undo the scaling afterwards
As you might imagine, this probably does not help memory consumption much, apart from the smaller images themselves. At some point all pixels of the image must be held at the device's color depth; a 256-color GIF or a 10 KB JPEG will not help.
The other way around, supporting high resolutions with tight memory, is also possible. There one might use tiled images; see ImageIO.
It is important to prepare the image outside paintComponent/paint.
You might also go for device-compatible images when you make your own BufferedImage, though this is somewhat cumbersome (see GraphicsEnvironment).
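For reference, creating such a device-compatible image is short with the standard GraphicsConfiguration API; sketch below, with width and height being whatever your buffer needs.

import java.awt.GraphicsConfiguration;
import java.awt.GraphicsEnvironment;
import java.awt.Transparency;
import java.awt.image.BufferedImage;

final class CompatibleImages {

    static BufferedImage create(int width, int height) {
        GraphicsConfiguration gc = GraphicsEnvironment.getLocalGraphicsEnvironment()
                .getDefaultScreenDevice()
                .getDefaultConfiguration();
        // Same pixel layout as the screen, so drawImage can skip per-pixel conversion.
        return gc.createCompatibleImage(width, height, Transparency.OPAQUE);
    }
}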

Ray Casting a transparent PNG in Java with LWJGL

I'm making a game and I'd like to implement ray casting for the hero's laser (and other stuff in the future). I have my sprites in a sprite sheet which I bind at the beginning and access when I draw, since each element knows how to draw itself. But the sprite sheet is a PNG, so some elements possess transparency, which works OK in OpenGL. I know each element's position, size, etc., but if some sprites have transparency, the position and size aren't enough for the ray cast to be perfect, since it would only hit the "bounding box". So is there a way to throw a ray using the Bresenham algorithm (I believe it is the lightest way, correct me if I'm wrong) and make it pixel perfect in OpenGL, so that I can acquire the collision point of the ray with the actual non-transparent zone of the first sprite in its way?
There is no easy way to do this. You would have to write a custom collision checker for your ray cast to see whether it passes through or collides with part of the sprite.
However, it might be a better idea to use a smaller bounding box, a circle, or both to represent the sprite. These are much easier and faster to check than testing every pixel of the texture.
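If you do go the per-pixel route, a sketch of such a checker could look like the following. It assumes you keep a CPU-side alpha mask for each sprite (a boolean[row][column], true where the PNG pixel is opaque) decoded once at load time, since reading back the bound OpenGL texture per step would be far too slow; all names here are made up.

import java.awt.Point;

final class PixelRayCast {

    /** Returns the first opaque pixel the ray hits inside the sprite, or null. */
    static Point cast(int x0, int y0, int x1, int y1,
                      boolean[][] mask, int spriteX, int spriteY) {
        int dx = Math.abs(x1 - x0), dy = Math.abs(y1 - y0);
        int sx = x0 < x1 ? 1 : -1, sy = y0 < y1 ? 1 : -1;
        int err = dx - dy;
        int x = x0, y = y0;
        while (true) {
            int lx = x - spriteX, ly = y - spriteY;       // ray point in sprite-local coordinates
            if (ly >= 0 && ly < mask.length && lx >= 0 && lx < mask[0].length
                    && mask[ly][lx]) {
                return new Point(x, y);                   // hit a non-transparent pixel
            }
            if (x == x1 && y == y1) {
                return null;                              // reached the end without a hit
            }
            int e2 = 2 * err;                             // standard Bresenham stepping
            if (e2 > -dy) { err -= dy; x += sx; }
            if (e2 <  dx) { err += dx; y += sy; }
        }
    }
}

You would first clip the ray against the sprite's bounding box and only fall back to this per-pixel walk for the sprites the ray actually crosses.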

Java drawImage interpolation

My question involves the drawImage method in Java Graphics2D (this is for a desktop app, not Android).
The BufferedImage I'd like to draw contains high-resolution binary data: most pixels are black, but there are some sparse green pixels (the green pixels represent data points from an incoming raw data stream). The bitmap is quite large, larger than my typical panel size; I made it large so I could zoom in and out. The problem is that when I zoom out I lose some of my green pixels. As an example, if my image is 1000 pixels and my panel is 250 pixels, only 1 out of every 4 pixels in each direction (X and Y) survives. If I use nearest-neighbor interpolation when I scale, pixels can simply disappear to black. If I use something like bilinear interpolation, my green pixel will get recolored to somewhere between black and green.
I understand all this behavior, but my question is: is there any way to get the behavior I want, which is to make sure that if any pixel is non-black it is drawn at its full intensity? Perhaps something like a "max-hold" interpolation.
I realize I could probably do what I want by drawing shape primitives over a black background, and maybe this is what I'll have to do. But there is a reason I'm using bitmaps (it has to do with the fact that I'm showing the data in a falling spectrogram-type display, and it does have a mode where all the pixels can be colored, not just black and green).
Thanks,
You could look at the implementation of drawImage and override it to get your desired behaviour; however, the core of the scaling probably uses hardware acceleration, so re-implementing it in Java would be really slow.
You could look into JOGL, but my impression is that, if your pixels are really sparse, just drawing the green pixels on a black background (or over an image) would be both easy to code and very fast.
You could even have a heuristic that switches between painting the dots and scaling the image if the number of dots gets too high.
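For completeness, the "max-hold" scaling from the question can also be done on the CPU before the result is handed to drawImage. This is only a sketch with made-up names: for every destination pixel it keeps the source pixel with the strongest green channel in the block it covers, so sparse green points survive the downscale.

import java.awt.image.BufferedImage;

final class MaxHoldScaler {

    static BufferedImage downscale(BufferedImage src, int dstW, int dstH) {
        BufferedImage dst = new BufferedImage(dstW, dstH, BufferedImage.TYPE_INT_RGB);
        int srcW = src.getWidth(), srcH = src.getHeight();
        for (int dy = 0; dy < dstH; dy++) {
            for (int dx = 0; dx < dstW; dx++) {
                // Source block covered by this destination pixel.
                int x0 = dx * srcW / dstW, x1 = Math.max(x0 + 1, (dx + 1) * srcW / dstW);
                int y0 = dy * srcH / dstH, y1 = Math.max(y0 + 1, (dy + 1) * srcH / dstH);
                int best = 0;
                for (int y = y0; y < y1; y++) {
                    for (int x = x0; x < x1; x++) {
                        int rgb = src.getRGB(x, y);
                        // Keep the pixel with the largest green channel
                        // (fits the black/green case in the question).
                        if (((rgb >> 8) & 0xFF) > ((best >> 8) & 0xFF)) {
                            best = rgb;
                        }
                    }
                }
                dst.setRGB(dx, dy, best);
            }
        }
        return dst;
    }
}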

Generating Very Large Images at Runtime with OpenGL and libgdx

I am generating very large hex grids (up to 120k total hexes; at 32px-wide hexes that results in images over 12k pixels wide) and I'm trying to find an efficient way to bind these to OpenGL textures in libgdx. I was thinking of using multiple FBOs and breaking the grid up as necessary into tiles, but I'm not sure how to ensure continuity between the FBOs. I can't start with one massive FBO, because it is backed by a texture and would fail when loaded into video memory. I can't use a standard bitmap on the heap because I need the drawing functionality of an OpenGL surface.
So what I was thinking was I would need to overdraw on the FBOs and somehow pick up on the next FBO exactly where the previous left off. However I'm not sure how to go about this. I'm drawing the hex grid with a series of hexagonal meshes, FYI.
Of course, there's probably some other much simpler and more efficient way to do this that I'm not even thinking of, which is why I pose this question to you fine people!
You have to draw it in pieces. You need to be able to draw your hex grid from an arbitrary position. This means being able to compute which hexes to draw based on a rectangle overlaid over the map. This isn't a hard problem, and I wouldn't worry too much about drawing extra stuff off-screen. You should master this ability to view the hexmap from any position before moving on.
Once you've mastered that, it's really simple.
Draw the top-left corner and store the pixel data. Then move the area you're drawing over exactly one image width. Draw and store that. Move the area over one image width. Draw and store it. Keep doing that until you've covered the entire width.
Move down one image height and repeat the process. Once you've run out of width and height, you're done. Save your mega-huge image.
You don't need FBOs for this. You could draw it to the screen if you wanted. Though if you want maximum performance, I would suggest using FBOs, double-buffering them, and using glReadPixels through a pixel buffer object. That should cut down a lot on latency.
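A sketch of that loop in libGDX terms; renderHexes is a placeholder for your own hex-mesh drawing, and the exact pixel-readback helper varies between libGDX versions (ScreenUtils.getFrameBufferPixmap is the older one).

import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.graphics.Pixmap;
import com.badlogic.gdx.graphics.PixmapIO;
import com.badlogic.gdx.graphics.glutils.FrameBuffer;
import com.badlogic.gdx.utils.ScreenUtils;

final class MapCapture {

    static void capture(int pieceW, int pieceH, int piecesX, int piecesY) {
        FrameBuffer fbo = new FrameBuffer(Pixmap.Format.RGBA8888, pieceW, pieceH, false);
        OrthographicCamera camera = new OrthographicCamera(pieceW, pieceH);
        for (int py = 0; py < piecesY; py++) {
            for (int px = 0; px < piecesX; px++) {
                // Centre the camera on the current piece of the big map.
                camera.position.set(px * pieceW + pieceW / 2f, py * pieceH + pieceH / 2f, 0f);
                camera.update();

                fbo.begin();
                Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
                renderHexes(camera);                        // placeholder: draw the hexes in view
                Pixmap piece = ScreenUtils.getFrameBufferPixmap(0, 0, pieceW, pieceH);
                fbo.end();

                PixmapIO.writePNG(Gdx.files.local("map_" + px + "_" + py + ".png"), piece);
                piece.dispose();
            }
        }
        fbo.dispose();
    }

    private static void renderHexes(OrthographicCamera camera) {
        // project-specific hex-mesh rendering
    }
}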

Using a canvas much larger than the screen

I'm trying to draw a 2D contour plot of some data on Android and I'm wondering what the best approach would be. The whole dataset can be relatively large (2k * 2k points), and zooming and panning inside the plot should be very fast. Most of the time only a small part of the data will be drawn, as the user has zoomed in on the data.
My idea now would be to draw the whole plot onto a large canvas, but clip it to the portion visible on the screen, so that only that part would be really drawn in the end. I find the 2D drawing API of Android somewhat confusing and I'm not sure if this is really a feasible approach and how I would then go about executing it.
So my questions are:
Is it a good idea to draw onto a canvas much larger than the screen and use clipping to display only the relevant part?
How would I create a larger canvas and how would I select which parts should be drawn?
You should start the other way around. Instead of creating a huge canvas you should detect what part of your plot you need to draw and draw only that.
So basically you need some navigation/scrolling and you need to keep the offset from the starting point in memory to calculate where you are. Using the offset you can easily zoom in and out because you just need to scale the plot to the screen.
Is it a good idea to draw onto a canvas much larger than the screen and use clipping to display only the relevant part?
A better question might be: do you have any other options? Some might argue that this is a bad idea, since you're going to keep memory in use that isn't relevant to what's happening on the UI. However, from my experience with the Canvas, I think you'll find this should work out just fine. Now, if you are trying to keep "5 square miles" of canvas in memory, you're definitely going to have to find a better way to manage it.
How would I create a larger canvas and how would I select which parts should be drawn?
I would expect that you will be creating your own "scrolling" method when the user touches the screen by overriding the onTouchEvent method. Basically, you're going to need to keep track of a starting X and Y and track those values as you move the Canvas on screen. To move the Canvas there are a number of built-ins like translate and scale that you can use both to move the Canvas in X and Y and to scale it when the user zooms in or out.
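A bare-bones sketch of that pattern (PlotView and drawPlot are made-up names; pinch-zoom via ScaleGestureDetector is left out): the drag offset is accumulated in onTouchEvent and applied as a canvas translation in onDraw, so only the visible part of the plot is ever drawn.

import android.content.Context;
import android.graphics.Canvas;
import android.view.MotionEvent;
import android.view.View;

public class PlotView extends View {

    private float offsetX, offsetY;     // current pan offset
    private float lastX, lastY;         // last touch position
    private float scale = 1f;           // current zoom level

    public PlotView(Context context) {
        super(context);
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        switch (event.getAction()) {
            case MotionEvent.ACTION_DOWN:
                lastX = event.getX();
                lastY = event.getY();
                return true;
            case MotionEvent.ACTION_MOVE:
                offsetX += event.getX() - lastX;
                offsetY += event.getY() - lastY;
                lastX = event.getX();
                lastY = event.getY();
                invalidate();           // redraw with the new offset
                return true;
        }
        return super.onTouchEvent(event);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        canvas.save();
        canvas.translate(offsetX, offsetY);
        canvas.scale(scale, scale);
        drawPlot(canvas);               // placeholder: draw only what falls in view
        canvas.restore();
    }

    private void drawPlot(Canvas canvas) {
        // project-specific contour drawing
    }
}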
I don't think it is a good idea to draw your 2D contour plot on a big bitmap, because you need vector-type graphics to zoom in and out while keeping it sharp. Only photographic images scale down well; graphs will lose thin lines or come out deformed when scaled down as bitmaps.
The proper way is to do it all mathematically and to calculate which part of the graph should be drawn for required position and zoom. Using anti_alias paint for lines and text, the graph would always come out sharp and good...
When the user zooms out, some items should not be drawn as they could not fit into the screen or would clutter it. So the graph would be always optimised for the zoom level...
