So, I'm programming a basic TowerDefense game. It's coming together great, but I'm stuck trying to preload images. My predicament is that when I call the paint method of my Canvas class (it extends JPanel), it spawns a thread called "Image Fetcher x", where 'x' is a number starting at 0 and counting up for each instance of the thread. As far as I understand, this thread takes the images from my image variables, locates them on disk, loads them into RAM for my game to display, and then draws them.
This works fine; however, in the middle of the game there are moments where nothing appears, because the draw method I call is attempting to draw an image which hasn't been loaded yet. Gameplay still works, but the visual representation is messed up: the player basically sees a grass block shooting an enemy. Then all of a sudden the image loads, is drawn, and looks normal.
This is fine and all, but I want to be able to preload all the resources (and possibly make a loading screen so the player knows the game is loading) so that the non-loaded images don't destroy the illusion of gameplay - if that makes sense. I have tried using a MediaTracker, but that didn't work, and my research hasn't turned up anything else.
My current code consists of arrays of images from an image map. Each image in an array is drawn on the canvas according to certain identifiers which tell the game which tile/tower to draw. I am using the java.awt.Image class to store my images. So, is there any way to preload my images so that they don't have to load mid-gameplay?
Thanks!
Edit:
Code:
// Load the sprite sheet once, then crop one block-sized tile out of it per cell.
Image sheet = new ImageIcon("res/tileMap.png").getImage();
for (int y = 0; y < tileMap.length; y++) {
    for (int x = 0; x < tileMap[y].length; x++) {
        tileMap[y][x] = createImage(new FilteredImageSource(sheet.getSource(),
                new CropImageFilter(x * Room.blockSize, y * Room.blockSize,
                        Room.blockSize, Room.blockSize)));
    }
}

@Override
public void paintComponent(Graphics g) {
    super.paintComponent(g);
    g.drawImage(tileMap[1][2], 5, 7, null);
}
That's a simplified version; I have about 1500 lines, so I don't think you want to see all that, lol. Basically, I just create the image variables and then draw them. At the point of drawing, an Image Fetcher thread is initiated, loads the image, and then disappears once the image has finished loading. While the image is loading (and the Image Fetcher thread is active), the drawImage method draws nothing, since the image isn't loaded into RAM yet. However, once it's done loading, drawImage actually draws the correct image and the Image Fetcher thread is no longer active. Is there a way to kick off the image-fetching process so that this all happens before the game starts?
I've also tried just drawing everything at the beginning, but that didn't work since the program runs too fast for the Image Fetchers to load every image before the user starts playing. The Image Fetchers are complicated; I just need to know how to preload images into RAM.
Related
I want to capture the screen. Libgdx provides some functions defined in the ScreenUtils class.
For instance final Pixmap pixmap = ScreenUtils.getFrameBufferPixmap(x, y, w, h);
My problem is that it takes about 1-3 seconds to get that information. During that time the game loop is blocked and the user will perceive it as lag.
Internally, the ScreenUtils class uses Gdx.gl.glReadPixels. If I run the ScreenUtils.getFrameBufferPixmap method in a thread, there is no lag, but it captures the screen at the wrong moment, which is logical since the game loop keeps running and changing things.
Is it somehow possible to copy the GL context or save it, and generate the Pixmap at a later point in time with the help of a thread? Of course the copy/save operation would have to be done instantly, otherwise I don't gain anything.
I use ShapeRenderer to render my stuff onto the screen.
After a little research I found a faster alternative to glReadPixels - the Pixel Buffer Object, which is available since Android 4.3 - https://vec.io/posts/faster-alternatives-to-glreadpixels-and-glteximage2d-in-opengl-es
It is not possible to call glReadPixels on a thread other than the render thread - https://stackoverflow.com/a/19975386/2158970
I have found a solution, which works in my case since I only have minor graphics/shapes. Basically, I copy all the objects which are drawn to the view, as described here. In a background thread I generate a Bitmap programmatically and use the information stored in my objects to draw onto the bitmap.
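A rough sketch of that idea, assuming the drawn shapes can be reduced to simple value objects; Stroke, strokes, width and height are hypothetical stand-ins for whatever your render code keeps:
// Copy the drawable state on the render thread; plain data only, no GL objects.
final List<Stroke> snapshot = new ArrayList<Stroke>(strokes);

new Thread(new Runnable() {
    @Override
    public void run() {
        // Recreate the picture on the CPU; the GL context is never touched here.
        Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        Canvas canvas = new Canvas(bitmap);
        Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
        for (Stroke s : snapshot) {
            paint.setColor(s.color);
            canvas.drawLine(s.x1, s.y1, s.x2, s.y2, paint);
        }
        // bitmap can now be compressed or saved without blocking the game loop
    }
}).start();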
I'm developing an Android game using Java, and I'm currently trying to figure out an efficient way of rendering the necessary textures. Suppose you have a grid, similar to a checkers board layout, and tiles to fill that grid, one per square on the board. That is the concept of what will be displayed. Currently, I am drawing each tile one by one. All of the texture loading is done upon creation, and only once, not upon drawing.
Now, for what I want to do. I've noticed that drawing everything one by one, although fast for what I'm doing, can be glitchy. In my game, the user has the ability to drag the "board" to view different areas. Right now I only draw the visible tiles, based on the location of the top-left visible tile. As I said, it works quite fast, but once the user starts interacting more or dragging faster, the rendering starts to struggle and isn't as fast as it should be. This causes small separations between the tiles - not large, just large enough to be noticeable.
What I want to do is place each tile's texture at its location in the grid, thus creating a new texture containing the viewable area, and then render that entire area instead of rendering each tile separately. I've done a lot of research and already looked at many questions, but I still haven't found something that helps my cause. I've read that rendering to a texture using a framebuffer may help, but I haven't found any easy-to-follow tutorials or examples, just a lot of "here's the code, no explanation" or "here's something similar to what you want, but it uses different things." So, if someone could point me towards a good tutorial/example, or post a valuable answer, I would be very grateful. I'm avoiding OpenGL ES 2.0 because I want my game to be compatible with many devices, and for what I'm doing, 2.0 is not necessary.
For a quick summary of what my code does for further explanation:
for(go through visible rows){
    for(go through visible columns){
        drawTile(); //Does the texture binding and drawing for each tile
    }
}
What I want:
for(go through visible rows){
    for(go through visible columns){
        loadTileTextureIntoGridTexture();
        //I want it to combine the textures into one texture
    }
}
drawGridTexture();
Doing it the second way will only have one whole texture to render as opposed to visibleRows*visibleColumns textures to render.
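In case it helps frame the research, here is a rough, untested sketch of the render-to-texture idea using the OES framebuffer extension available on many OpenGL ES 1.x devices; gridTextureId and the tile-drawing step are placeholders, not code from the question:
// Draw the visible tiles once into an offscreen texture, then draw that
// single texture every frame until the viewable area changes.
GL11ExtensionPack gl11ep = (GL11ExtensionPack) gl;

int[] fbo = new int[1];
gl11ep.glGenFramebuffersOES(1, fbo, 0);
gl11ep.glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, fbo[0]);

// gridTextureId is assumed to be an empty, already-allocated texture large
// enough to hold the whole viewable area (power-of-two dimensions).
gl11ep.glFramebufferTexture2DOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES,
        GL11ExtensionPack.GL_COLOR_ATTACHMENT0_OES,
        GL10.GL_TEXTURE_2D, gridTextureId, 0);

if (gl11ep.glCheckFramebufferStatusOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES)
        == GL11ExtensionPack.GL_FRAMEBUFFER_COMPLETE_OES) {
    // draw every visible tile here, exactly as drawTile() does now;
    // the output lands in gridTextureId instead of on screen
}

// Back to the default framebuffer; afterwards, render gridTextureId as one quad.
gl11ep.glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, 0);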
I'm taking the AP computer science class at my school. They never taught us about GUIs because the AP test didn't require you to know how to make one.
For our final project we wanted to make something like Tron (the game where bikes move around an arena, creating walls of light behind them in an attempt to crash the other player). Before we continue, I just want to make sure we are going in the right direction. Should we use ImageIcon for the players, or maybe something else?
We still have a lot to learn, but I thought this would be a good thing to start with. The reason I'm asking is that I'm not sure if we would be able to move them without opening another window every time we want to move something.
This is probably a matter of opinion, but personally, I would use BufferedImage as the primary image container.
The main reasons are:
They are easy to paint and easy to manipulate, you can actually draw onto them if you need to.
Loading a BufferedImage is done through ImageIO.read, which guarantees that the entire image is loaded before the method returns and throws an IOException if it can't read the image for some reason. This is better than ImageIcon, which loads the image in a background thread and doesn't report errors if the image fails to load.
Movement or placement of the images would typically be done by painting them onto an output Graphics context. This is usually done by overriding the paintComponent method of something that extends JComponent and using the passed Graphics context with Graphics#drawImage.
Have a look at Performing Custom Painting, 2D Graphics and Reading/Loading an Image for more details
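To make that concrete, a minimal sketch of the approach, assuming a sprite stored at res/bike.png; the path and the field names are just placeholders:
import java.awt.Graphics;
import java.awt.image.BufferedImage;
import java.io.IOException;
import javax.imageio.ImageIO;
import javax.swing.JPanel;

public class GamePanel extends JPanel {

    private BufferedImage bike;
    private int playerX = 50, playerY = 50;

    public GamePanel() {
        try {
            // Blocks until the whole image is in memory; throws if it can't be read.
            bike = ImageIO.read(getClass().getResource("/res/bike.png"));
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        // Move the player by changing playerX/playerY and calling repaint();
        // no new windows are involved.
        g.drawImage(bike, playerX, playerY, this);
    }
}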
I'm working on a painting application using the LibGDX framework, and I am using their FrameBuffer class to merge what the user draws onto a solid texture, which is what they see as their drawing. That aspect is working just fine, however, the area the user can draw on isn't always going to be the same size, and I am having trouble getting it to display properly on resolutions other than that of the entire window.
I have tested this very extensively, and what seems to be happening is the FrameBuffer is creating the texture at the same resolution as the window itself, and then simply stretching or shrinking it to fit the actual area it is meant to be in, which is a very unpleasant effect for any drawing larger or smaller than the window.
I have verified, at every single step of my process, that I am never doing any of this stretching myself and that everything is being drawn how and where it should, with the right dimensions and locations. I've also looked into the FrameBuffer class itself to try and find the answer, but strangely found nothing there either; still, given all the testing I've done, it seems to be the only possible place this issue could be coming from.
I am simply completely out of ideas, having spent a considerable amount of time trying to troubleshoot this problem.
Thank you so much Synthetik for finding the core issue. Here is the proper way to fix the situation you allude to. (I think!)
The way to make the frame buffer produce a correctly scaled texture, regardless of the actual device window size, is to set the projection matrix to the size required, like so:
SpriteBatch batch = new SpriteBatch();
Matrix4 matrix = new Matrix4();
matrix.setToOrtho2D(0, 0, 480,800); // here is the actual size you want
batch.setProjectionMatrix(matrix);
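For context, a hedged sketch of where that projection matrix would be applied when drawing into the FrameBuffer (fbo here is a placeholder for your FrameBuffer instance, and the draw calls are whatever your painting code already does):
fbo.begin();
batch.setProjectionMatrix(matrix); // drawing-area size, not the window size
batch.begin();
// ... draw the user's strokes here ...
batch.end();
fbo.end();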
I believe I've solved my problem, and I will give a very brief overview of what the problem is.
Basically, the cause of this issue lies within the SpriteBatch class. Specifically, assuming I am not using an outdated version of the class, the problem lies on line 181, where the projection matrix is set. The line:
projectionMatrix.setToOrtho2D(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
This causes everything to be drawn, essentially, at the scale of the window/screen and then stretched to fit where it needs to go afterwards. I am not sure if there is a more "proper" way to handle this, but I simply created another method within the SpriteBatch class that lets me call setToOrtho2D again with my own dimensions, and I call that when necessary. Note that it isn't required on every draw or anything like that - only once, or any time the dimensions may change.
I'm developing AppEngine application. One of it's features is splitting an animated .gif image into separate frames. I've searched a lot to find the way how to do it and finally found the solution. Unfortunately the solution is based on ImageReader and I cant use it on the server, because:
javax.imageio.ImageReader is not supported by Google App Engine's Java
runtime environment
Are there any other ways to decode GIF-image without this class?
First, something about frames themselves. There are two ways to interpret splitting an animated .gif image into separate frames. 1) Literally: a frame is a frame in the sense of the animated GIF format. The problem is that the frames which constitute an animated GIF image are related. The disposal method of an animated GIF dictates what to do with the previous frame when drawing the current frame. You can override it: fill with the background color before drawing the new frame, or do whatever you think is appropriate before drawing the new frame. If you think that situation is complicated, what about the transparency of frames? Or the logical position at which each frame is drawn?
If we go along this road, there is no need to use a dedicated ImageReader; just read the relevant parts of the image, copy each frame's data, and save it along with a header and color palette. The consequence is that the resulting images might look weird and meaningless. Look at the example below:
The first frame
The second frame
And the original
You can see the second frame doesn't look so good. The truth is, the second frame is a transparent one which builds on top of the first frame (this animated GIF only contains 2 frames). You are expected to see through the second frame, and together they make up the animation.
Now let's look at the second interpretation of splitting an animated .gif image into separate frames. 2) In this case, a frame is actually a composite built on top of the previous frames, which is what we actually see when viewing an animated GIF. To achieve this, we have to take into account the disposal history of the frame loop, the logical position of each frame, and the transparency of the frames themselves.
Let's see what we get now:
The first frame
The second frame
Now the first frame is the same as in the first situation, but the second frame is constructed on top of the first one and it's not transparent anymore.
In the second case, we do have to decode and encode the frames to achieve the desired result. Besides looking right, another good thing about this is that you can save the resulting images in any format the encoder supports.
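For illustration, a rough sketch of that compositing step in plain Java, assuming the raw frames have already been decoded by some other means; GifFrame, frames, and the logical canvas dimensions are hypothetical placeholders for whatever decoder you end up using:
// Build up the "what the viewer sees" image frame by frame.
BufferedImage composite = new BufferedImage(logicalWidth, logicalHeight, BufferedImage.TYPE_INT_ARGB);
Graphics2D g = composite.createGraphics();
List<BufferedImage> composedFrames = new ArrayList<BufferedImage>();

for (GifFrame frame : frames) {
    // Draw the raw frame at its logical position on top of what is already there.
    g.drawImage(frame.image, frame.x, frame.y, null);

    // Snapshot the composite; this is what a viewer actually shows for this frame.
    BufferedImage copy = new BufferedImage(logicalWidth, logicalHeight, BufferedImage.TYPE_INT_ARGB);
    copy.createGraphics().drawImage(composite, 0, 0, null);
    composedFrames.add(copy);

    // Disposal method 2 (restore to background): clear the frame's region
    // before the next frame is drawn.
    if (frame.disposalMethod == 2) {
        Composite old = g.getComposite();
        g.setComposite(AlphaComposite.Clear);
        g.fillRect(frame.x, frame.y, frame.image.getWidth(), frame.image.getHeight());
        g.setComposite(old);
    }
}
g.dispose();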
The examples in this post were generated by the GIF-related part of iCafe.