Let's say I have to draw a truck in a game I am making. Which would be better performance-wise?
Drawing the truck with lines, fills, and color changes
Finding or making an image of the truck, drawing it in the game, and moving the image around
Thanks for any input
I suspect that the image would be easier to code.
You'd have to test your two examples to see, but I'd be surprised if there was a measurable difference. All of the pixels on the screen have to be redrawn, no matter which method you choose.
The image would be faster, as all it has to do is copy the pixels over to the buffer, and display them.
Painting the truck would provide greater flexibility over what you can do with it. Ultimately, there will be VERY LITTLE difference, especially if all you're drawing is a few shapes.
I made something similar a few months ago, basically a Bomberman game in Java. It was so much easier to just use Java's built-in Image class. The game was very smooth and lag-free, despite loading a few hundred pictures and displaying ~40 or so at once (and repainting them, including GIFs).
I'll bet the performance will be more or less the same, unless the filling algorithm is slow.
Related
1) Re-Draw Vs Draw
Kind of a philosophical question, but... what is the "correct" or "accepted" way to render a game (2D; I understand how OpenGL perspectives work...) at different resolutions? Should I include separate sizes for my images (like Android APKs) and resize each object individually at draw time on one canvas, or should I draw on a set-resolution drawing canvas, then resize that image onto another display canvas? I'm speaking generally, but if you need me to be specific, I'm using Java to build the engine.
Foreseeable benefits/issues:
#1) Resize at Draw
+ No additional drawing step
+ Sweet resolution
- Possible math/physics/placement issues
- Tons of math each step for scale
- Lots of resources
#2) Resize at Render
+ No additional math; one step
+ One set of images; smaller res. package
+ One set canvas size (easier to do math/phys./placement)
- Additional drawing step
- Poor resolution =(
It would seem that #2 is the obvious choice because of the number of benefits vs issues, but... is it? Is there a standard way to resize 2D games?
2) JOGL + Java2D + Java Swing
Would it be cumbersome to use JOGL, Java2D, and Java Swing at the same time? Would it be worth it to do 2D or layouts in JOGL? Why or why not?
EDIT: Using a BufferedImage to draw on and then rendering the BufferedImage to the size of the panel (respecting aspect ratio) is incredibly inefficient in Swing. Apparently it's better to draw immediately to the panel while resizing each image/element individually. Not my first hypothesis...
EDIT 2: Silly me... just scale and translate the graphics context to the adjusted resolution before any other operations. The performance boost is super-dooper awesome. THIS is the correct answer to the question. DRAW ONCE, to scale/translation. B)
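For anyone wondering what that looks like in practice, here is a minimal Swing sketch of the approach (the logical-resolution constants and the drawWorld method are just illustrative placeholders, not anything from the question):

    import java.awt.*;
    import javax.swing.*;

    // Sketch: scale/translate the Graphics2D once, then draw everything at a
    // fixed logical resolution. LOGICAL_* and drawWorld() are placeholders.
    class ScaledPanel extends JPanel {
        static final int LOGICAL_WIDTH = 800, LOGICAL_HEIGHT = 480;

        @Override
        protected void paintComponent(Graphics g) {
            super.paintComponent(g);
            Graphics2D g2 = (Graphics2D) g;
            double scale = Math.min(getWidth()  / (double) LOGICAL_WIDTH,
                                    getHeight() / (double) LOGICAL_HEIGHT); // keep aspect ratio
            g2.translate((getWidth()  - LOGICAL_WIDTH  * scale) / 2.0,
                         (getHeight() - LOGICAL_HEIGHT * scale) / 2.0);     // center/letterbox
            g2.scale(scale, scale);
            drawWorld(g2); // everything below this point uses logical coordinates
        }

        private void drawWorld(Graphics2D g2) {
            g2.fillRect(100, 100, 200, 120); // placeholder content
        }
    }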
I can't answer your second question fully as I do not have much experience with JOGL or Java2D, but I don't see any reason for them to ever conflict or be cumbersome.
For your first question, I can definitely say that it depends. What's your target audience? Is your game memory-intensive (ever notice that many games have a high-res/low-res option)? Is this a game that will be available for vastly different screen sizes? If so, you might want to provide 2-3 different "packages" of your assets, each scaled to a different size from the original (the largest one). The math to draw the images isn't as much as you think.
In addition:
If you build the game the right way, you won't have to do much math at all. If you have some sort of Camera class that takes care of viewing your GameWorld, then you simply have to scale the Camera's image instead of scaling each image independently.
I'm making a 2D game in Java and one of the main issues causing low FPS (on my slow laptop) is having to re-draw complex structures to a Graphics instance, such as dials with markings.
The dial and its markings will never change unless the window is resized, so I thought it would be a good idea to draw to a BufferedImage and just re-draw the image rather than re-drawing the details. The position of the needle obviously changes, so this can just be drawn on top.
I've never heard about this being done to improve the FPS of 2D games so I'm wondering if it's actually good practice to store a cache of images or if there's a better way to solve this sort of problem? Are there any issues associated with this that I haven't considered?
Caching images isn't a bad idea: you can rely on raster rendering to be pretty well optimised on most any platform. In my experience (which is admittedly mostly on mobile devices where 2D graphics are concerned) the Graphics.drawXXX() methods are often considerably slower than Graphics.drawImage().
In my experience the vast majority of 2D games out there make use of sprites (i.e. images) for rendering just about everything. Often that's true even when the graphics look like they are rendered using primitives!
Another useful technique to think about is not redrawing regions at all unless you really need to!
EDIT:
As others have mentioned, the major tradeoff is that you're going to be using more memory. You're also going to have to make sure you free up those images once you no longer need them.
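Here's a rough sketch of the caching idea from the question, assuming a Swing JPanel (the dial-face drawing and the needle math are purely illustrative):

    import java.awt.*;
    import java.awt.image.BufferedImage;
    import javax.swing.*;

    // Sketch: cache the static dial face in a BufferedImage; redraw only the needle.
    class DialPanel extends JPanel {
        private BufferedImage dialCache; // rebuilt only when the panel is resized
        private double needleAngle;      // updated elsewhere

        private void rebuildCache(int w, int h) {
            dialCache = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
            Graphics2D g2 = dialCache.createGraphics();
            g2.drawOval(0, 0, w - 1, h - 1); // expensive markings would be drawn here
            g2.dispose();
        }

        @Override
        protected void paintComponent(Graphics g) {
            super.paintComponent(g);
            if (dialCache == null || dialCache.getWidth() != getWidth()
                                  || dialCache.getHeight() != getHeight()) {
                rebuildCache(getWidth(), getHeight());
            }
            g.drawImage(dialCache, 0, 0, null); // cheap blit of the static background
            int cx = getWidth() / 2, cy = getHeight() / 2;
            g.drawLine(cx, cy,                  // only the moving needle is redrawn
                       cx + (int) (Math.cos(needleAngle) * cx * 0.9),
                       cy - (int) (Math.sin(needleAngle) * cy * 0.9));
        }
    }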
You're making a trade-off between drawing speed and storage space. Only you can determine which is more important.
You might consider rendering your dials in advance and saving the images as GIF, JPG, or PNG files. You would have to scale these images to your window size before you draw them.
Are you using double buffering for your Graphics panel?
Yes, that is a good practice, and it's done all the time. Drawing to an image first before displaying it on the screen is called double buffering, and that method can be used in different ways according to the needs of the program.
The downside of double buffering is memory, since it takes more memory to store the second image, but that sounds like a trade-off you'll need to make.
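Swing panels are already double-buffered by default, but if you're doing active rendering with plain AWT, a BufferStrategy gives you the same thing explicitly. A minimal sketch (the render method would be called from your own game loop):

    import java.awt.*;
    import java.awt.image.BufferStrategy;

    // Sketch: classic double buffering on an AWT Canvas via BufferStrategy.
    class GameCanvas extends Canvas {
        void render() {
            BufferStrategy bs = getBufferStrategy();
            if (bs == null) {
                createBufferStrategy(2); // two buffers: draw into one, show the other
                return;
            }
            Graphics g = bs.getDrawGraphics();
            try {
                g.clearRect(0, 0, getWidth(), getHeight());
                g.fillRect(50, 50, 100, 100); // draw the whole frame off-screen
            } finally {
                g.dispose();
            }
            bs.show(); // flip: the finished frame appears all at once
        }
    }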
So I'm writing a sort of particle simulator, like a "falling sand game" if you know what that is, and I've kind of hit a roadblock. The way I'm doing this is I have a particle object that, as of now, basically just has a position (int x, int y). The way I'm drawing/moving them is with a thread and the onDraw event for an Android panel. Each time onDraw is called I loop through all the particles, move each down one pixel unless it has hit the bottom, and then draw them. This is pretty smooth until I get to about 200 particles, and then the FPS drops significantly. I know this is computation-heavy the way I'm doing it, there's no debate about that, but is there any way I could do this that allows a lot more particles to be drawn with less lag?
Thanks in advance.
I take it you're using an individual-pixel drawing function for this? That would indeed be slow.
I see a couple ways to improve it. First is to put the pixels into an in-memory bitmap then draw the whole bitmap at the same time. Second, since particles are always just going down one pixel, you can scroll part of the bitmap instead of replotting everything. If Android doesn't have a scroll then just draw the bitmap one pixel down and start a new bitmap for the particles above the scroll. You'll have to fix up the particles on the bottom, but there are fewer of those.
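A rough sketch of that first suggestion on Android (the Particle class with public x/y fields is assumed from the question; everything else here is illustrative): plot every particle into one int[] buffer, push it into a Bitmap, and draw that Bitmap with a single call.

    import android.content.Context;
    import android.graphics.*;
    import android.view.View;
    import java.util.List;

    // Sketch: one pixel buffer + one Bitmap draw per frame instead of per-particle drawing.
    class ParticleView extends View {
        private final List<Particle> particles; // Particle has public int x, y (assumption)
        private int[] pixels;
        private Bitmap frame;

        ParticleView(Context ctx, List<Particle> particles) {
            super(ctx);
            this.particles = particles;
        }

        @Override
        protected void onDraw(Canvas canvas) {
            int w = getWidth(), h = getHeight();
            if (frame == null || frame.getWidth() != w || frame.getHeight() != h) {
                pixels = new int[w * h];
                frame = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
            }
            java.util.Arrays.fill(pixels, Color.BLACK); // clear the buffer
            for (Particle p : particles) {
                if (p.y < h - 1) p.y++;                 // fall one pixel
                pixels[p.y * w + p.x] = Color.WHITE;    // plot into the buffer
            }
            frame.setPixels(pixels, 0, w, 0, 0, w, h);
            canvas.drawBitmap(frame, 0, 0, null);       // one draw call for everything
            invalidate();                               // request the next frame
        }
    }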
I've never done things like this before, but I have done some complex cellular automata. Sorry if this is too vague.
The basic idea here is to mark all particles that should "keep falling" or "not move" and exclude them from complex processing (with a special short/fast processor for the "falling" list - all you need to do is drop each one by a pixel).
The acceleration for non-moving particles - static particles (I'll call them S particles) - is that they don't move. Mark S for all non-moving regions (like a gravity-immune "wall" or "bowl" that a user might make). Mark particles above them S if they are stable: for example, a liquid particle will not move if it has S particles below and to both sides of itself. For something like sand that forms piles, a particle with an S in each of the three cells under it is part of a pile; you'll get nice 45-degree piles this way, and I'm sure you can tweak the rule to make some materials form steeper or shallower piles. Do the S marking bottom-up.
The acceleration for particles with nothing under them is that they are falling - F particles. Particles with an F particle directly under them are also F particles. Mark these bottom-up as well.
Particles marked neither F nor S are complex: they may start falling, stop falling, or roll. Use the slow processor you already have to deal with them; there shouldn't be many.
In the end, most of your particles will be fast ones: those in a pile/lake and those raining down. The leftover particles are those on the edges of slopes, on the tops of lakes, or in other complex positions; there shouldn't be nearly as many of them as fast particles.
Visually mark each kind of particle with some colour, complex particles being bright red. Find cases where it is still slow, and see what other kinds of fast processors you should make. For example, you may find that making lots of sand piles creates lots of red areas along the slopes; you may then want to invest in speeding up "rolling zones" along the slopes of piles.
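For concreteness, here's one very rough sketch of that bottom-up marking pass over a grid (the cell/state arrays and the exact rules are illustrative; adapt them to however you actually store particles):

    // Sketch: one bottom-up pass that classifies each occupied cell.
    // STATIC  = part of a stable pile/wall, skipped entirely next step
    // FALLING = nothing stable underneath, just moved down one pixel
    // COMPLEX = everything else (slope edges etc.), handled by the existing slow code
    static final int EMPTY = 0;
    static final int COMPLEX = 0, STATIC = 1, FALLING = 2;

    void classify(int[][] cell, int[][] state, int w, int h) {
        for (int y = h - 1; y >= 0; y--) {                          // bottom-up so supports are known
            for (int x = 0; x < w; x++) {
                if (cell[x][y] == EMPTY) continue;
                if (y == h - 1) { state[x][y] = STATIC; continue; } // resting on the floor
                if (cell[x][y + 1] == EMPTY || state[x][y + 1] == FALLING) {
                    state[x][y] = FALLING;                          // will simply drop one pixel
                } else if (staticAt(state, x - 1, y + 1, w, h)
                        && staticAt(state, x,     y + 1, w, h)
                        && staticAt(state, x + 1, y + 1, w, h)) {
                    state[x][y] = STATIC;                           // stable 45-degree pile
                } else {
                    state[x][y] = COMPLEX;                          // leave to the slow path
                }
            }
        }
    }

    boolean staticAt(int[][] state, int x, int y, int w, int h) {
        return x < 0 || x >= w || y >= h || state[x][y] == STATIC;  // edges count as support
    }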
Hope it makes sense. Don't forget to come back and edit once you've figured something out!
You may want to look into OpenGL ES hardware acceleration and RenderScript. It doesn't give you a more efficient solution code-wise (see the other answers for that), but it does open up a lot more processing power for you to use. You could even run the entire simulation on the GPU (possibly - I don't know your implementation details).
Edit
Also, if you still decide to do the processing in Java, you should look at Method Profiling in DDMS. This will help you visualize where your performance bottlenecks are.
If you blur your image a bit, you could just move half of the particles at a time, or maybe only a fourth, and draw them all. That would cut the computation, and the user wouldn't see it - they'd still get the feeling that all the particles are moving.
But whatever you choose, I think you should put a strong limit on the particle count; not all users have powerful Android devices.
Regards,
stéphane
I think if particles are close to each other, you can create objects that represent 3 or more particles.
When displaying many particles on screen, grouping grains into sets like this may go unnoticed.
I'm trying to develop a side-scrolling game for Android involving many, many textures, so I was thinking I could create a separate layer, all a single unique color (very similar to a green-screen effect), make it collidable, and make it invisible to the player.
(foreground layer) visual image
(2nd layer) collidable copy of foreground layer with main character
(3rd layer) background image
I'm not sure if this is possible or how to implement it efficiently; the idea just came to me randomly one day.
Future regards, Thanks
I assume your game is entirely 2D, using either bit-blits or quads (two 3D triangles always screen-aligned) as sprites. Over the years there have been lots of schemes for doing collision detection using the actual image data, whether from the background or the sprite definition itself. If you have direct access to video RAM, reading one pixel position can quickly tell if you've collided or not, giving pixel-wise accuracy not possible with something like bounding boxes. However, there are issues greatly complicating this: figuring out what you've collided with, or if your speed lands you many pixels into a graphical object, or if it is thin and you pass through it, or how to determine an angle of deflection, etc.
Using 3D graphics hardware and quads, you could potentially change render states, rendering in monochrome to an off-screen texture, yielding the 2nd collidable layer you described. Yet that texture is then resident in graphics memory, which isn't freely/easily accessible like your system memory is. And getting that data back/forth over the bus is slow. It's also costly, requiring an entire additional render pass (worst case, halving your frame rate) plus you have all that extra graphics RAM used up... all just to do something like collision-detect. Much better schemes exist, especially using data structures.
It's better to use bounding boxes, or even a hierarchy of sub-bounding boxes. After that, you can determine if you've landed on the other side of, say, a sloped line, requiring only division/addition operations. Your game already manages all the sprites you're moving, so integrate some data structures to help your collision detection. For instance, I just suggested in another thread the use of linked lists to limit the objects you must collision-detect against one another.
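For reference, the basic bounding-box test itself is only four comparisons; a small sketch with a hypothetical Box type:

    // Sketch: axis-aligned bounding-box overlap test.
    class Box {
        float x, y, w, h;

        boolean intersects(Box o) {
            return x < o.x + o.w && o.x < x + w   // overlap on the x axis
                && y < o.y + o.h && o.y < y + h;  // overlap on the y axis
        }
    }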
Ideas like yours might not always work, but your continual creative thinking will lead to ones that do. Sometimes you just have to try coding them to find out!
For my internship on Brain-Computer Interfacing I need to generate some very fast flickering squares on a CRT monitor (flickering = alternating between two colors). The monitor's refresh rate is 85Hz and we would like this to be the bottleneck, which means that repainting all squares can take at most 1000/85 = 11ms.
My language of preference for GUI/graphics programming is Java, so I tried making a prototype with AWT, because it's synchronous (unlike Swing). I now seem to have two problems: the first is that time measurements show that repainting even 9 squares simply takes too long. My algorithm takes the desired frequency, calculates in advance the times at which the system should repaint, and then uses a loop (with no sleep/wait delay) that checks every time whether the next 'time' has been reached and, if so, loops through all the squares to repaint them. The way I implemented it now is that the squares are Panels with background color A, contained in another Panel with background color B, and the flickering happens by changing the Panels' visibility. I figured that this would be faster than one Panel that has to draw Rectangles all the time.
I don't have a decent profiling tool (can't get Eclipse TPTP or NetBeans profiler to work) so I can't be sure, but I have the feeling that the bottleneck is actually not in the repainting, but in the looping (with conditional checking etc.). Can you recommend anything about what I should do?
The second problem is that it seems like the squares are rendered top-to-bottom. It's like they unroll really fast, but still visibly. This is unacceptable. What I'm wondering though, is what causes this. Is it Java/AWT, or Windows, or just me writing a slow algorithm?
Can you recommend some things for me to try? I prefer to use Java, but I will use C (or something) if I must.
I would avoid any kind of high-level "components", like JPanels and the like. Try getting a Graphics2D representing the window's contents, and use its fillRect() method.
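Something along these lines, as a sketch (the phase flag and squares array are my own assumptions; the timing loop and how you obtain the Graphics2D are up to you):

    // Sketch: fill the flicker squares directly with Graphics2D.fillRect, toggling
    // between two colors each frame; no Panel components involved.
    void paintSquares(Graphics2D g2, boolean phase, Rectangle[] squares) {
        g2.setColor(phase ? Color.WHITE : Color.BLACK);
        for (Rectangle r : squares) {
            g2.fillRect(r.x, r.y, r.width, r.height); // one fill per square
        }
    }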
Failing that, you could probably do this easy enough in C and OpenGL. rasonly.c is a standard template program that sets up OpenGL to work as a "rasterizer" only, i.e. 2D mode. Using this as a starting point, you should be able to get something running that draws your desired "squares" without too much trouble.
You don't describe your desired scene very well; it sounds from that equation as if you want to draw 100 squares, each having a different color. For maximum performance in OpenGL, you should draw all squares of the same color together, to minimize the "state changes" between drawing calls. This is probably a purely theoretical point though, as drawing 100 2D squares at 85 Hz really shouldn't tax OpenGL.
UPDATE: Oh, so it's been a bunch of years, and nowadays you probably need to take the above with a grain of salt and read some modern tutorial. Things have changed. Look up the Vulkan API.
(I remember a demonstration of this using the BBC Micro and palette switching, though that would be 50 fps rather than 85, as it was a British domestic TV.)
I'd switch to JOGL and use display lists. They get very high FPS rates in Java.
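Roughly, that looks like this inside a JOGL GLEventListener (package names vary between JOGL versions, and display lists are long deprecated in modern OpenGL, so treat this purely as a sketch):

    // Sketch: compile the static square geometry into a display list once (init),
    // then replay it every frame (display).
    private int listId;

    public void init(GLAutoDrawable drawable) {
        GL2 gl = drawable.getGL().getGL2();
        listId = gl.glGenLists(1);
        gl.glNewList(listId, GL2.GL_COMPILE);
        gl.glBegin(GL2.GL_QUADS);             // record the geometry once
        gl.glVertex2f(-0.5f, -0.5f);
        gl.glVertex2f( 0.5f, -0.5f);
        gl.glVertex2f( 0.5f,  0.5f);
        gl.glVertex2f(-0.5f,  0.5f);
        gl.glEnd();
        gl.glEndList();
    }

    public void display(GLAutoDrawable drawable) {
        GL2 gl = drawable.getGL().getGL2();
        gl.glClear(GL.GL_COLOR_BUFFER_BIT);
        gl.glCallList(listId);                // replay the compiled geometry
    }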