So I'm writing a sort of particle simulator, like a "falling sand game" if you know what that is, and I've hit a roadblock. The way I'm doing this is that each particle object currently just has a position (int x, int y) and nothing else. I'm drawing/moving them with a thread and the onDraw event of an Android panel. Each time onDraw is called I loop through all the particles, move each one down one pixel unless it has hit the bottom, and then draw it. This is pretty smooth until I get to about 200 particles, then the FPS drops significantly. I know the way I'm doing it is computation heavy, there's no debate about that, but is there any way I could restructure this to draw a lot more particles with less lag?
Thanks in advance.
I take it you're using an individual-pixel drawing function for this? That would indeed be slow.
I see a couple of ways to improve it. First, put the pixels into an in-memory bitmap and then draw the whole bitmap in one call. Second, since particles always move down exactly one pixel, you can scroll part of the bitmap instead of replotting everything. If Android doesn't have a scroll operation, just draw the bitmap one pixel lower and start a new bitmap for the particles above the scrolled region. You'll still have to fix up the particles at the bottom, but there are fewer of those.
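Something along these lines for the first idea (untested sketch; it assumes the standard android.graphics classes, and Particle is just a stand-in for your own particle class):

```java
// Sketch only: draw all particles into one in-memory Bitmap, then blit it once per frame.
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Color;

public class ParticleRenderer {
    /** Minimal placeholder for the asker's own particle type. */
    public static class Particle { public int x, y; }

    private final Bitmap buffer;
    private final int width, height;

    public ParticleRenderer(int width, int height) {
        this.width = width;
        this.height = height;
        // One off-screen bitmap reused every frame instead of per-pixel draw calls on the view's canvas.
        this.buffer = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    }

    public void render(Canvas viewCanvas, Iterable<Particle> particles) {
        buffer.eraseColor(Color.BLACK);                    // clear the off-screen buffer
        for (Particle p : particles) {
            if (p.x >= 0 && p.x < width && p.y >= 0 && p.y < height) {
                buffer.setPixel(p.x, p.y, Color.YELLOW);   // plot each particle into the buffer
            }
        }
        viewCanvas.drawBitmap(buffer, 0, 0, null);         // single draw call to the screen
    }
}
```

If setPixel per particle still turns out to be too slow, the next step would be keeping an int[] pixel buffer yourself and pushing it in one go with Bitmap.setPixels().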
I've never done things like this before, but I have done some complex cellular automata. Sorry if this is too vague.
The basic idea here is to mark all particles that should "keep falling" or "not move" and exclude them from complex processing (with a special short/fast processor for the "falling" list - all you need to do is drop each one by a pixel).
The acceleration for non-moving particles - static particles (I'll call them S particles) - is that they don't move. Mark S for all non-moving regions (like a gravity-immune "wall" or "bowl" that a user might make). Mark particles above them S if they are stable; for example, a liquid particle that has S particles below it and on both sides of it will not move. For something like sand that forms piles, a particle with an S in each of the three spots under it is part of a pile; you'll get nice 45-degree piles this way, and I'm sure you can tweak the rule to make some materials form steeper or shallower piles. Do the S mapping bottom-up.
The acceleration for particles with no particle under them is falling - F particles. Particles with an F particle under them are also F particles. Mark these bottom-up as well.
Particles marked neither F nor S are complex: they may start falling, stop falling, or roll. Use the slow processor you already have to deal with them; there shouldn't be many.
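A rough sketch of what that marking pass could look like (the grid layout, with state[x][y] and y increasing downward, and the State names are my own assumptions, not anything from your simulator):

```java
// Bottom-up marking pass: classify every occupied cell as FALLING, STATIC or COMPLEX.
enum State { EMPTY, STATIC, FALLING, COMPLEX }

final class ParticleMarker {

    static State[][] mark(boolean[][] occupied, int width, int height) {
        State[][] state = new State[width][height];
        // Walk the bottom row first so the cells below are already classified.
        for (int y = height - 1; y >= 0; y--) {
            for (int x = 0; x < width; x++) {
                if (!occupied[x][y]) { state[x][y] = State.EMPTY; continue; }
                State below = (y == height - 1) ? State.STATIC : state[x][y + 1];
                if (below == State.EMPTY || below == State.FALLING) {
                    state[x][y] = State.FALLING;      // fast path: just drop one pixel
                } else if (below == State.STATIC && pileStable(state, x, y, width, height)) {
                    state[x][y] = State.STATIC;       // part of a stable pile, skip entirely
                } else {
                    state[x][y] = State.COMPLEX;      // hand to the existing slow processor
                }
            }
        }
        return state;
    }

    // Sand rule from above: S in all three spots under the particle (edges and the floor count as static).
    private static boolean pileStable(State[][] state, int x, int y, int width, int height) {
        if (y == height - 1) return true;
        boolean left  = (x == 0)         || state[x - 1][y + 1] == State.STATIC;
        boolean mid   =                     state[x][y + 1]     == State.STATIC;
        boolean right = (x == width - 1) || state[x + 1][y + 1] == State.STATIC;
        return left && mid && right;
    }
}
```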
In the end what you will have is many, many fast particles - those in a pile/lake and those raining down. The leftover particles are those on the edges of slopes, on the tops of lakes, or in other complex positions; there shouldn't be nearly as many of them as there are fast particles.
Visually mark each kind of particle with some colour, complex particles being bright red. Find cases where it is still slow, and see what other kinds of fast processors you should make. For example, you may find that making lots of piles of sand creates lots of red areas along slopes, so you may want to invest in speeding up "rolling zones" along the slopes of piles.
Hope it makes sense. Don't forget to come back and edit once you've figured something out!
You may want to look into OpenGL ES hardware acceleration and RenderScript. They don't give you a more efficient solution code-wise (see the other answers for that), but they do open up a lot more processing power for you to use. You could even run the entire simulation on the GPU (possibly; I don't know your implementation details).
Edit
Also, if you still decide to do the processing in Java, you should look at Method Profiling in DDMS. It will help you visualize where your performance bottlenecks are.
If you blur your image a bit, then you could move only half the particles at a time, maybe even a quarter, and still draw them all. That would cut the computation, and the user wouldn't see it; it would still feel like all the particles are moving.
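A tiny sketch of that staggered update (Particle and floorY are placeholders for the asker's own types and values):

```java
// Move only half of the particles each frame (even indices on even frames, odd on odd),
// but keep drawing all of them every frame.
import java.util.List;

final class StaggeredUpdater {
    static class Particle { int x, y; }

    static void update(List<Particle> particles, long frame, int floorY) {
        for (int i = (int) (frame % 2); i < particles.size(); i += 2) {
            Particle p = particles.get(i);
            if (p.y < floorY) p.y += 1;   // same one-pixel drop as before, just half as often per particle
        }
    }
}
```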
But whatever you choose, I think you should set a hard limit; not all users have powerful Android devices.
Regards,
stéphane
I think that if particles are close to each other, you can create objects that represent 3 or more particles.
When displaying many particles on screen, merged sets of grains may well go unnoticed.
Related
For my work I had to get into OpenGL 3D rendering recently, and I admit I'm quite new to this topic.
Without getting into too much detail, I have to deal with a HUGE array of data (vertices) from which I need to draw a shape. Basically, think of a plane with a very odd shape in 3D space. This shape is being added to on the fly. Think of a car moving on a plane and painting its trail behind it - not just a simple trail, but one with holes, discarded sections, etc. And it generates a new section several times per second, for hours.
So, obviously, what you end up with is A LOT of vertices, which do get optimized somewhat, but not enough. Millions of them.
And obviously I can't just feed all of that to the GPU of an embedded system as one vertex VBO.
So I've been reading about culling and clipping, and as far as I understand I only need to display the visible triangles of this array, and not render everything else.
Now, how do I do that properly?
The simplest brute-force solution would be to go through all triangles and, if they lie outside the frustum, just not draw them: generate a buffer of what I DO draw and pass it to the GPU.
One idea I had is to divide world space into squares, a kind of chunks, and basically split the "trail" mesh between them. Each square would hold the data for its part of the trail, and then I could use frustum culling, maybe, to decide which squares to render and which to skip.
But I'm not convinced it's a great solution. I also read that you should reduce the number of GL function calls as much as possible, and issuing a draw call for each of hundreds of squares doesn't seem great.
So I decided to ask for advice from people who understand the subject better than I do. Sadly, I don't get much learning time - I need to dive right into it.
If anyone could give me some directed tips it'd be appreciated.
You'd be better off using some form of spatial partitioning tree (e.g. octree, quadtree, etc.). That's similar to your second suggestion, but because it's hierarchical, searching the tree is O(log n) rather than O(n).
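For illustration, a minimal quadtree over the ground plane might look something like this (Chunk and Frustum here are placeholders for whatever your trail segments and frustum test already look like):

```java
// Fixed-depth quadtree over the ground plane; whole subtrees outside the frustum are skipped.
import java.util.ArrayList;
import java.util.List;

final class QuadTreeNode {
    /** Placeholders for the asker's own chunk and frustum types. */
    static class Chunk { float minX, minZ, maxX, maxZ; }
    interface Frustum { boolean intersects(float minX, float minZ, float maxX, float maxZ); }

    final float minX, minZ, maxX, maxZ;          // node bounds on the ground plane
    final List<Chunk> chunks = new ArrayList<>();
    final QuadTreeNode[] children;               // null at the leaves

    QuadTreeNode(float minX, float minZ, float maxX, float maxZ, int depth) {
        this.minX = minX; this.minZ = minZ; this.maxX = maxX; this.maxZ = maxZ;
        if (depth == 0) { children = null; return; }
        float midX = (minX + maxX) / 2, midZ = (minZ + maxZ) / 2;
        children = new QuadTreeNode[] {
            new QuadTreeNode(minX, minZ, midX, midZ, depth - 1),
            new QuadTreeNode(midX, minZ, maxX, midZ, depth - 1),
            new QuadTreeNode(minX, midZ, midX, maxZ, depth - 1),
            new QuadTreeNode(midX, midZ, maxX, maxZ, depth - 1),
        };
    }

    /** Push a chunk down to the deepest node that fully contains it. */
    void insert(Chunk c) {
        if (children != null) {
            for (QuadTreeNode child : children) {
                if (child.contains(c.minX, c.minZ, c.maxX, c.maxZ)) { child.insert(c); return; }
            }
        }
        chunks.add(c);   // straddles a boundary, or this is a leaf
    }

    /** Collect chunks in nodes that intersect the view frustum; culled subtrees are never visited. */
    void collectVisible(Frustum f, List<Chunk> out) {
        if (!f.intersects(minX, minZ, maxX, maxZ)) return;
        out.addAll(chunks);
        if (children != null) for (QuadTreeNode child : children) child.collectVisible(f, out);
    }

    boolean contains(float x0, float z0, float x1, float z1) {
        return x0 >= minX && z0 >= minZ && x1 <= maxX && z1 <= maxZ;
    }
}
```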
Similar to the game Factorio, I'm trying to create "3D" terrain, but of course in 2D. Factorio seems to do this very well, creating terrain that looks like this
where you can see the edges of the terrain and it's very clearly curved. In my own 2D game I've been trying to think of how to do the same thing, but all the ways I can think of seem slow or CPU intensive. Currently my terrain looks like this:
Simply 2D quads, textured and drawn on screen; each quad is 16x16 (except the water, which is technically a background, but that's not important now). How could I even begin to change my terrain to look more like Factorio or other "2.5D" games? Do they simply use different textures and check where the tile is relative to other tiles, or do they take a different approach?
Thanks in advance for your help!
I am a Factorio dev but I have not done this, so I can only tell you what I know generally.
There is a basic way to do it and then there are optional improvements.
Either way, you will need two things:
Set of textures for every situation you want to handle
Set of rules "local topology -> texture"
So you have your 2d tile map, and you move a window across it and whenever it matches a pattern, you apply an appropriate texture.
You probably wouldn't want to do that on the fly every tick, but rather calculate it all when you generate the map (or map segment - Factorio generates new areas when needed).
I will be using your picture and my imba ms paint skills to demonstrate.
This is an example of such a rule. Green is land, blue is water, grey is "I don't care".
In reality you will need a lot of such rules to cover all cases (100+ I believe).
In your case, this rule would apply at the two highlighted spots.
This is all you need to have a working generator.
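In code, the matching could look roughly like this (an illustrative sketch, not Factorio's actual implementation; the tile values, the Rule format and the out-of-bounds handling are all assumptions):

```java
// "Local topology -> texture": slide a 3x3 window over the tile map and apply the first matching rule.
import java.util.List;

final class ShorelineDecorator {
    static final int WATER = 0, LAND = 1, ANY = 2;   // ANY = the grey "don't care" cells

    static final class Rule {
        final int[] pattern;     // 3x3 neighbourhood, row-major
        final int texture;       // id of the edge texture to apply
        Rule(int[] pattern, int texture) { this.pattern = pattern; this.texture = texture; }

        boolean matches(int[][] map, int x, int y) {
            for (int dy = -1; dy <= 1; dy++) {
                for (int dx = -1; dx <= 1; dx++) {
                    int want = pattern[(dy + 1) * 3 + (dx + 1)];
                    if (want == ANY) continue;
                    int yy = y + dy, xx = x + dx;
                    int actual = (yy < 0 || xx < 0 || yy >= map.length || xx >= map[0].length)
                            ? WATER : map[yy][xx];   // treat out-of-bounds as water (assumption)
                    if (actual != want) return false;
                }
            }
            return true;
        }
    }

    /** Run once at map (segment) generation time, not every tick. */
    static int[][] decorate(int[][] map, List<Rule> rules) {
        int[][] textures = new int[map.length][map[0].length];
        for (int y = 0; y < map.length; y++)
            for (int x = 0; x < map[0].length; x++)
                for (Rule r : rules)
                    if (r.matches(map, x, y)) { textures[y][x] = r.texture; break; }
        return textures;
    }
}
```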
There is one decision that you need to make here. As you can see, the shoreline runs inside the tile, not between tiles. So you need to choose whether it will run through the last land tile or the last water tile. The picture can therefore be a result of these two maps (my template example would be from the left one):
Both choices are ok. In fact, Factorio switched from the "shoreline on land" on the left to the "shoreline on water" on the right quite recently. Just keep in mind that you will need to adjust the walking/pathfinding to account for this.
Now note that the two areas matched by the one pattern in the example look different. This can be a result of two possible improvements that make the result nicer.
The first is that for one case you can have several different textures and pick a random one. You will need to keep that choice in the game save so that it looks the same after loading.
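For example, something like this, where the chosen variant indices are generated once and then saved with the map (variantCount and worldSeed are purely illustrative):

```java
// Pick a texture variant per tile at generation time; the resulting array is part of the save file,
// so the terrain looks identical after loading instead of being re-rolled.
import java.util.Random;

final class TileVariants {
    static int[][] chooseVariants(int width, int height, int variantCount, long worldSeed) {
        Random rng = new Random(worldSeed);          // seeded so generation itself is reproducible
        int[][] variant = new int[height][width];
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++)
                variant[y][x] = rng.nextInt(variantCount);   // store this array with the map data
        return variant;
    }
}
```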
Another one is more advanced. While the basic algorithm can already give you pretty good results, there are things it can't do.
You can use larger templates and larger textures that span over several tiles. That way you can draw larger compact pieces of the terrain without being limited by the fact that all the tiles need to be connectable to all (valid) others.
The examples you provided are still 2D textures (technically). But since the textures themselves are drawn with a 'fancy 3D' look, they appear to be angled 3D/2D.
So your best bet would be to upgrade your textures. (and add shadow to entities for extra depth).
Edit:
The edges you asked about are probably laid out by checking whether a 'tile' is an edge, and if so, an edge texture is added on top of the background, while the actual tile itself is also a flat image (just like the water). Add some shadow afterwards and the 3D illusion is complete.
I hope this answers your question, otherwise feel free to ask clarification.
I'm making a game in Java with a few other people, but we are stuck on one part of it: the collision detection. The game is an RPG and I know how to do the collision detection for the characters using Rectangles, but what I don't know how to do is the collision detection for the maps. What I mean by that is, for example, that the character can't walk over trees or water and that sort of thing, and using rectangles doesn't seem like the best option here.
Well, to explain what the game maps are going to look like, here is an example: http://i980.photobucket.com/albums/ae287/gordsmash/7-8.jpg
Now I could use rectangles to get the bounds and stop the player from walking over the trees and water, but that would take a lot of them.
But is there another, easier way to prevent the player from walking over the trees and obstacles besides using Rectangles?
Here's a simple way, though it uses more memory and you do the work up front: just create a background collision mask that denotes, in binary form, the permissible areas for characters to walk on. You can store that in some sort of compressed bitmap form. The lookup then is very simple and very quick.
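A minimal sketch of that lookup (the mask here is one bit per cell; the names are placeholders):

```java
// Precomputed binary collision mask: one bit per cell, O(1) lookup.
import java.util.BitSet;

final class CollisionMask {
    private final BitSet walkable;
    private final int width, height;

    CollisionMask(boolean[][] walkableCells, int width, int height) {
        this.width = width;
        this.height = height;
        this.walkable = new BitSet(width * height);   // compact binary form
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++)
                if (walkableCells[x][y]) walkable.set(y * width + x);
    }

    /** Can the character stand at (x, y)? */
    boolean isWalkable(int x, int y) {
        return x >= 0 && y >= 0 && x < width && y < height && walkable.get(y * width + x);
    }
}
```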
Rectangle collision detection seems to make sense; however, you may also try sphere-sphere (circle-circle in 2D) collision detection, which can detect collisions very quickly. You don't even need a square root for the distance computation, since you can compare the squared distances to see if the circles overlap. This is a very fast method, and given the nature of your game it could work very well.
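For example, a sketch of that test:

```java
// Circle-vs-circle overlap test using squared distances, so no sqrt is needed.
final class CircleCollision {
    static boolean overlap(double x1, double y1, double r1,
                           double x2, double y2, double r2) {
        double dx = x2 - x1, dy = y2 - y1;
        double radiusSum = r1 + r2;
        return dx * dx + dy * dy <= radiusSum * radiusSum;   // compare squared distances
    }
}
```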
ALSO! Assuming you have numerous tiles to collide against, consider some method of spatial partitioning. Let me give you an easy example: subdivide your map into several rectangles (http://www.staff.ncl.ac.uk/qiuhua.liang/Research/Pic_research/mine_grid.jpg), and then, depending on which rectangular area your player currently resides in, check collisions only against the tiles located within that area.
You may take it a step further: if any given area contains more tiles than a threshold you set, subdivide that area further into smaller areas.
The idea behind such subdivision is called Quadtree, and there is a huge quantity of papers and tutorials on the subject, you'll catch on very quickly.
Please let me know if you have any questions.
There are many solutions to this type of problem, but for what you're doing I believe the best course of action would be to use a tile engine. This would have been commonly used in similar games in the past (think any RPG on the SNES) and it provides you with a quick and easy means of both level/map design and collision detection.
The basic concept of a tile engine is that objects are stored in a 2D array and when your player (or any other moving game entity) attempts to move into a new tile you perform a simple check to see if the object in that tile is passable or not (for instance, if it's grass, the player may move; if it's a treasure chest, the player cannot move). This will greatly simplify checking for collisions (as a naive check of a list of entities will have O(n^2) performance). This picture might give you an idea of what I'm talking about. The lines have been added to illustrate a point, but of course when you're playing the game you don't actively think of everything as being composed of individual 32x32 pixel tiles.
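For instance, a minimal version of that check might look like this (the tile ids and TILE_SIZE are illustrative, not from any particular engine):

```java
// Tile-engine passability check: the map is a 2D array of tile ids,
// and movement is allowed only if the destination tile is passable.
final class TileMap {
    static final int TILE_SIZE = 32;
    static final int GRASS = 0, WATER = 1, TREE = 2, CHEST = 3;

    private final int[][] tiles;                 // tiles[row][col]

    TileMap(int[][] tiles) { this.tiles = tiles; }

    private boolean isPassable(int tileId) {
        return tileId == GRASS;                  // extend with whatever else is walkable
    }

    /** Can an entity move to pixel position (x, y)? */
    boolean canMoveTo(int x, int y) {
        int col = x / TILE_SIZE, row = y / TILE_SIZE;
        if (row < 0 || col < 0 || row >= tiles.length || col >= tiles[0].length) return false;
        return isPassable(tiles[row][col]);
    }
}
```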
While I don't personally have any experience with tile engines in Java, it looks like Mappy supports Java, and I've heard good things about PulpCore. You're more than welcome to create your own engine, of course, but you have to decide if your effort is better spent reinventing the wheel (but, of course, it will be your wheel then, and that is rather satisfying) or spend your time making a better game.
I'm trying to develop a side-scrolling game for Android involving many, many textures, so I was thinking I could create a separate layer, all in a single unique color (very similar to a green-screen effect), make it collidable, and make it invisible to the player.
(foreground layer) visual Image
(2nd layer)collidable copy of foreground layer with main character
(3rd layer)Background image
I'm not sure if this is possible or how to implement it efficiently; the idea just came to me randomly one day.
Future regards, Thanks
I assume your game is entirely 2D, using either bit-blits or quads (two 3D triangles always screen-aligned) as sprites. Over the years there have been lots of schemes for doing collision detection using the actual image data, whether from the background or the sprite definition itself. If you have direct access to video RAM, reading one pixel position can quickly tell if you've collided or not, giving pixel-wise accuracy not possible with something like bounding boxes. However, there are issues greatly complicating this: figuring out what you've collided with, or if your speed lands you many pixels into a graphical object, or if it is thin and you pass through it, or how to determine an angle of deflection, etc.
Using 3D graphics hardware and quads, you could potentially change render states, rendering in monochrome to an off-screen texture, yielding the 2nd collidable layer you described. Yet that texture is then resident in graphics memory, which isn't freely/easily accessible like your system memory is. And getting that data back/forth over the bus is slow. It's also costly, requiring an entire additional render pass (worst case, halving your frame rate) plus you have all that extra graphics RAM used up... all just to do something like collision-detect. Much better schemes exist, especially using data structures.
It's better to use bounding boxes, or even a hierarchy of sub-bounding boxes. After that, you can determine if you've landed on the other side of, say, a sloped line, requiring only division/addition operations. Your game already manages all the sprites you're moving, so integrate some data structures to help your collision detection. For instance, I just suggested in another thread the use of linked lists to limit the objects you must collision-detect against one another.
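As a starting point, the bounding-box test itself is just a few comparisons (field names are placeholders):

```java
// Axis-aligned bounding box overlap test: four comparisons, no per-pixel work.
final class AABB {
    float x, y, width, height;

    AABB(float x, float y, float width, float height) {
        this.x = x; this.y = y; this.width = width; this.height = height;
    }

    /** True if the two boxes overlap at all. */
    boolean intersects(AABB other) {
        return x < other.x + other.width  && other.x < x + width
            && y < other.y + other.height && other.y < y + height;
    }
}
```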
Ideas like yours might not always work, but your continual creative thinking will lead to ones that do. Sometimes you just have to try coding them to find out!
For my internship on Brain-Computer Interfacing I need to generate some very fast flickering squares on a CRT monitor (flickering = alternating between two colors). The monitor's refresh rate is 85Hz and we would like this to be the bottleneck, which means that repainting all squares can take at most 1000/85 ≈ 11.8 ms.
My language of preference for GUI/graphics programming is Java, so I tried making a prototype with AWT, because it's synchronous (unlike Swing). I now seem to have two problems. The first is that time measurements show that repainting even 9 squares simply takes too long. My algorithm takes the desired frequency, calculates in advance the times at which the system should repaint, and then uses a loop (with no sleep/wait delay) that checks every time whether the next 'time' has been reached; if so, it loops through all the squares to repaint them. The way I implemented it now, the squares are Panels with background color A contained in another Panel with background color B, and the flickering happens by toggling the Panels' visibility. I figured this would be faster than one Panel that has to draw Rectangles all the time.
I don't have a decent profiling tool (can't get Eclipse TPTP or NetBeans profiler to work) so I can't be sure, but I have the feeling that the bottleneck is actually not in the repainting, but in the looping (with conditional checking etc.). Can you recommend anything about what I should do?
The second problem is that it seems like the squares are rendered top-to-bottom. It's like they unroll really fast, but still visibly. This is unacceptable. What I'm wondering though, is what causes this. Is it Java/AWT, or Windows, or just me writing a slow algorithm?
Can you recommend some things for me to try? I prefer to use Java, but I will use C (or something) if I must.
I would avoid any kind of high-level "components", like JPanels and the like. Try getting a Graphics2D representing the window's contents, and use its fillRect() method.
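Something like this, for example (a rough, untested AWT sketch: a free-running loop, double-buffered via BufferStrategy; the square layout and colors are placeholders, and real code would pace the loop against the monitor refresh instead of spinning):

```java
// Active rendering with plain AWT: one Frame, a BufferStrategy, and fillRect() calls,
// instead of flipping the visibility of nested Panels.
import java.awt.Color;
import java.awt.Frame;
import java.awt.Graphics2D;
import java.awt.image.BufferStrategy;

public class FlickerDemo {
    public static void main(String[] args) {
        Frame frame = new Frame("Flicker");
        frame.setSize(800, 600);
        frame.setIgnoreRepaint(true);        // we draw ourselves, no paint events
        frame.setVisible(true);
        frame.createBufferStrategy(2);       // double buffering
        BufferStrategy strategy = frame.getBufferStrategy();

        boolean phase = false;
        while (true) {
            Graphics2D g = (Graphics2D) strategy.getDrawGraphics();
            g.setColor(Color.BLACK);
            g.fillRect(0, 0, 800, 600);
            g.setColor(phase ? Color.WHITE : Color.RED);
            for (int i = 0; i < 9; i++) {
                g.fillRect(50 + (i % 3) * 150, 50 + (i / 3) * 150, 100, 100);
            }
            g.dispose();
            strategy.show();                 // present the whole frame at once
            phase = !phase;
            // real code would time this against the 85 Hz refresh instead of free-running
        }
    }
}
```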
Failing that, you could probably do this easy enough in C and OpenGL. rasonly.c is a standard template program that sets up OpenGL to work as a "rasterizer" only, i.e. 2D mode. Using this as a starting point, you should be able to get something running that draws your desired "squares" without too much trouble.
You don't describe your desired scene very well; it sounds from that equation as if you want to draw 100 squares, each with a different color. For maximum performance in OpenGL, you should draw all squares of the same color together, to minimize the "state changes" between drawing calls. This is probably a purely theoretical point though, as drawing 100 2D squares at 85 Hz really shouldn't tax OpenGL.
UPDATE: Oh, so it's been a bunch of years, and nowadays you probably need to take the above with a grain of salt and read some modern tutorial. Things have changed. Look up the Vulkan API.
(I remember a demonstration of this using the BBC micro and palette switching, though that would be 50fps rather than 85, as it was a British domestic TV)
I'd switch to JOGL and use display lists. They get very high FPS rates in Java.