Are my programming methods for java games sufficient [closed] - java

It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center.
Closed 10 years ago.
So I'm making a 2D Java game with the Slick2D and MarteEngine libraries. This is my biggest project since I made checkers for my Java class, and I am still pretty new to programming concepts and to picking the right way to get things done.
The basic structure of my game is that you are a player/hero in a zombie apocalypse and you can gather survivors to help you. There are many areas of my programming that I'm concerned about; I'm not sure whether my methods are a good choice for what I want. This game also does not currently have a grid/tile system.
I've looked at some open source Java games and they don't really answer my questions about my methods. So I'm going to make a list here of what I'm uncertain about, and I hope you guys can confirm/deny whether my methods are appropriate. Sorry if this list gets too long; I'm thinking of the questions as I type.
Targeting/Attacking - Survivors will automatically attack zombies once they get within the gun's target range. To do this, every survivor gets the distance (using the distance formula) to every zombie and attacks the closest one. I check this constantly, so if a faster zombie gets closer, the survivor will change targets. Zombies acquire a target and stick to it (for now). The zombies constantly check whether they are within attack range (around 50 pixels), again using the distance formula; if they are within range they stop and attack, otherwise they move towards the target.
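To make that concrete, here is roughly what the search looks like (just a sketch, not my exact code; Zombie, Survivor, and the getters are stand-ins for my actual classes, and comparing squared distances avoids the square root):
// Sketch of the nearest-target search described above (names are illustrative).
Zombie findClosestZombie(Survivor s, List<Zombie> zombies) {
    Zombie closest = null;
    float bestDistSq = Float.MAX_VALUE;
    for (Zombie z : zombies) {
        float dx = z.getX() - s.getX();
        float dy = z.getY() - s.getY();
        float distSq = dx * dx + dy * dy; // squared distance: no sqrt needed
        if (distSq < bestDistSq) {
            bestDistSq = distSq;
            closest = z;
        }
    }
    return closest; // null if there are no zombies left
}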
2D Camera - A camera in a 2D environment moves the world around instead of you. My current method is to have my zombies/survivors/any entities on the map stored in array lists. First the background is adjusted, then all the lists are cycled through and every entity's x and y values are modified. This seems to work all right, but you can really notice some things sliding around on the background, and I'm not sure how to avoid that.
User Interface - I really have no clue how to work with UI. What I've been doing so far is simply using a background and then generating button objects and manually lining them up. Then I check whether the mouse is over any button's area, and whether there is a click while the mouse is over it. I have three different backgrounds and sets of buttons that I switch between with booleans. I'm going to recode that area, though, grouping each background with its buttons into an object. Is this the correct way to do UI?
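For reference, my button check is essentially this (a simplified sketch; Button is my own class, and the Slick2D Input calls are from memory):
// Simplified sketch of a button and its hit test (Button is my own class).
class Button {
    float x, y, width, height;
    boolean contains(float mx, float my) {
        return mx >= x && mx <= x + width && my >= y && my <= y + height;
    }
}
// In the update loop: act only on a click while the mouse is over the button.
// if (btn.contains(input.getMouseX(), input.getMouseY())
//         && input.isMousePressed(Input.MOUSE_LEFT_BUTTON)) { /* clicked */ }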
Path Finding - I have no path finding system yet. Do I have to stick to a grid system? I'd really rather my entities move freely along the terrain and not in a weird square-to-square motion.
Selecting - You can select survivors, upgrade them, and do other random stuff. My current method for selecting is to constantly check where the mouse X and Y are: I get the distance from the mouse to every survivor and check whether it is within 30 pixels. Then I check whether there is a click; if so, I select that survivor and unselect all the others. I'm still trying to figure out how to unselect all survivors when I click on open space. Is there a better way to go about doing this?
Picking stuff up - Same way as said before. I check the distance from the player to every item that can be picked up. If the item is within 30 pixels of the player, it picks it up. It seems to work fine for the moment I suppose. Maybe there really is no other way to do this.
Animations - I understand how to animate with sprites, but I just want to make sure. If I have 7 different guns to be shot, do I need to manually make functions with precise timing for each sprite? Say I have a shotgun: it needs a recoil, pump forward, brief pause, pump back, and ready again. For a pistol I really just need the recoil. So I'd have to make a unique function for each of these animations?
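From what I can tell, Slick2D's Animation class can attach a duration to each frame, so maybe each gun could just be a data table instead of a unique function; something like this sketch (the frame images and timings are made up):
// Sketch: one Animation per gun, built from per-frame durations in milliseconds.
// recoil, pumpFwd, pause, pumpBack, ready are placeholder Image objects.
Animation shotgunFire = new Animation();
shotgunFire.addFrame(recoil, 80);    // recoil
shotgunFire.addFrame(pumpFwd, 120);  // pump forward
shotgunFire.addFrame(pause, 150);    // brief pause
shotgunFire.addFrame(pumpBack, 120); // pump back
shotgunFire.addFrame(ready, 60);     // ready again
shotgunFire.setLooping(false);       // play once per shot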
Sorry for typing this long list of questions. I try to gather as much information on this stuff as possible, and I haven't been able to find many examples. I greatly appreciate any answers, even just a yes or no. Thanks in advance!

2D Camera:
I'm not exactly sure what you are doing when you say you modify each entity's position, but the way I'd do it is to have a Camera object that has its own x, y, width, and height, plus methods to move the camera around, and then in your draw cycle:
for (Entity obj : entitiesOnMap)
{
    // Skip anything that is not within the camera's rectangle.
    if (obj.getX() + obj.getWidth() >= camera.getX()
            && obj.getX() <= camera.getX() + camera.getWidth()
            && obj.getY() + obj.getHeight() >= camera.getY()
            && obj.getY() <= camera.getY() + camera.getHeight())
    {
        // Draw the object at its position minus the camera's position,
        // i.e. at its position relative to the camera; this also avoids
        // wasting time drawing things that are outside the camera bounds.
        obj.draw(obj.getX() - camera.getX(), obj.getY() - camera.getY());
    }
}
Targeting/Attacking
You have the right idea, but checking every zombie against every survivor takes a lot of computing and (depending on how many zombies and survivors there are) can slow the game down a lot. It's the same deal with collision detection: checking every object against every other object to see if they collide is expensive. There are ways to avoid checking everything against everything; I suggest you read up on 'spatial partitioning'. I have not used Slick2D, but perhaps it has such a thing already implemented for you.
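To give a flavour of the simplest form, a uniform grid: bucket every zombie by cell once per update, and a survivor then only checks the zombies in its own cell and the eight around it. A rough sketch (the cell size and class names are placeholders, not anything from Slick2D):
// Rough sketch of a uniform spatial grid (names and cell size are placeholders).
static final int CELL = 64; // cell size in pixels, roughly the largest query radius
Map<Long, List<Zombie>> grid = new HashMap<>();

long key(int cx, int cy) { return ((long) cx << 32) | (cy & 0xffffffffL); }

void rebuild(List<Zombie> zombies) {
    grid.clear();
    for (Zombie z : zombies) {
        long k = key((int) (z.getX() / CELL), (int) (z.getY() / CELL));
        grid.computeIfAbsent(k, unused -> new ArrayList<>()).add(z);
    }
}

// A survivor at (x, y) only needs the 3x3 block of cells around it.
List<Zombie> nearby(float x, float y) {
    List<Zombie> out = new ArrayList<>();
    int cx = (int) (x / CELL), cy = (int) (y / CELL);
    for (int i = cx - 1; i <= cx + 1; i++)
        for (int j = cy - 1; j <= cy + 1; j++) {
            List<Zombie> cell = grid.get(key(i, j));
            if (cell != null) out.addAll(cell);
        }
    return out;
}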
Picking stuff up
Same deal as Targeting/Attacking: if there are too many items, or too many things that can pick items up, it will end up slowing the frame rate a lot. The same spatial partitioning idea applies here.
Selecting
I'm not sure this is the way you should be doing selecting, but I haven't had to use selection much myself, so I don't know the best approach; you should probably search around for ways to do this. Either way, your current method can be improved by only checking on a click: you don't need to run the distance checks every single frame, only when there has been a click.
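Concretely, something like this sketch (Slick2D's Input calls are from memory; 30 pixels is the radius you mentioned). Clearing the old selection first also solves your "click on open space" case for free:
// Sketch: only search for a survivor when a click actually happened.
if (input.isMousePressed(Input.MOUSE_LEFT_BUTTON)) {
    float mx = input.getMouseX(), my = input.getMouseY();
    for (Survivor s : survivors) s.setSelected(false); // empty-space click = deselect all
    for (Survivor s : survivors) {
        float dx = s.getX() - mx, dy = s.getY() - my;
        if (dx * dx + dy * dy <= 30 * 30) { // within 30 pixels
            s.setSelected(true);
            break; // select only the first match
        }
    }
}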
This is all I can help with currently, I hope it has been of some use to you at least.

Related

Android - Tracking Real-Time Touch Events to Draw

I need your help.
I am trying to implement a paint system with live-tracking. You may think: “Live-tracking? What do you mean?”
I am implementing a solution to track each touch movement on the display so I can decide whether or not to paint a specific spot. This decision must happen in real time, right when the user touches the display.
There will be some images where the user should paint the inner white space inside of the respective shape; figure 1 shows an example of this shape.
The user should paint the inside of the shape in a specific direction/order; I still don’t know how I can configure this order/direction in the image. An example of a specific order would be like the one shown in figure 2.
So I want the user to paint starting from zone 1 through to zone 6, do you understand? I don’t want the user to be able to paint zone 1 and then zone 3 without painting zone 2 first…
In the end, I would have the shape filled, as figure 3 shows. The arrows are there just to show the right direction of drawing.
It’s not just about direction: the specific spots on the image matter too. There will be different types of images, and I can’t rely on direction alone.
How can I determine the specific spots in an image with a specific order to be painted?
The only idea that came to my mind would be some kind of training mode, where I (the developer) would draw on the image in the order I want, and the system behind it would create virtual points in the code, with a small distance between each of them. Then I would have an array with all the “spots” saved in the order I drew them, do you understand?
I don’t know if this will work, or if it will be fast enough to traverse my array of “spots” while the user is drawing on the image. I want this to work in real time: the system should check whether a spot is valid to draw on right when the user touches the display…
Do you think it will work? Do you have some recommendations about this? What are the best structures and stuff like that? I am not asking for written code, I just need some guidelines.
I think I explained my problem well, if you have any question please ask me.
Thank you very much in advance!
If all your shapes are lines, you can treat the points you want your users to "touch" as squares: every time an ACTION_MOVE event happens, check whether the finger is touching the next point. You can make the squares whatever size suits you, like 48dp or 92dp, or the width/height of the shape you are drawing.
You said you don't want any code so I assume you know how to do this (if not reply and I will post code too).
Also, you didn't mention what happens if the user goes outside the shape. If you want the user to keep drawing until they lift their finger, then this way is good: when the ACTION_UP event happens, check whether the last point has been touched (assuming it can only be touched once all the others have been, as explained above).
Obviously, if you do it this way, you have to calculate the position of each point yourself; I doubt there is any way to automate this.
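A minimal sketch of the idea (this goes in your View's onTouchEvent; the checkpoint list, the 48-pixel half-size, and the field names are all assumptions):
// Sketch: ordered checkpoints, each treated as a square hit area.
private final List<PointF> checkpoints = new ArrayList<>(); // hand-placed, in drawing order
private int nextIndex = 0;              // the checkpoint the user still has to hit
private static final float HALF = 48f;  // half of the square's side, in pixels

@Override
public boolean onTouchEvent(MotionEvent event) {
    if (event.getActionMasked() == MotionEvent.ACTION_MOVE
            && nextIndex < checkpoints.size()) {
        PointF p = checkpoints.get(nextIndex);
        // Finger inside the square around the next point? Advance the order.
        if (Math.abs(event.getX() - p.x) <= HALF && Math.abs(event.getY() - p.y) <= HALF) {
            nextIndex++;
        }
    }
    return true; // keep receiving move events
}
On ACTION_UP you would then check nextIndex == checkpoints.size() to know whether the whole shape was traced in order.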
If you need any more help or you need some code please reply and I will post some! :D

2D graph calculator problems

OK, so I was asked to do a 2D graph calculator as a college project. I was able to do one using Java Swing components, rendering an array of x,y values in real time. However, there are several problems with this approach:
The array has a limit on the number of values it can hold.
It's not very good in terms of performance, because it has to loop through the whole array at 60 fps or so.
My way of fixing the first problem would be to use a dynamic array list instead of a regular array, but that still leaves the second problem. The idea of rendering one big image and using it as a 'map' of the graph sounds like a solution; however, it brings its own complications, like:
What happens when the field of view goes out of the image boundaries.
How to know what values of the graph it should render to the image.
Now, then again, I face another decision: since we are now talking about more advanced graphics tricks, I had the idea of using LWJGL as my graphics library instead of Swing, which made sense from the word go, so now I can use the 3D camera system to render an orthographic view of the 2D graph. About the first problem, I thought of making chunks of image, so that when we leave the FOV there is still an image to see. About the second problem I am stuck: because the graph works as a function of x, I don't know what my y value is until the equation has been calculated, so technically I could check whether the y value reaches the bottom of the image and whether it's lower than the top of the image (but this is still not good for performance).
Now, say I have resolved all of the above, there is still one last problem: because I draw the graph as very little lines (two points each), how do I know how small the lines have to be in order to get a graph that is accurate yet optimized, even when the function has some really wacky results?
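To show what I mean by "very little lines", here is roughly the drawing loop (a sketch in plain Java2D, not my LWJGL code; f, the scale and the width are placeholders):
// Sketch: draw y = f(x) as one short line segment per horizontal pixel step.
void plot(Graphics2D g, DoubleUnaryOperator f,
          double xMin, double xMax, int widthPx, int heightPx, double yScale) {
    double step = (xMax - xMin) / widthPx; // one sample per pixel column
    double prevX = xMin, prevY = f.applyAsDouble(xMin);
    for (double x = xMin + step; x <= xMax; x += step) {
        double y = f.applyAsDouble(x);
        g.drawLine((int) ((prevX - xMin) / step), heightPx / 2 - (int) (prevY * yScale),
                   (int) ((x - xMin) / step),     heightPx / 2 - (int) (y * yScale));
        prevX = x;
        prevY = y;
    }
}
Sampling once per pixel column is usually accurate enough, but for wildly oscillating functions I suppose you would have to subdivide further wherever consecutive y values jump.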
Thanks to everyone, and I hope you can help me :)

Java More Resourceful Collision Detection

I am making a game in Java which involves characters moving around a map with some solid collision objects (i.e. buildings) placed around it, read in from a text file. There will be multiple maps where these objects' locations change. My question is: which would be more resourceful and/or quicker - painting a rectangle in a certain color behind such structures to indicate collision, or reading the mouse coordinates and searching an array of these structures to see whether that point lies on a building, thus denying or altering the move? If painting a rectangle is best, would it be better to leave it behind the structure or to delete it after detecting the collision? Thanks for your time!
In my junior year in college I worked on a collision detection algorithm for Windows Phone. It is hardly perfect, but it was EXTREMELY efficient and can be adapted to a majority of games.
The way it worked was pretty simple. There were two types of objects: collidable objects (such as enemies or buildings), and objects that you wish to check for collisions with those collidable objects.
I had the idea when I was going through a data structures class and we spoke about linked lists. I thought: what if each link held a collidable object, so you could stick the game objects you had already created into it? Then, as the game objects moved around, you would have a lightweight way of checking their locations for collisions. Thus my system was born.
All it really is, is a class that fires off either every game cycle or whenever you choose to check for collisions. You give it your player's location, or a bullet's location, or whatever object you want to test, and it searches all of the collidable objects' locations and runs tests to see whether they overlap.
The real efficiency comes into play when you add in a second element (location AND quadrant).
For example, if I break the phone screen up into four parts and I know which quadrant my player or bullet is in, I can choose to scan only the list of collidable objects within that quadrant, cutting the search down to a fourth of its original size.
There are many different ways of detecting collisions. A simple example I used in my class showed how to detect two circles colliding (for sprites that were actually squares): by taking the circles' center coordinates and radii, you can calculate the distance between the centers (the hypotenuse) and determine whether they are touching.
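The test itself is tiny; here's a sketch (comparing squared values to avoid the square root):
// Two circles touch when the distance between centers <= the sum of the radii.
boolean circlesCollide(float x1, float y1, float r1,
                       float x2, float y2, float r2) {
    float dx = x2 - x1, dy = y2 - y1; // legs of the right triangle
    float rSum = r1 + r2;
    return dx * dx + dy * dy <= rSum * rSum; // squared hypotenuse vs squared radii sum
}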
Good luck! if you have any questions feel free to ask!
The last reply in this posting may help you out. It is a simple maze. The structure of the maze is controlled by a data file which simply contains 0s and 1s to indicate a path or a wall. You navigate through the maze using the arrow keys, and when an arrow key is pressed the code checks to make sure the next square is not a wall.

How to learn mouse movement?

I've been attempting to develop a means of synthesizing human-like mouse movement in an application of mine for the past few weeks. At the start I used simple techniques like polynomial and spline interpolation; however, even with a little noise, the result still failed to appear sufficiently human-like.
In an effort to remedy this issue, I've been researching ways of applying machine learning algorithms to real human mouse movement biometrics, in order to synthesize mouse movements by learning from recorded real human ones. Users would compile a profile of recorded movements that would train the program for synthesis purposes.
I've been searching for a few weeks and have read several articles on the application of inverse biometrics to generating mouse dynamics, such as Inverse Biometrics for Mouse Dynamics; they tend to focus, however, on generating realistic timing from randomly-generated dynamics, while I was hoping to generate a path specifically from A to B. Plus, I still actually need to come up with a path, not just a few dynamics measured from one.
Does anyone have a few pointers to help a noob?
Currently, testing is done by recording movements and having me and several other developers watch the playback. Ideally the movement will be able to trick both an automatic biometric classifier and a real, live, breathing Homo sapiens, too.
Fitts's law gives a very good estimation of the time needed to position the mouse pointer. In the derivation section there is a simple explanation; I think you could use this as one of the basic building blocks of your app. Start with big movements, put some inaccuracy both in the direction and the length of the movement, then do a smaller correction movement, and so on...
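As a tiny helper, the Shannon formulation of Fitts's law (a and b are constants you would have to fit per device/user; the values below are made up):
// Fitts's law, Shannon formulation: MT = a + b * log2(D / W + 1)
// D = distance to the target, W = target width along the motion axis.
double movementTimeMs(double distance, double targetWidth) {
    double a = 100, b = 150; // made-up constants; fit them from real recordings
    return a + b * Math.log(distance / targetWidth + 1) / Math.log(2);
}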
First, I guess you would record human mouse movements from A to B, because otherwise trying to synthesize a model for such movement does not seem possible to me.
Second, how about measuring the deviations from the "direct" path, maybe in relation to time? I actually suspect that movements look different for different angles, path lengths, etc., but maybe you can try a normalized model first, one that you just stretch (in space and time) and rotate as you need it.
Third, the learning. The easiest thing would be to just have a collection of real moves (in the form discussed above) and sample from that collection; evaluate how that looks. If you really want a probabilistic model, then you have to evaluate what kinds of models fit. Is it enough to blur the direct path with Gaussian noise whose parameters you learn from your training set? Or some (sin-)wavy deviation? Or separate models for "getting near the button" and "final corrections"? Fitts's law might be useful for evaluation.
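As a sketch of that normalization (assuming a recorded move is just a list of (x, y) points from A to B): translate so A sits at the origin, rotate so B lands on the positive x axis, and scale so the A-B distance is 1; replaying a sampled move is the inverse transform:
// Sketch: normalize a recorded path so it runs from (0,0) to (1,0).
List<double[]> normalize(List<double[]> path) {
    double ax = path.get(0)[0], ay = path.get(0)[1];                             // point A
    double bx = path.get(path.size() - 1)[0], by = path.get(path.size() - 1)[1]; // point B
    double len = Math.hypot(bx - ax, by - ay);
    double angle = Math.atan2(by - ay, bx - ax);
    double cos = Math.cos(-angle), sin = Math.sin(-angle);
    List<double[]> out = new ArrayList<>();
    for (double[] p : path) {
        double dx = p[0] - ax, dy = p[1] - ay;               // translate A to the origin
        out.add(new double[] { (dx * cos - dy * sin) / len,  // rotate, scale |AB| to 1
                               (dx * sin + dy * cos) / len });
    }
    return out;
}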
This question reminded me of a website I knew about years ago, so I visited it and found this in-depth discussion on the topic.
The timing is so similar as to make me think this question is related in some way. In fact, someone in the thread linked to the same article you did. If it's not related, well, there's a link to a lot of people discussing exactly what you're thinking about.
I don't think the problem is all that well defined. There is an important notion not mentioned so far, which is context. The mouse movement on my screen when Chrome has focus is massively different from the motion when Vim has focus.
The way a mouse moves varies based on the type of the device, the type of action, the UI elements involved, familiarity with the UI, the speed at which the user is attempting to complete their task, the skill of the user, initial failures of the user (e.g. mis-clicks), and the user's emotional state (as well as many other factors). Do you plan on creating several pathing strategies to correspond to different contexts? Also, how well do you know the algorithm you are trying to fool? I assume not extensively, or you would simply program directly against that algorithm.
If a human is looking at the pathing, they might be able to identify the state associated with a pathing strategy, and may be more inclined to be fooled if they identify it as a human state (e.g. the user is rushing, mis-clicks, quickly closes the resulting popup, tries again more slowly). UI comes into play with more than just size and position: I often quickly point to a toolbar, then slide across the options until I get to my target. Another example is that I typically pause on menu items while I am scanning for my target, or hover over text I am reading. Are you attempting to emulate human behavior, or just their mouse movements? (I think they are joined at the hip.)
Are you wanting to simulate human-like mouse movement because you are doing real-time online training for your game? If your training sequences are static, just record your mouse movements and play a mouse clicking sound effect whenever you click the mouse button. No mouse movement is going to feel "real enough" to you more than your own.
Personally, I feel experts in software move their mice too quickly in training videos. I prefer an approach taken by screencast video software I've seen, which always moves the mouse linearly from point A --> B. The trick was that every mouse move in the video took the same amount of time regardless of distance, say 3/4 of a second, followed by a mouse-click sound effect.
I believe they moved the mouse in this way because then the viewer could anticipate the landing area of the mouse by the direction and velocity the mouse moved at the start. In a training situation, I suppose that regular movements like this are gentler on the eye and perhaps easier to retain/recall.
Have you considered adding mouse tracking to your application so you essentially record how the user moves the mouse and then analyze the recordings?
I have not looked into this recently, but I believe a MouseListener (or rather a MouseMotionListener, for movement) in a Swing application can get the information you need.
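Something like this sketch would do for the recording side (panel is whatever Swing component you want to track; that name is an assumption):
// Sketch: record timestamped mouse positions over a Swing component.
final List<long[]> samples = new ArrayList<>(); // each entry: {timeMillis, x, y}
panel.addMouseMotionListener(new MouseMotionAdapter() {
    @Override
    public void mouseMoved(MouseEvent e) {
        samples.add(new long[] { System.currentTimeMillis(), e.getX(), e.getY() });
    }
});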

More Efficient Method of Drawing Thousands of Particles (Java/Android)

So I'm writing a sort of particle simulator, like a "falling sand game" if you know what that is, and I've kind of hit a roadblock. The way I'm doing this is: I have a particle object that, as of now, has just a position (int x, int y). The way I'm drawing/moving them is with a thread and the onDraw event of an Android panel. Each time onDraw is called, I loop through all the particles, move each one down one pixel unless it has hit the bottom, and then draw it. This is pretty smooth until I get to about 200 particles, then the FPS drops significantly. I know the way I'm doing it is computation-heavy, there's no debate about that, but is there any way to draw a lot more particles with less lag?
Thanks in advance.
I take it you're using an individual-pixel drawing function for this? That would indeed be slow.
I see a couple of ways to improve it. The first is to put the pixels into an in-memory bitmap, then draw the whole bitmap at once. Second, since particles always just go down one pixel, you can scroll part of the bitmap instead of replotting everything. If Android doesn't have a scroll, then just draw the bitmap one pixel lower and start a new bitmap for the particles above the scroll. You'll have to fix up the particles at the bottom, but there are fewer of those.
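A sketch of the first suggestion (Android; the buffer dimensions, particle list, and colours are placeholders):
// Sketch: plot every particle into one offscreen bitmap, then blit it once.
Bitmap buffer = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);

@Override
protected void onDraw(Canvas canvas) {
    buffer.eraseColor(Color.BLACK);              // clear the offscreen buffer
    for (Particle p : particles) {
        buffer.setPixel(p.x, p.y, Color.YELLOW); // one pixel per particle
    }
    canvas.drawBitmap(buffer, 0, 0, null);       // a single draw call to the screen
}
If setPixel itself turns out to be the bottleneck, writing into an int[] and pushing it with Bitmap.setPixels (or copyPixelsFromBuffer) should be faster still.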
I've never done things like this before, but I have done some complex cellular automata. Sorry if this is too vague.
The basic idea here is to mark all particles that should "keep falling" or "not move" and exclude them from complex processing (with a special short/fast processor for the "falling" list - all you need to do is drop each one by a pixel).
The acceleration for non-moving particles - static particles (I'll call them S particles) - is that they don't move. Mark S for all non-moving regions (like a gravity-immune "wall" or "bowl" that a user might make). Mark particles above them S if they are stable; for example, for liquid, if a particle has S particles under it and to both sides of itself, it will not move. For something like sand that forms piles, if it has an S in each of the three spots under it, it makes a pile; you'll get nice 45-degree piles this way, and I'm sure you can change the rule to make some materials form steeper or shallower piles. Do the S mapping bottom-up.
The acceleration for particles with no particle under them is falling - F particles. Particles with an F particle under them are also F particles. Mark these bottom-up as well.
Particles marked neither F nor S are complex: they may start falling, stop falling, or roll. Use the slow processor, which you already have, to deal with them; there shouldn't be many.
In the end what you will have is many, many fast particles - those in a pile/lake and those raining down. The leftover particles are those on the edges of slopes, on the tops of lakes, or in other complex positions; there shouldn't be nearly as many of them as there are fast particles.
Visually mark each kind of particle with some colour, complex particles being bright red. Find cases where it is still slow, and see what other kinds of fast processors you should make. For example, you may find that making lots of piles of sand creates lots of red areas along slopes; then you may want to invest in speeding up "rolling zones" along the slopes of piles.
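A rough sketch of the classification pass (the grid, the occupancy array, and the sand rule are all simplified stand-ins for whatever your game actually stores):
// Sketch: one bottom-up pass marking particles STATIC (S), FALLING (F) or COMPLEX.
enum Kind { STATIC, FALLING, COMPLEX }

void classify(boolean[][] occupied, Kind[][] kind, int w, int h) {
    for (int y = h - 1; y >= 0; y--) {        // bottom-up, as described above
        for (int x = 1; x < w - 1; x++) {
            if (!occupied[y][x]) continue;
            if (y == h - 1 || allThreeBelowStatic(kind, x, y)) {
                kind[y][x] = Kind.STATIC;     // floor, or resting on a stable pile
            } else if (!occupied[y + 1][x] || kind[y + 1][x] == Kind.FALLING) {
                kind[y][x] = Kind.FALLING;    // empty (or another faller) underneath
            } else {
                kind[y][x] = Kind.COMPLEX;    // slope edges etc.: use the slow path
            }
        }
    }
}

// The sand rule from above: stable if all three cells below are static.
boolean allThreeBelowStatic(Kind[][] kind, int x, int y) {
    return kind[y + 1][x - 1] == Kind.STATIC
        && kind[y + 1][x]     == Kind.STATIC
        && kind[y + 1][x + 1] == Kind.STATIC;
}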
Hope it makes sense. Don't forget to come back and edit once you've figured something out!
You may want to look into OpenGL ES hardware acceleration and RenderScript. This doesn't give you a more efficient solution code-wise (see the other answers for that), but it does open up a lot more processing power for you to use. You could even run the entire simulation on the GPU (possibly; I don't know your implementation details).
Edit
Also, if you still decide to do the processing in Java, you should look at Method Profiling in DDMS. This will help you visualize where your performance bottlenecks are.
If you blur your image a bit, then you could move only half the particles at a time, maybe only a fourth, and draw them all; that would cut computation and the user wouldn't notice, still getting the feeling that all the particles move.
But whatever you choose, I think you should set a hard limit; not all users have powerful Android devices.
I think if particles are close to each other, you could create objects that represent 3 or more particles each.
When displaying several particles on screen, merged sets of grains may go unnoticed.
