This is a novice question about using delta time to move a sprite across the screen.
Also, I'm looking for advice as to whether or not delta time is even needed in this case...
Here is how I'm calculating delta time:
currentTime = System.currentTimeMillis();
if (lastTime == 0) {
    lastTime = currentTime;
    deltaTime = 0;
} else {
    deltaTime = currentTime - lastTime;
    lastTime = currentTime;
}
And I wanted to use this to somehow make a more fluid movement for objects in 2d space.
This is my current method to move an object up:
public void move() {
    this.mPos.y -= mSpeed;
}
The thread constantly calls the move() method and it works well but the objects are slightly jittery across the screen. Any tips on how to incorporate delta time into this move() method? I'm trying to set a maximum speed of 10.
The position shouldn't change by the velocity directly -- the units aren't even the same! To get a position change out of a velocity, you need to multiply it by something with units of time (such as your deltaTime).
I imagine your deltaTime will be quite small, so this should smooth the movement as well, since the sprite will move a little each frame instead of a full step. You'll probably need a higher velocity (or a smaller position range) than what you're currently using, though. If multiplying your velocity by deltaTime doesn't get the results you want, try adding a scaling factor and adjusting it to see the difference it makes.
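As a concrete sketch of multiplying velocity by delta time (the names are adapted from the question; treating speed as pixels per second, and delta time as seconds, are assumptions):

```java
public class Sprite {
    float posY = 100f;
    // Speed in pixels per SECOND rather than pixels per frame (assumed unit).
    float speedY = 60f;

    // deltaSeconds is the frame time in seconds, e.g. ~0.016f at 60 FPS.
    public void move(float deltaSeconds) {
        posY -= speedY * deltaSeconds; // position change = velocity * time
    }
}
```

With this form, a frame that takes twice as long moves the sprite twice as far, so the on-screen speed stays constant regardless of frame rate.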
Related
I want to animate some text that's in an ArrayList that changes often, but I can't figure out how to animate it on the Y axis. It needs to move up and down, and so far I've only set it up as obj.renderY += 10 * deltaTime, but I need it to smoothly animate to the wanted position without overcomplicating it.
My current setup for animating the Y axis of the text is obj.renderY = (obj.renderY * (speed - 1) + offset) / speed. This is frame dependent, and I haven't figured out how to implement deltaTime into it properly; most of the times I've tried, it just breaks. This setup does animate the text to the Y position I want, whether it starts above or below that position.
The offset is just 2 + count * (fontHeight + 1). I don't know if that helps, but I'll include it anyway. Thanks for taking the time to read this post.
Example of trying to animate the texts movement:
float speed = 14;
obj.renderY = (obj.renderY * (speed - 1) + offset) / speed;
The class that is used to get deltaTime (Updates every frame):
public class Time {
    public static float deltaTime = 0;
    private static long lastTime = System.nanoTime();

    // Called once per frame; stores the seconds elapsed since the last call.
    public static void update() {
        long currentTime = System.nanoTime();
        deltaTime = (currentTime - lastTime) / 1000000000.0F;
        lastTime = currentTime;
    }
}
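One common way (a general technique, not something from this thread) to make that kind of per-frame smoothing frame-rate independent is to raise the per-frame factor to the power of the number of reference frames that actually elapsed. A minimal sketch, assuming the original constant was tuned for a 60 FPS reference rate:

```java
public class SmoothApproach {
    // Frame-rate-independent version of
    //   y = (y * (speed - 1) + target) / speed
    // The original moves y a fixed fraction (1/speed) toward the target each
    // frame, so it runs faster at higher frame rates. Raising the per-frame
    // factor to the power of (elapsed reference frames) removes that
    // dependence. referenceFps is the frame rate the constant was tuned
    // for -- an assumption, e.g. 60.
    public static float approach(float y, float target, float speed,
                                 float deltaTime, float referenceFps) {
        float perFrameFactor = 1f - 1f / speed;
        float factor = (float) Math.pow(perFrameFactor, deltaTime * referenceFps);
        return target + (y - target) * factor;
    }
}
```

At exactly one reference frame of elapsed time this reduces to the original formula, and at other frame times it produces the same curve sampled at different points.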
Use javax.swing.Timer. It was literally built to solve this exact problem.
Timer(int delay, ActionListener listener)
delay - The time, in milliseconds, between events. Note that this is both the delay before the first event and the delay between subsequent events, unless you change the former with setInitialDelay().
listener - Place your actual task here.
Then all you need to do is call the start() method, and that should be it. You can configure the timer in other ways, but those are the basics.
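A minimal sketch of that setup (the 16 ms delay targeting roughly 60 fires per second is an assumption, and countTicks is just a helper for this demo, not part of the Timer API):

```java
import java.util.concurrent.atomic.AtomicInteger;
import javax.swing.Timer;

public class TimerDemo {
    // Starts a Swing Timer firing every delayMs milliseconds, lets it run
    // for roughly runMs milliseconds, then returns how many times it fired.
    static int countTicks(int delayMs, int runMs) {
        AtomicInteger ticks = new AtomicInteger();
        // The listener runs on the Swing event dispatch thread.
        Timer timer = new Timer(delayMs, e -> ticks.incrementAndGet());
        timer.start();
        try {
            Thread.sleep(runMs);
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
        }
        timer.stop();
        return ticks.get();
    }

    public static void main(String[] args) {
        System.out.println("fires in 200 ms: " + countTicks(16, 200));
    }
}
```

Because the listener runs on the event dispatch thread, it can safely touch Swing components, which is exactly why this timer suits repainting tasks.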
EDIT - I imagine that many people who are asking this question are game developers trying to make some part of their game run at 60 frames per second.
You can use a timer, but you might be better off using something from a game engine instead.
In short, you probably want something more feature-rich than a simple timer. Timers are effective, but they can be hard to manage, and they may not fire on the exact frame you want them to, leading to issues. Timers are meant to be a simple tool for kicking off a task at a certain frequency. But it's rarely as simple as that when making a game.
I would discourage you from using this timer for anything other than a basic repetitive task that only needs to start and stop on certain events. If you want precision, stick to the tools your game engine provides. They specialize in getting actions to fire at exactly the times you want.
I'm making a platformer game. The jumping and movement are working, and now I want to program the collisions. I have a sort of idea as to how this will work:
I wasn't sure whether this question should be here or in the game dev site.
I have two Vector2's - velo and momentum (velo is the velocity). Both are (0, 0). The momentum is what's added to the velocity each frame.
Each frame, I first get the input. Momentum is increased and/or decreased based on the keys pressed (e.g. if (Gdx.input.isKeyPressed(Keys.A)) { momentum.x -= speed; })
I then multiply the momentum by 0.15. This is so that it slows down.
After this, I multiply the velocity by 0.8.
Then, I add the momentum to the velocity, as this is what actually moves the player.
I then add the velocity to the position of the player.
To apply the gravity, I add a gravity vector (0, -10) to the position of the player.
So I need to find a way to move the player without letting it overlap any part of the world. The world is made up of lots of Rectangle instances, and the player's body is also an instance of Rectangle. Would it be easier to rewrite the collisions using Box2D? What I've tried is checking whether the player would overlap any rectangles after being moved, and if it would, not moving it. But this doesn't seem to take everything into account: sometimes it works, but other times it stops before touching the world.
TL;DR: I want to make collisions of my player with a world which is stored as a grid of rectangles. How would I do this, as my player is also a Rectangle. Would it be easier to use Box2D?
This answer gives a good overview about the details of collision detection:
https://gamedev.stackexchange.com/a/26506
However, that might be a bit overwhelming if you just want to get a simple game going. Does your game loop run at a fixed interval, or is it dependent on the framerate? You could solve part of your issue by simply dividing the collision detection into more steps: instead of moving the full distance in one update, make 10 little updates that each move you only a tenth of the distance, with a collision check after each step. That makes it less likely that you stop too early or tunnel through thin objects.
That would of course be more taxing on performance, but it's a naive, easy-to-implement approach.
EDIT:
Just wrap the position update in a for loop. I don't know what your collision-checking and updating code looks like, but the general structure would be something like:
for (int i = 0; i < 10; i++) {
    newPosX += deltaX * 0.1f; // 1/10th of the usual update
    newPosY += deltaY * 0.1f; // 1/10th of the usual update
    if (checkCollision(newPosX, newPosY)) {
        break; // We hit something, no more mini-updates necessary.
    }
    // No collision at this sub-step, so commit the position.
    posX = newPosX;
    posY = newPosY;
}
I have a top-down shooter using the paint method, and I would like it to work on all displays. It works by getting the resolution and dividing the x and y by 40 to split everything up into squares.
My method of making the bullets move is by having a thread and a move method.
public void move() {
    x += dx;
    y += dy;
}
But if a person's screen is smaller, the bullet moves across it quicker. How can I get it to move slower on smaller screens and faster on bigger ones?
Thank you for any suggestions.
What do you actually mean by slower? Do you mean that the total time (measured in seconds) for the bullet to cross the screen differs between devices?
Assuming you did all the calculation correctly as you described, I think you forgot one factor: different devices have different computing speeds (and maybe different screen update speeds too), so a "tick" on one device might be longer or shorter than on another. So when you call move(), you should calculate how much time has passed since the last time move() was called, and then calculate dx and dy based on it. Hope this makes sense.
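A sketch of that idea, reusing the question's dx/dy fields but treating them as pixels per second (that unit is an assumption):

```java
public class Bullet {
    float x, y;
    float dx = 300f, dy = 150f; // speed in pixels per SECOND, not per tick

    private long lastTime = System.nanoTime();

    // Advance by the wall-clock time elapsed since the last call, so the
    // on-screen speed no longer depends on how fast the thread loops.
    public void move() {
        long now = System.nanoTime();
        move((now - lastTime) / 1_000_000_000f);
        lastTime = now;
    }

    // Separated out so a fixed time step can also be passed in directly.
    public void move(float deltaSeconds) {
        x += dx * deltaSeconds;
        y += dy * deltaSeconds;
    }
}
```

The thread can then loop as fast or slow as the machine allows; only the measured elapsed time determines how far the bullet travels.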
I think you're forgetting that not every computer runs at the same speed; if you've got this looping as fast as it can, it will run very differently on every computer. I suggest implementing delta scaling. This consists of timing the last frame: say you want 60 fps, then each frame should take about 16 milliseconds, so take the measured frame time and divide, such as:
long lastFrame = getFrameTime(); // time the previous frame took, in ms
float scaler = lastFrame / (1000f / targetFrameRate);
Then multiply all movements by this scale, such as:
public void move() {
    x += dx * scaler;
    y += dy * scaler;
}
I also see what you mean about different screen sizes being faster; this is because of pixel density. To handle it you will have to get the screen's physical dimensions along with its resolution. For example, if your screen is 20 mm wide at 1280x720, then 20/1280 tells you that each pixel is about 0.016 mm wide. You can then use the same scaling technique to convert a real-world speed into pixels.
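The arithmetic in that last paragraph can be sketched as follows (the 20 mm / 1280 px figures are the answer's own example; the helper names are mine):

```java
public class DensityScale {
    // Millimetres covered by one pixel on a given screen.
    static float mmPerPixel(float screenWidthMm, int screenWidthPx) {
        return screenWidthMm / screenWidthPx;
    }

    // Converts a desired physical speed (mm per second) into pixels per
    // second for this particular screen, so the bullet covers the same
    // physical distance per second on every device.
    static float pixelsPerSecond(float mmPerSecond, float screenWidthMm,
                                 int screenWidthPx) {
        return mmPerSecond / mmPerPixel(screenWidthMm, screenWidthPx);
    }
}
```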
Though it looks funny in my game, I have some basic RTS movement where you right-click and the unit moves there. But sometimes when I run the game the unit moves much slower than normal, even though nothing has changed in the code (other times it moves much faster). When I open more programs the speed returns to normal, but if I'm just running NetBeans I get super slow movement.
I have a feeling it has something to do with the delta value (not really sure) for updating, but as I'm new to Slick2D I don't know where to start with fixing the problem.
So my question is: can I limit the delta value so it can't update too slow or too fast, and is delta even my problem?
http://pastebin.com/fRndGE2p //Main class
http://pastebin.com/KJ8W3134 //PlayerStats
I see now. You did not turn on VSync.
Note: VSync limits your framerate to your monitor's refresh rate (usually 60 fps).
Oh, and the maximum/minimum update intervals are in milliseconds.
So the following example makes the game very laggy:
app.setVSync(true); // Turn on VSync
app.setMaximumLogicUpdateInterval(200); // At most 200 milliseconds can pass
app.setMinimumLogicUpdateInterval(100); // At least 100 milliseconds must pass
So I think you have to play around with the numbers to make it optimal.
But, this is not what you need :D
I saw this:
player_X = player_X + velocityX;
player_Y = player_Y + velocityY;
So this was your code to update the player's position.
You must use the delta number:
public void update(GameContainer gc, int delta)
As you can see, delta is a predefined integer: it holds the time, in milliseconds, that passed between two updates. So you should multiply everything by delta.
Check this:
player_X += velocityX * delta;
player_Y += velocityY * delta;
// The '+=' means player_X = player_X + something (if you did not know)
Note: If the player moves too slowly after the change, then simply multiply by a constant, like this:
player_X += velocityX * delta * 1.5f;
player_Y += velocityY * delta * 1.5f;
Example:
The Runnable Jar
MainComponent.java
GameState.java
This is a self-made, fast and simple example for you. Try it out and have a taste of the source code :D
Oh, and that distance-calculating method makes the player shaky.
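On the original question's other point (capping delta so one slow frame can't cause a huge jump): Slick2D's setMaximumLogicUpdateInterval does this for you, but the idea itself is just a clamp. A minimal sketch, with an arbitrarily chosen 100 ms cap:

```java
public class DeltaClamp {
    // Caps the per-frame delta so a single long stall (GC pause, window
    // drag, debugger break) doesn't teleport every moving object.
    static int clampDelta(int deltaMs, int maxMs) {
        return Math.min(deltaMs, maxMs);
    }
}
```

The trade-off is that during a stall the game world falls behind real time instead of jumping, which is usually the less jarring failure mode.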
I have problems with smooth scrolling in OpenGL (testing on an SGS2 and an ACE).
I created simple applications: only fixed-speed horizontal scrolling of images, or only one image (the player) moved by acceleration,
but the movement is not smooth :-(
I tried many variants of the code, but no satisfaction...
First I tried working with GLSurfaceView.RENDERMODE_CONTINUOUSLY and put all the code in onDrawFrame:
public void onDrawFrame(GL10 gl)
{
    updateGame(gl, 0.017f);
    drawGame(gl);
}
This is the simplest and absolutely smooth!! - but it's dependent on hardware speed (= useless)
public void onDrawFrame(GL10 gl)
{
    frameTime = SystemClock.elapsedRealtime();
    // Divide by 1000f, not 1000: with long operands, integer division
    // truncates sub-second elapsed times to 0.
    elapsedTime = (frameTime - lastTime) / 1000f;
    updateGame(gl, elapsedTime);
    drawGame(gl);
    lastTime = frameTime;
}
This is the best of all, but it's not as smooth as the previous one; sometimes it flicks.
Second, I tried GLSurfaceView.RENDERMODE_WHEN_DIRTY; in onDrawFrame I only draw the objects, and this code runs in a separate thread:
while (true)
{
    updateGame(renderer, 0.017f);
    mGLSurfaceView.requestRender();
    next_game_tick += SKIP_TICKS;
    sleep_time = next_game_tick - System.currentTimeMillis();
    if (sleep_time >= 0)
    {
        try
        {
            Thread.sleep(sleep_time);
        }
        catch (Exception e) {}
    }
    else
    {
        Log.d("running behind: ", String.valueOf(sleep_time));
    }
}
This is not smooth, and it's not a problem with "running behind".
My objective is smooth image movement like in the first code example above.
Possibly the error is somewhere other than where I'm looking. Can somebody please help me with this?
Is it better to use RENDERMODE_WHEN_DIRTY or RENDERMODE_CONTINUOUSLY?
Thank you.
I was fighting with the exact same issue for several days. The loop looks smooth without any Android time reference, but as soon as it includes any type of "time sync", external factors outside the Android developer's control introduce serious discontinuities into the final result.
Basically, these factors are:
eglSwapInterval is not implemented in Android, so it is difficult to know the moment when the hardware exposes the final draw on the screen (hardware screen sync).
Thread.sleep is not precise. The thread may sleep more or less than requested.
SystemClock.uptimeMillis(), System.nanoTime(), System.currentTimeMillis() and other timing-related measurements are precise but not necessarily accurate.
The issue is independent of the drawing technology (canvas drawing, OpenGL 1.0/1.1 and 2.0) and of the game-loop method (fixed time step, interpolation, variable time step).
Like you, I was trying Thread.sleep, crazy interpolations, timers, etc. It doesn't matter what you do; we have no control over these factors.
According to many Q&As on this site, the basic rules to produce smooth continuous animations are:
Reduce GC to the minimum by removing all dynamic memory requests.
Render frames as fast as the hardware can process them (40 to 60 fps is ok on most Android devices).
Use fixed time steps with interpolation, or variable time steps.
Optimize the update-physics and draw routines to execute in relatively constant time, without high variance peaks.
For sure, you did a lot of work before posting this question, optimizing your updateGame() and drawGame() (no appreciable GC and relatively constant execution time) in order to get a smooth animation in your main loop, as you mention: "simple and absolute smooth".
In your particular case, with a variable stepTime and no special requirement to stay in perfect sync with real-time events (like music), the solution is simple: smooth the stepTime variable.
The solution also works with other game-loop schemes (fixed time step with variable rendering), and the concept is easy to port (smooth the amount of displacement produced by updateGame and the real-time clock across several frames).
// Avoid GC in your threads: declare non-primitive variables outside onDraw.
float smoothedDeltaRealTime_ms = 17.5f; // initial value; optionally save the newly computed value (it will differ per device) in Preferences to optimize the first frames
float movAverageDeltaTime_ms = smoothedDeltaRealTime_ms; // moving average starts from the default value
long lastRealTimeMeasurement_ms; // temporary storage for the last time measurement

// Smoothing constants to play with
static final float movAveragePeriod = 40; // number of frames involved in the average calc (suggested values 5-100)
static final float smoothFactor = 0.1f; // adjusting ratio (suggested values 0.01-0.5)

// Sample with OpenGL; also works with canvas drawing: public void onDraw(Canvas c)
public void onDrawFrame(GL10 gl) {
    updateGame(gl, smoothedDeltaRealTime_ms); // divide by 1000 if your updateGame routine expects seconds instead of milliseconds
    drawGame(gl);

    // Moving-average calc
    long currTimePick_ms = SystemClock.uptimeMillis();
    float realTimeElapsed_ms;
    if (lastRealTimeMeasurement_ms > 0) {
        realTimeElapsed_ms = (currTimePick_ms - lastRealTimeMeasurement_ms);
    } else {
        realTimeElapsed_ms = smoothedDeltaRealTime_ms; // just the first time
    }
    movAverageDeltaTime_ms = (realTimeElapsed_ms + movAverageDeltaTime_ms * (movAveragePeriod - 1)) / movAveragePeriod;

    // Calc a better approximation for a smooth stepTime
    smoothedDeltaRealTime_ms = smoothedDeltaRealTime_ms + (movAverageDeltaTime_ms - smoothedDeltaRealTime_ms) * smoothFactor;

    lastRealTimeMeasurement_ms = currTimePick_ms;
}
// Optional: check whether smoothedDeltaRealTime_ms differs too much from the original value and save it in permanent Preferences for further use.
For a fixed-time-step scheme, an intermediate updateGame can be implemented to improve the results:
float totalVirtualRealTime_ms = 0;
float speedAdjustments_ms = 0; // to introduce a virtual time for the animation (reduce or increase animation speed)
float totalAnimationTime_ms = 0;
float fixedStepAnimation_ms = 20; // 20 ms for a 50 FPS descriptive animation
int currVirtualAnimationFrame = 0; // useful if the updateGameFixedStep routine asks for a frame number

private void updateGame() {
    totalVirtualRealTime_ms += smoothedDeltaRealTime_ms + speedAdjustments_ms;
    while (totalVirtualRealTime_ms > totalAnimationTime_ms) {
        totalAnimationTime_ms += fixedStepAnimation_ms;
        currVirtualAnimationFrame++;
        // original updateGame with fixed step
        updateGameFixedStep(currVirtualAnimationFrame);
    }
    float interpolationRatio = (totalAnimationTime_ms - totalVirtualRealTime_ms) / fixedStepAnimation_ms;
    Interpolation(interpolationRatio);
}
Tested with canvas and OpenGL ES 1.0 drawing on the following devices: SG SII (57 FPS), SG Note (57 FPS), SG Tab (60 FPS), an unbranded Android 2.3 device (43 FPS), and a slow emulator running on Windows XP (8 FPS). The test platform draws around 45 objects plus 1 huge background (texture from a 70 MP source image) moving along a path specified in real physics parameters (km/h and G's), without spikes or flicker across devices (well, 8 FPS on the emulator doesn't look good, but it flows at constant speed as expected).
Check the graphs for how Android reports the time. Sometimes Android reports a large delta time, and the very next loop one smaller than average, meaning an offset in the reading of the real-time value.
In more detail:
How to limit framerate when using Android's GLSurfaceView.RENDERMODE_CONTINUOUSLY?
System.currentTimeMillis vs System.nanoTime
Does the method System.currentTimeMillis() really return the current time?