Physics updates differently with different fps - Java

Before I added the detune variable, the physics update behaved strikingly differently on a computer and on a smartphone.
Adding it and multiplying a few variables by it smoothed out the difference, but did not remove it completely.
So I am asking for help, because I cannot figure out what to do myself.
public void update(float dt, Camera cam) {
    // detune rescales per-frame effects to a 60 fps baseline (1/60 s = 0.01666 s)
    float detune = dt / 0.01666f;
    if (!ignoreGravity)
        attraction.add(getGravity().cpy().scl(detune));
    // per-axis damping factors; these go negative when dt gets large
    float lx = 1 - .09f * detune;
    float ly = 1 - .015f * detune;
    attraction.scl(lx, ly);
    Vector v = getMotion().scl(lx, ly).cpy();
    lastPos = getPosition().cpy();
    getPosition().add(
            v.rotate(cam.getRotation()).add(
                    attraction.cpy().rotate(cam.getRotation())
            ).scl(dt));
}

Problem
What detune does is scale the effects of a single simulation cycle up relative to a 60 fps baseline: instead of having to simulate, say, 60 cycles, this only has to simulate 1. But the results will be more inaccurate, maybe only a bit, maybe a lot, depending on whether the rest of the simulation is stable/converging or not.
Also, the way this detune is combined with lx and ly LOOKS awfully bad (it MIGHT be OK with some outside knowledge that your question does not provide), because you should never combine linear scaling effects with addition. This will throw you into hell's pit faster than you can imagine. lx, for example, will be positive or negative depending on dt: with dt = 0.2 s you get detune = 12, so lx = 1 - 0.09 * 12 = -0.08, and the damping suddenly flips the sign of the motion. dt usually is the 'delta time' and lets you adjust the granularity vs speed of the simulation. So if someone adjusts dt and all of a sudden the simulation runs backwards, this will become a real sore issue.
Solution
You should NOT have detune in your code like this. Better to increase the dt value. Ensure that calculation cycles have the same temporal distance on PCs and smartphones, e.g. 30 times a second (30 fps, dt = 33 ms), and sleep for the rest of the time. If you cannot guarantee that, simulation results will always differ between them, bringing advantages or disadvantages to either.
I do not know if libgdx has a fixed simulation-graphics cycle, i.e. exactly one simulation step per graphics update. But in most engines (yes, especially games; that's why multithreading/multicore is usually useless there) the two are heavily coupled, which - in modern programming languages - is really bad, because then you'd have to restrict your simulation algorithm AND your graphics updates to the lowest common hardware across BOTH PCs AND phones, i.e. to the worst graphical AND computational minimum requirements at the same time.
If you can decouple simulation and graphics, you only have to consider the lowest computational capabilities. Concerning graphics, you could then run at the maximum frame rate of each system (or limit it to 90 fps; only very few people have a higher acuity), making the best of the graphics hardware and getting the smoothest rendering.
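To make this concrete, here is a minimal decoupled loop in plain Java. It is only a sketch: running, update() and render() stand in for your own loop flag, simulation step and drawing code.

// Fixed-timestep loop: the simulation always advances in constant 1/60 s
// steps, while rendering runs as often as the hardware allows.
final float STEP = 1 / 60f;
float accumulator = 0f;
long last = System.nanoTime();
while (running) {
    long now = System.nanoTime();
    accumulator += (now - last) / 1_000_000_000f; // seconds since last frame
    last = now;
    while (accumulator >= STEP) {
        update(STEP);        // physics sees the same dt on every device
        accumulator -= STEP;
    }
    render();                // graphics runs at whatever rate it can
}

This way the simulation is deterministic across devices, and only the rendering rate varies with the hardware.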

Related

How to trace an object's path in libgdx?

I have recently been working on a program which simulates gravitational interactions between planets. The physics have been implemented, and the program is fully functional.
However, the next feature to be added is the tracing of the planet's path, and I cannot think of a good way to do this.
The first way I thought of was to use particle effects, but that would require an inordinate number of particles, and I believe performance would suffer badly.
The other way is to draw a line between each pair of positions the planet has passed through. However, this makes the number of lines drawn rise to an unreasonably high figure, and performance suffers greatly, with frame times in some cases growing by as much as 1000%. Even when limiting the lines to a relatively low 300 per orbit, performance drops dramatically.
Thank you for your help.
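One way to keep the line approach bounded is to store only a fixed-size buffer of recent positions and redraw those segments each frame. A libgdx-flavoured sketch of that idea (the Trail class, the cap of 300 points and the use of ShapeRenderer are illustrative assumptions, not a tested solution):

import com.badlogic.gdx.graphics.glutils.ShapeRenderer;
import com.badlogic.gdx.math.Vector2;
import com.badlogic.gdx.utils.Array;

// Bounded trail: remember only the last MAX positions and redraw
// the line segments between them each frame.
public class Trail {
    private static final int MAX = 300; // arbitrary cap on stored points
    private final Array<Vector2> points = new Array<Vector2>();

    public void record(Vector2 pos) {
        if (points.size == MAX) points.removeIndex(0); // drop the oldest point
        points.add(pos.cpy());
    }

    public void draw(ShapeRenderer sr) {
        sr.begin(ShapeRenderer.ShapeType.Line);
        for (int i = 1; i < points.size; i++)
            sr.line(points.get(i - 1), points.get(i));
        sr.end();
    }
}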

Libgdx game logic in Render?

I'm learning Libgdx and have some questions about updating my game logic during the render method.
I would ideally like to keep my game logic and my rendering separate. The reason for this is that on a system with high FPS my game loop would "run" faster.
What I am looking for is to keep the experience constant and possibly limit my updates. Can anyone point me towards a tutorial on how to:
a) limit my render updates via delta time
b) limit my game logic updates via delta time.
Thank you :)
After re-reading your question, I think the trick you are missing (based on your comment that running on a higher-refresh system would make your game logic run faster) is that you scale your updates by the "delta" time that is passed to render(). Andrei Bârsan mentions this above, but I thought I'd elaborate a bit on how delta is used.
For instance, within my game's render(), I first call my entityUpdate(delta), which updates and moves all of the objects in my game scaled by the distance traveled in time "delta" (it doesn't render them, just moves their position variables). Then I call entityManageCollisions(delta), which resolves all of the collisions caused by the update, then I finally call entityDraw(batch, delta), which uses delta to get the right frames for sprite animations, and actually draws everything on the screen.
I use a variant of an Entity/Component/System model, so I handle all of my entities generically, and the method calls I mention above are essentially "Systems" that act on Entities with certain combinations of components on them.
So, all that to say: pass delta (the parameter passed into render()) into all of your logic, so you can scale things (move entities the appropriate distance) based on the amount of time that has elapsed since the last call. This requires that you express your entities' speeds in units per second, since the value you scale by is a fraction of a second. Once you do it a few times and experiment with the results, you'll be in good shape.
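In code, the core of that scaling looks something like this (a sketch; x, y, velocityX and velocityY are assumed fields of your entity):

// Velocity is in world units per second, so multiplying by delta
// (a fraction of a second) gives the distance to move this frame.
void entityUpdate(float delta) {
    x += velocityX * delta;
    y += velocityY * delta;
}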
Also note: this will drive you insane in interactive debug sessions, since the delta timer keeps accumulating time since the last render call, causing your entities to fly across the whole screen (and beyond -- test those boundaries for yourself!). They generally get sub-second updates, but after a pause they may get passed 30 seconds (or however long you spent stepping through the debugger). So at the very top of my render() I have a line that says delta = 0.016036086f; (that number was a sample delta from my dev workstation and seems to give decent results -- you can capture your video system's typical delta by writing it to the console during a test run and use that value instead, if you like). I comment that line out for builds to be deployed, but leave it uncommented when debugging, so each frame moves the game forward a consistent amount, regardless of how long I spend looking at things in the debugger.
Good luck!
One thing the answer so far doesn't cover is using parallel threads - I've had this question myself in the past and was advised against it - link. A good idea would be to run the world update first, and then skip the rendering if there isn't enough time left in the frame for it. Delta times should be used nevertheless, to keep everything going smoothly and prevent lag.
If you use this approach, it would be wise to prevent more than X consecutive frame skips, since in the (unlikely but possible, depending on how much update logic there is relative to rendering) case that the update logic takes more than the total time allocated to a frame, rendering would otherwise never happen - and that isn't something you'd want. By limiting the number of frames you skip, you ensure the updates run smoothly, but you also guarantee that the game doesn't freeze when there's too much logic to handle.
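A sketch of that idea (running, updateWorld() and render() are placeholders, and the budget and skip cap are illustrative):

// Update-first loop: render only if the update stayed within the frame
// budget, but never skip more than MAX_SKIPS renders in a row.
final long FRAME_BUDGET_MS = 33; // ~30 fps target
final int MAX_SKIPS = 5;
int skipped = 0;
while (running) {
    long start = System.currentTimeMillis();
    updateWorld();
    long spent = System.currentTimeMillis() - start;
    if (spent < FRAME_BUDGET_MS || skipped >= MAX_SKIPS) {
        render();     // there is time left, or we've skipped too often
        skipped = 0;
    } else {
        skipped++;    // over budget: drop this frame's render
    }
}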

How do I work with "delta" in Slick2D/LWJGL or game programming in general?

All I know is that delta somehow relates to adapting to different frame rates, but I'm not sure exactly what it stands for or how to use it in the math that calculates speeds and whatnot.
Where is delta declared? initialized?
How is it used? How are its values (min,max) set?
It's the number of milliseconds between frames. Rather than trying to build your game on a fixed number of milliseconds between frames, you want to alter your game to move/update/adjust each element/sprite/AI based on how much time has passed since the last time the update method has come around. This is the case with pretty much all game engines, and allows you to avoid needing to change your game logic based on the power of the hardware on which you're running.
Slick also has mechanisms for setting the minimum update times, so you have a way to guarantee that the delta won't be smaller than a certain amount. This allows your game to basically say, "Don't update more often than every 'x' milliseconds," because if you're running on powerful hardware, and have a very tight game loop, it's theoretically possible to get sub-millisecond deltas which starts to produce strange side effects, such as slow movement, or collision detection that doesn't seem to work the way you expect.
Setting a minimum update time also allows you to minimize recalculating unnecessarily, when only a very, very small amount of time has passed.
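As a rough sketch of both points (assuming a Slick2D BasicGame, with x and speed as your own fields):

// delta arrives in milliseconds, so convert to seconds before scaling
// a units-per-second speed.
@Override
public void update(GameContainer gc, int delta) {
    x += speed * (delta / 1000f);
}

// and, on the container, something like:
// container.setMinimumLogicUpdateInterval(20); // at most one update per 20 ms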
Have a read of the LWJGL timing tutorial found here. It's not strictly Slick, but it will explain what the delta value is and how to use it.

Some signal processing /FFT questions

I need some help confirming some basic DSP steps. I'm in the process of implementing some smartphone accelerometer sensor signal processing software, but I've not worked in DSP before.
My program collects accelerometer data in real time at 32 Hz. The output should be the principal frequencies of the signal.
My specific questions are:
From the real-time stream, I am collecting a 256-sample window with 50% overlap, as I've read in the literature. That is, I add in 128 samples at a time to fill up a 256-sample window. Is this a correct approach?
The first figure below shows one such 256-sample window. The second figure shows the sample window after I applied a Hann/Hamming window function. I've read that applying a window function is a typical approach, so I went ahead and did it. Should I be doing so?
The third figure shows the power spectrum (?) computed from the output of an FFT library. I am really just cobbling together bits and pieces I've read. Am I correct in understanding that the spectrum goes up to half the sampling rate (in this case 16 Hz, since my sampling rate is 32 Hz), and that the value of each spectrum point is spectrum[i] = sqrt(real[i]^2 + imaginary[i]^2)? Is this right?
Assuming what I did in question 3 is correct, is my understanding right that the third figure shows principal frequencies of about 3.25 Hz and 8.25 Hz? I know from collecting the data that I was running at about 3 Hz, so the spike at 3.25 Hz seems right, and there must be some noise or other factors causing the (erroneous) spike at 8.25 Hz. Are there any filters or other methods I can use to smooth away this and other spikes? If not, is there a way to tell "real" spikes from erroneous ones?
Making a decision on sample size and overlap is always a compromise between frequency accuracy and timeliness: the bigger the sample, the more FFT bins and hence absolute accuracy, but it takes longer. I'm guessing you want regular updates on the frequency you're detecting, and absolute accuracy is not too important: so a 256 sample FFT seems a pretty good choice. Having an overlap will give a higher resolution on the same data, but at the expense of processing: again, 50% seems fine.
Applying a window will stop frequency artifacts appearing due to the abrupt start and finish of the sample (you are effectively applying a square window if you do nothing). A Hamming window is fairly standard as it gives a good compromise between having sharp signals and low side-lobes: some windows will reject the side-lobes better (multiples of the detected frequency) but the detected signal will be spread over more bins, and others the opposite. On a small sample size with the amount of noise you have on your signal, I don't think it really matters much: you might as well stick with a Hamming window.
Exactly right: the spectrum value is the square root of the sum of the squares of the complex components (strictly speaking that is the magnitude spectrum, and the power spectrum is its square, but for peak detection either will do). Your assumption about the Nyquist frequency is true: your scale will go up to 16 Hz. I assume you are using a real FFT algorithm, which returns 128 complex values (an FFT gives 256 values back, but because you are giving it a real signal, half will be an exact mirror image of the other half), so each bin is 16/128 = 0.125 Hz wide. It is also common to show the power spectrum on a log scale, but that's irrelevant if you're just peak detecting.
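To make the arithmetic concrete, here is a small self-contained sketch using a naive DFT instead of a real FFT library (purely for illustration; with n = 256 samples at fs = 32 Hz, bin k sits at k * 32 / 256 Hz):

// Hann window plus naive DFT magnitude spectrum, for illustration only.
static double[] magnitudeSpectrum(double[] x) {
    int n = x.length;
    double[] w = new double[n];
    for (int i = 0; i < n; i++)
        w[i] = x[i] * 0.5 * (1 - Math.cos(2 * Math.PI * i / (n - 1))); // Hann window
    double[] mag = new double[n / 2]; // only bins up to Nyquist are meaningful
    for (int k = 0; k < n / 2; k++) {
        double re = 0, im = 0;
        for (int i = 0; i < n; i++) {
            double ang = 2 * Math.PI * k * i / n;
            re += w[i] * Math.cos(ang);
            im -= w[i] * Math.sin(ang);
        }
        mag[k] = Math.sqrt(re * re + im * im); // sqrt(real^2 + imaginary^2)
    }
    return mag;
}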
The 8Hz spike really is there: my guess is that a phone in a pocket of a moving person is more than a 1st order system, so you are going to have other frequency components, but should be able to detect the primary. You could filter it out, but that's pointless if you are taking an FFT: just ignore those bins if you are sure they are erroneous.
You seem to be getting on fine. The only suggestion I would make is to develop some longer-term heuristics on the results: look at successive outputs and reject short-lived detected signals. Look for a principal component and see if you can track it as it moves around.
To answer a few of your questions:
Yes, you should be applying a window function. The idea here is that when you start and stop sampling a real-world signal, what you're doing anyway is applying a sharp rectangular window. Hann and Hamming windows are much better at reducing frequencies you don't want, so this is a good approach.
Yes, the strongest frequencies are around 3 and 8 Hz. I don't think the 8 Hz spike is erroneous. With such a short data set you almost certainly can't control the exact frequencies your signal will have.
Some insight on question 4 (from staring at accelerometer signals of people running for months of my life):
Are you running this analysis on a single accelerometer axis, or are you combining the axes into a magnitude of acceleration? If you are interested in the overall magnitude of acceleration, you should combine x, y, z as mag_acc = sqrt((x - 0g_offset)^2 + (y - 0g_offset)^2 + (z - 0g_offset)^2); this signal should sit at 1 g when the device is still. If you only look at a single axis, you will get components both from the dominant running motion and from the changing orientation of the phone (because the contribution from gravity moves between axes). So if the phone's orientation shifts around while you run, depending on how you hold it, that can contribute a significant amount to the signal, whereas the magnitude hides orientation changes much better. A person running should show a really clean dominant frequency at the person's step rate.
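A minimal sketch of that combination (zeroGOffset, the raw reading that corresponds to 0 g on an axis, is device-specific):

// Orientation-independent acceleration magnitude; reads ~1 g at rest.
static double accelMagnitude(double x, double y, double z, double zeroGOffset) {
    double dx = x - zeroGOffset, dy = y - zeroGOffset, dz = z - zeroGOffset;
    return Math.sqrt(dx * dx + dy * dy + dz * dz);
}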

design a test program: increase/decrease cpu usage

I am trying to design and then write a Java program that can increase/decrease CPU usage. Here is my basic idea: write a multi-threaded program in which each thread does floating-point calculations, and increase/decrease CPU usage by adding/removing threads.
I am not sure what kind of floating-point operations are best for this test case, especially since I am going to test on a VMware virtual machine.
You can just sum up the reciprocals of the natural numbers. Since this sum doesn't converge, the compiler will not dare to optimize it away. Just make sure that the result is somehow used in the end.
1/1 + 1/2 + 1/3 + 1/4 + 1/5 ...
This will of course occupy the floating point unit, but not necessarily the central processing unit. So if this approach is good or not is the main question you should pose.
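A sketch of one such worker (start one per core you want to load, interrupt it to reduce the load again):

// Sums the divergent harmonic series; printing the sum on interrupt
// "uses" the result so the JIT cannot discard the loop entirely.
Runnable burner = () -> {
    double sum = 0;
    for (long i = 1; !Thread.currentThread().isInterrupted(); i++)
        sum += 1.0 / i;
    System.out.println(sum);
};
new Thread(burner).start();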
Just simple busy loops will increase the CPU usage -- I am not aware whether doing FP calculations changes this significantly or otherwise achieves a more consistent load factor, even though it does exercise the FPU and not just the ALU.
While creating a similar proof of concept in C# I used a fixed number of threads and changed the sleep/work durations of each thread. Bear in mind that this process isn't exact and is subject to CPU and process throttling as well as other factors of modern preemptive operating systems. Adding VMware to the mix may compound the observed behaviors further. In degenerate cases, harmonics can form between the code designed to adjust the load and the load reported by the system.
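A rough Java equivalent of that sleep/work idea (the numbers are illustrative, and the achieved load is approximate for exactly the reasons above):

// Each thread busy-waits for workMs out of every periodMs.
long periodMs = 100, workMs = 30; // ~30% duty cycle per thread
while (!Thread.currentThread().isInterrupted()) {
    long deadline = System.currentTimeMillis() + workMs;
    while (System.currentTimeMillis() < deadline) { } // spin
    try {
        Thread.sleep(periodMs - workMs);
    } catch (InterruptedException e) {
        break;
    }
}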
If lower-level constructs were used (generally requiring "kernel mode" access) then a more consistent throttle could be implemented -- partially because of the ability to avoid certain [thread] preemptions :-)
Another alternative that may be looked into, with the appropriate hardware, is setting the CPU clock and then running at 100%. The current Intel Core-i line of chips is very flexible this way (the CPU multiplier can be set discretely through the entire range), although access from Java may be problematic. This would be done in the host, of course, not in VMware.
Happy coding.
