I realize this is not strictly a code question, but I guess it belongs here anyway. If not, my apologies in advance.
Since there's no built-in way to change the vibration intensity on Android from code, I'm using a kind of PWM control (switching the vibrator on and off at high frequency gives me a rough control over vibration intensity). Right now I'm using a 20ms period (for example, with a 50% duty cycle the vibrator is on for 10ms and off for 10ms, and it kind of feels like half power).
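For illustration, here is a minimal sketch of how such an on/off pattern can be driven with the stock Vibrator API (the class and method names are just for the example; the pattern-based vibrate() call does the toggling, so no hand-rolled timer is needed, and the app needs the VIBRATE permission):

    import android.content.Context;
    import android.os.Vibrator;

    public class PwmVibrator {
        private final Vibrator vibrator;

        public PwmVibrator(Context context) {
            vibrator = (Vibrator) context.getSystemService(Context.VIBRATOR_SERVICE);
        }

        /** Approximate an intensity by toggling the motor: periodMs total, dutyPct% on. */
        public void vibrateAt(long periodMs, int dutyPct) {
            long on = periodMs * dutyPct / 100;  // e.g. 10 ms at 50%
            long off = periodMs - on;            // e.g. 10 ms at 50%
            // Pattern = {initial delay, on, off}; repeat from index 0 until cancel().
            vibrator.vibrate(new long[]{0, on, off}, 0);
        }

        public void stop() {
            vibrator.cancel();
        }
    }

(On API 26 and later this overload of vibrate() is deprecated in favour of VibrationEffect, but the principle is the same.)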
My question is: can some damage occur to the vibrator motor using this kind of control?
I'm no engineer, but we're in luck because there is one sitting next to me. Apparently there's a kind of life cycle to these things that relates partly to how often you change their state and partly to how long they run. So yes, doing what you're talking about will stress the device in one way, by asking it to go from 0% to 100% and back again very rapidly, but it relieves some stress by only having it on half the time. Overall, what you're talking about doing shouldn't do any harm that would shorten the Android's life span, as long as this pattern isn't intended to run for very long. I would definitely suggest getting in touch with someone who knows the mechanical side of the device more intimately, because every device is different and general knowledge doesn't always translate into spot-on specific knowledge.
I'm developing a Live Wallpaper using OpenGL ES 3.0. I've set it up according to the excellent tutorial at http://www.learnopengles.com/how-to-use-opengl-es-2-in-an-android-live-wallpaper/, adapting GLSurfaceView and using it inside the Live Wallpaper.
I have a decent knowledge of OpenGL/GLSL best practices, and I've set up a simple rendering pipeline where the draw loop is as tight as possible. No re-allocations, using one static VBO for non-changing data, a dynamic VBO for updates, using only one draw call, no branching in the shaders et cetera. I usually get very good performance, but at seemingly random but reoccurring times, the framerate drops.
Profiling with the on-screen bars gives me intervals where the yellow bar ("waiting for commands to complete") shoots up and pushes the frame time over the critical 60 fps line.
I've read every resource on profiling and interpreting those numbers that I could get my hands on, including the nice in-depth SO question here. However, the main takeaway from that question seems to be that the yellow bar indicates time spent waiting for blocking operations to complete, and for frame dependencies. I don't believe I have any of those; I just draw everything at every frame. No reading.
My question is broad - but I'd like to know what things can cause this type of framerate drop, and how to move forward in pinning down the issue.
Here are some details that may or may not have impact:
I'm rendering on demand, with onOffsetsChanged as the trigger (render when dirty); see the sketch after this list.
There is one single texture (created and bound only once), 1024x1024 RGBA. Replacing the one texture2D call with a plain vec4 seems to help remove some of the framerate drops. Reducing the texture size to 512x512 does nothing for performance.
The shaders are not complex, and as stated before, contain no branching.
There is not much data in the scene. There are only ~300 vertices and the one texture.
A systrace shows no suspicious methods; the GL-related methods such as buffer population and state calls are not at the top of the list.
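For concreteness, a minimal sketch of that render-on-demand wiring (the GLSurfaceView-inside-a-wallpaper plumbing and the renderer follow the learnopengles tutorial; GLWallpaperSurfaceView, MyRenderer and setOffset() are illustrative names, not my exact code):

    import android.opengl.GLSurfaceView;
    import android.service.wallpaper.WallpaperService;
    import android.view.SurfaceHolder;

    public class MyWallpaperService extends WallpaperService {
        @Override
        public Engine onCreateEngine() {
            return new GLEngine();
        }

        class GLEngine extends Engine {
            private GLWallpaperSurfaceView glSurfaceView; // the tutorial's GLSurfaceView subclass
            private MyRenderer renderer;                  // the OpenGL ES 3.0 renderer

            @Override
            public void onCreate(SurfaceHolder holder) {
                super.onCreate(holder);
                glSurfaceView = new GLWallpaperSurfaceView(MyWallpaperService.this);
                glSurfaceView.setEGLContextClientVersion(3);
                renderer = new MyRenderer();
                glSurfaceView.setRenderer(renderer);
                // Only draw when requestRender() is called, not continuously.
                glSurfaceView.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);
            }

            @Override
            public void onOffsetsChanged(float xOffset, float yOffset,
                                         float xOffsetStep, float yOffsetStep,
                                         int xPixelOffset, int yPixelOffset) {
                renderer.setOffset(xOffset);   // illustrative setter on the renderer
                glSurfaceView.requestRender(); // exactly one frame per offset change
            }
        }
    }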
Update:
As an experiment, I tried to render only every other frame, not requesting a render on every onOffsetsChanged (swipe left/right). This was horrible for the look and feel, but got rid of the yellow lag spikes almost completely. This seems to tell me that issuing 60 render requests per second is too much, but I can't figure out why.
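For reference, a sketch of a gentler variant of that experiment: cap the request rate instead of halving it (glSurfaceView is the view from the setup above; the 16 ms figure assumes a 60 Hz display):

    // Call this from onOffsetsChanged instead of calling requestRender() directly.
    private long lastRequestNanos = 0L;
    private static final long MIN_INTERVAL_NANOS = 16_000_000L; // roughly one 60 Hz frame

    private void requestRenderThrottled() {
        long now = System.nanoTime();
        if (now - lastRequestNanos >= MIN_INTERVAL_NANOS) {
            lastRequestNanos = now;
            glSurfaceView.requestRender();
        }
        // Offset changes arriving faster than the display can refresh are simply
        // dropped; the next change after ~16 ms triggers a frame again.
    }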
My question is broad - but I'd like to know what things can cause this type of framerate drop, and how to move forward in pinning down the issue.
(1) Accumulation of render state. Make sure you "glClear" the color/depth/stencil buffers before you start each render pass (although if you are rendering directly to the window surface this is unlikely to be the problem, as state is guaranteed to be cleared every frame unless you set EGL_BUFFER_PRESERVE).
(2) Buffer/texture ghosting. Rendering is deeply pipelined, but OpenGL ES tries to present a synchronous programming abstraction. If you try to write to a buffer (SubBuffer update, SubTexture update, MapBuffer, etc.) which is still "pending" use in a GPU operation still queued in the pipeline, then you either have to block and wait, or you force a copy of that resource to be created. This copy process can be "really expensive" for large resources (see the sketch after this list).
(3) Device DVFS (dynamic voltage and frequency scaling) can be quite sensitive on some devices, especially for content which happens to sit just around a level decision point between two frequencies. If the GPU or CPU frequency drops then you may well get a spike in the amount of time a frame takes to process. For debug purposes some devices provide a means to fix the frequency via sysfs, although there is no standard mechanism.
(4) Thermal limitations - most modern mobile devices can produce more heat than they can dissipate if everything is running at high frequency, so the maximum performance point cannot be sustained. If your content is particularly heavy then you may find that thermal management kicks in after a "while" (1-10 minutes depending on device, in my experience) and forcefully drops the frequency until thermal levels drop within safe margins. This shows up as somewhat random increases in frame processing time, and is normally unpredictable once a device hits the "warm" state.
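To make point (2) concrete, here is a minimal sketch of the usual "orphan then update" pattern for a dynamic vertex buffer (dynamicVboId, sizeInBytes and vertexData are placeholders for whatever handle, size and buffer you already use; whether orphaning actually avoids the stall is driver dependent):

    // Using android.opengl.GLES30. Orphan the old storage before uploading
    // this frame's data.
    GLES30.glBindBuffer(GLES30.GL_ARRAY_BUFFER, dynamicVboId);
    // Re-specifying the store with a null pointer lets the driver hand this
    // buffer fresh memory while any in-flight draw keeps reading the old
    // allocation, so the CPU neither blocks nor forces an expensive copy.
    GLES30.glBufferData(GLES30.GL_ARRAY_BUFFER, sizeInBytes, null, GLES30.GL_DYNAMIC_DRAW);
    // Upload this frame's vertices into the fresh store.
    GLES30.glBufferSubData(GLES30.GL_ARRAY_BUFFER, 0, sizeInBytes, vertexData);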
If it is possible to share an API sequence which reproduces the issue it would be easier to provide more targeted advice - the question is really rather general and OpenGL ES is a very wide API ;)
I'm currently prototyping a multimedia editing application in Java (pretty much like Sony Vegas or Adobe After Effects) geared towards a slightly different end.
Now, before reinventing the wheel, I'd like to ask if there's any library out there geared towards time simulation/manipulation.
What I mean specifically is that an ideal solution would be a library that can:
Schedule and generate events based on an elastic time factor. For example, real time would have a factor of 1.0, and slow motion would be any lower value; a higher value for time speedups.
Provide configurable granularity. In other words, a way to specify how frequently time-based events fire (30 frames per second, 60 fps, etc.).
An event execution mechanism, of course. A way to define that an event starts and terminates at a certain point in time, and to get notified accordingly.
Is there any Java framework out there that can do this?
Thank you for your time and help!
Well, it seems that no such thing exists for Java. However, I found out that this is a specific case of a more general problem.
http://gafferongames.com/game-physics/fix-your-timestep/
Using fixed time stepping, my application gets frame skipping for free (i.e. when doing live preview rendering) and can render with no time constraints when in off-line mode, pretty much what Vegas and other multimedia programs do.
Also, by using a delta factor between each frame, the whole simulation can be sped up or slowed down at will. So yeah, fixed time stepping pretty much nails it for me.
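For anyone who lands here later, a minimal sketch of the idea (class and interface names are purely illustrative, and a 60 Hz simulation rate is assumed):

    // Fixed-timestep loop with an elastic time factor, along the lines of the
    // "Fix Your Timestep!" article linked above.
    public class FixedStepClock {
        private static final double STEP_SECONDS = 1.0 / 60.0; // the configurable granularity
        private double timeScale = 1.0;    // 1.0 = real time, 0.5 = slow motion, 2.0 = double speed
        private double accumulator = 0.0;
        private long lastNanos = System.nanoTime();

        public void setTimeScale(double scale) { timeScale = scale; }

        /** Call once per rendered frame; fires zero or more fixed-size updates. */
        public void pump(Simulation sim) {
            long now = System.nanoTime();
            accumulator += ((now - lastNanos) / 1_000_000_000.0) * timeScale;
            lastNanos = now;
            while (accumulator >= STEP_SECONDS) {
                sim.update(STEP_SECONDS); // fixed delta keeps the simulation deterministic
                accumulator -= STEP_SECONDS;
            }
            // Slow frames simply run more updates next time (frame skip); an
            // off-line renderer can instead call update() as fast as it likes.
        }

        public interface Simulation {
            void update(double dtSeconds);
        }
    }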
I have written a game app with bitmaps moving around the screen. It employs a separate thread which writes directly to a canvas. On my Samsung Galaxy Y the animation seems smooth throughout the game; however, on a "Tabtech m7" tablet the smooth graphics appear to be interrupted by intermittent freezes of about half a second, spaced about three or four seconds apart. Is it possible that this is just a feature of the (cheap) tablet hardware, or is it more likely some aspect of my programming? And if it's me, how can I go about diagnosing the cause?
Have a look in your log to see if the garbage collector is running at approximately the times you get the freezes. If so, you could try to find out whether it's you or the system that is allocating memory in an inappropriate way.
In DDMS you can have a look at the Allocation Tracker; it could possibly tell you what's going on.
Yes, echoing erbsman. To avoid GC, make sure you're not allocating any new objects in your game loop. Also, GCs can be kicked off if you do a lot of string conversions (i.e., updating the score), like if you do Integer.toString(10) kind of stuff.
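For example, something along these lines (illustrative names; the point is just that nothing new is allocated per frame):

    // Reuse one StringBuilder for the score text instead of creating a new
    // String every frame with Integer.toString() or concatenation.
    private final StringBuilder scoreText = new StringBuilder(16);

    void updateScoreText(int score) {
        scoreText.setLength(0);                    // reset without allocating
        scoreText.append("Score: ").append(score); // appends the digits in place
        // Draw scoreText directly, e.g. with a drawText overload that takes a
        // CharSequence, so no String is created here either.
    }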
All I know is that delta relates somehow to adapting to different frame rates, but I'm not sure exactly what it stands for or how to use it in the math that calculates speeds and whatnot.
Where is delta declared? initialized?
How is it used? How are its values (min,max) set?
It's the number of milliseconds between frames. Rather than trying to build your game on a fixed number of milliseconds between frames, you want to alter your game to move/update/adjust each element/sprite/AI based on how much time has passed since the last time the update method has come around. This is the case with pretty much all game engines, and allows you to avoid needing to change your game logic based on the power of the hardware on which you're running.
Slick also has mechanisms for setting the minimum update times, so you have a way to guarantee that the delta won't be smaller than a certain amount. This allows your game to basically say, "Don't update more often than every 'x' milliseconds," because if you're running on powerful hardware, and have a very tight game loop, it's theoretically possible to get sub-millisecond deltas which starts to produce strange side effects, such as slow movement, or collision detection that doesn't seem to work the way you expect.
Setting a minimum update time also allows you to minimize recalculating unnecessarily, when only a very, very small amount of time has passed.
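A generic sketch of delta-based movement (the names are illustrative; in Slick the delta arrives as the int parameter of the update method):

    class Sprite {
        float x;
        float speedPxPerSec = 120f; // desired speed in pixels per second

        void update(int deltaMs) {
            // Scale movement by elapsed time so the speed is the same at 30 or 60 fps.
            x += speedPxPerSec * (deltaMs / 1000f);
        }
    }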
Have a read of the LWJGL timing tutorial found here. It's not strictly Slick, but it will explain what the delta value is and how to use it.
I am a dummy at profiling; please tell me what you do to profile your applications. Which is better, profiling the whole application or profiling an isolated part? If the choice is to isolate, how do you do that?
As far as possible, profile the entire application, running a real (typical) workload. Anything else and you risk getting results that lead you to focus your optimization efforts in the wrong place.
EDIT
Isn't it too hard to get a correct result when profiling the whole application? The test result then depends on user interaction (button clicking, etc.) rather than on an automated task. Tell me if I'm wrong.
Getting the "correct result" depends on how you interpret the profiling data. For instance, if you are profiling an interactive application, you should figure out which parts of the profile correspond to waiting for user interaction, and ignore them.
There are a number of problems with profiling your application in parts. For example:
By deciding beforehand which parts of the application to profile, you don't get a good picture of the relative contribution of the different parts, and you risk wasting effort on the wrong parts.
You pretty much have to use artificial workloads. Whenever you do that there is a risk that the workloads are not representative of "normal" workloads, and your profiling results are biased.
In many applications, the bottlenecks are due to the way that the parts of the application interact with each other, or with I/O or garbage collection. Profiling different parts of the application separately is likely to miss these interactions.
... what I am looking for is the technique
Roughly speaking, you start with the biggest "hotspots" identified by the profile data and drill down until you've figured out why so much time is being spent in a certain area. It really helps if your profiling tool can aggregate and present the data both top-down and bottom-up.
But, at the end of the day going from the profiling evidence (hotspots, stack snapshots, etc) to the root cause and the remedy is often down to the practical knowledge and intuition that comes from experience.
(Yea ... I'm waffling a bit. But my point is that there is no magic formula for doing this. Ultimately, you've got to use your brain ... like you have to when debugging a complex application.)
First I just time it with a watch to get an overall measurement.
Then I run it under a debugger and take stackshots. What these do is tell me which lines of code are responsible for large fractions of time. In particular, this means lines where functions are called without really needing to be, and I/O that I may not have been aware of.
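If you'd rather dump stacks from code than pause a debugger, a rough programmatic variant of the same idea looks like this (the sampling interval is arbitrary; it just has to be infrequent enough not to disturb the program):

    import java.util.Map;

    // Daemon thread that prints every thread's stack a few times per second.
    // Crude, but the calls that keep showing up are where the time is going.
    public class StackshotSampler {
        public static void start(final long intervalMs) {
            Thread sampler = new Thread(() -> {
                while (true) {
                    for (Map.Entry<Thread, StackTraceElement[]> e
                            : Thread.getAllStackTraces().entrySet()) {
                        System.out.println("--- " + e.getKey().getName());
                        for (StackTraceElement frame : e.getValue()) {
                            System.out.println("    at " + frame);
                        }
                    }
                    try {
                        Thread.sleep(intervalMs);
                    } catch (InterruptedException ie) {
                        return;
                    }
                }
            }, "stackshot-sampler");
            sampler.setDaemon(true);
            sampler.start();
        }
    }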
Since it shows me lines of code that take time and can be done a better way, I fix those.
Then I start over at the top and see how much time I actually saved. I repeat these steps until I can no longer find things that a) take significant % of time, and b) I can fix.
This has been called "poor man's profiling". The little secret is not only is it cheap, but it is very effective, because it avoids the common myths about profiling.
P.S. If it is an interactive application, do all this just to the part of it that is slow, like if you press a "Do Useful Stuff" button, and it finishes a few seconds later. There's no point to taking stackshots when it's waiting for YOU.
P.P.S. Suppose there is some activity that should be faster, but finishes too quickly to take stackshots, like if it takes a second but should take a fraction of a second. Then what you can do is (temporarily) wrap a for loop around it, of 10 or 100 iterations. That will make it take long enough to get samples. After you've speeded it up, remove the loop.
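As a sketch of that trick (doSuspectOperation() stands in for whatever operation you're investigating):

    // Temporary scaffolding only: repeat the too-fast operation so the
    // stackshots have something to land in, then remove the loop afterwards.
    for (int i = 0; i < 100; i++) {
        doSuspectOperation(); // placeholder for the operation being measured
    }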
Take a look at http://www.ej-technologies.com/products/jprofiler/overview.html