Java2D / Graphics2D performance

Maybe there's someone out there who has spent time on this. I'm working on a graph visualization lib in Java and I just did some performance tests.
When I add about 2000 vertices connected by 1000-3000 edges, it gets really, really slow. There are tools out there that do way better (Gephi, for example). How do they do it? Isn't Java2D hardware accelerated by default? Do I have to use some OpenGL library?
I'm drawing the graphs inside a JComponent which gets redrawn by a timer every few milliseconds (it doesn't really matter whether I give it 100 ms or 1 ms, it stays really slow).
Is my approach flawed or shouldn't I use Java2D for this?
Thank you for any help!

As Torious suggested, you probably want to use a VolatileImage if you are working in Java2D, to get the benefits of hardware acceleration.
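A minimal sketch of that approach, assuming the graph is painted inside a JComponent as described in the question (drawGraph() is just a placeholder for the existing vertex/edge rendering code):

    import java.awt.Graphics;
    import java.awt.Graphics2D;
    import java.awt.GraphicsConfiguration;
    import java.awt.image.VolatileImage;
    import javax.swing.JComponent;

    class GraphCanvas extends JComponent {
        private VolatileImage buffer;

        @Override
        protected void paintComponent(Graphics g) {
            GraphicsConfiguration gc = getGraphicsConfiguration();
            do {
                // (Re)create the accelerated surface if it is missing, resized or incompatible.
                if (buffer == null
                        || buffer.getWidth() != getWidth()
                        || buffer.getHeight() != getHeight()
                        || buffer.validate(gc) == VolatileImage.IMAGE_INCOMPATIBLE) {
                    buffer = gc.createCompatibleVolatileImage(getWidth(), getHeight());
                }
                Graphics2D g2 = buffer.createGraphics();
                try {
                    drawGraph(g2); // the existing Java2D drawing code goes here
                } finally {
                    g2.dispose();
                }
                g.drawImage(buffer, 0, 0, null);
            } while (buffer.contentsLost()); // repeat if the accelerated surface was lost
        }

        private void drawGraph(Graphics2D g2) {
            // ... draw the ~2000 vertices and their edges ...
        }
    }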
However, if you want the absolute best performance, you are probably better off going for an OpenGL-based solution.
LWJGL ( http://lwjgl.org/ ) is designed for games but lets you use pretty much all the relevant OpenGL functionality, so it is pretty good for visualisation as well. Might be worth giving it a try!

Related

Java2D faster Area alternative

I'm using Java2D in conjunction with Apache Batik to draw some fairly large SVG images.
So far it is working quite nicely, but I am frustrated with the performance of areas. In particular, I have three things I want to accomplish:
merging a bunch of colliding shapes into one large area
removing a bunch of shapes from one large area
checking for colliding shapes
Naively, points 1 and 2 can be accomplished with Area.add and Area.subtract.
This works, but can easily take up to twenty minutes in an average use case.
Point 3 can be accomplished by subtracting the areas from each other and checking the remaining area. Still slow, but it can be sped up to be usable with some prior spatial hashing or something similar.
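For reference, the naive version of points 1-3 looks roughly like this (the shape lists are hypothetical; the methods are the plain java.awt.geom.Area API):

    import java.awt.Shape;
    import java.awt.geom.Area;
    import java.util.List;

    final class NaiveAreaOps {

        // Point 1: merge a list of colliding shapes into one large Area.
        // Each add() call gets slower as the accumulated Area grows.
        static Area merge(List<Shape> shapes) {
            Area merged = new Area();
            for (Shape s : shapes) {
                merged.add(new Area(s));
            }
            return merged;
        }

        // Point 2: remove a list of shapes from one large Area.
        static Area subtractAll(Area base, List<Shape> holes) {
            for (Shape hole : holes) {
                base.subtract(new Area(hole));
            }
            return base;
        }

        // Point 3: collision test via intersection; an empty intersection means no overlap.
        static boolean collides(Shape a, Shape b) {
            Area intersection = new Area(a);
            intersection.intersect(new Area(b));
            return !intersection.isEmpty();
        }
    }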
Is there a better and faster way to merge/subtract Java2D areas?
If not, is there another library which can do this sort of thing faster?
Unfortunately, libraries like JOGL or LWJGL do not work in a resolution-independent space like SVG paths or Java2D Paths.
You can try this: AreaX
According to the author:
The AreaX class is intended to achieve exactly the same visual results as the Area class. However several possible optimizations have been carefully implemented to reach those results faster.

Interesting Live Wallpaper Behavior

I just started making some of my first live wallpapers in Android, and I noticed an interesting behavior regarding the PixelFormat. If I use the SurfaceHolder's default PixelFormat, my live wallpaper is a bit laggy. If I set the PixelFormat to RGB_565 it seems to fix this problem. This really should not be too surprising. What was odd was that profiling revealed it was taking just as long to do the rendering in both formats. Could anyone explain this behavior?
Thanks,
Xor
---Edit---
If it is of any help, I am rendering on a Canvas. All I do is call drawColor and draw 3 fairly simple, anti-aliased paths. Not really much to it.
PixelFormat shouldn't be a problem. You should even be able to set PixelFormat.RGBA_8888 with no performance hiccups. In some cases this format is useful to reduce color banding on gradients.
Using a Handler for animation may be good for simple cases, but you should consider using a separate thread for this task. Some time ago I prepared a simple live wallpaper template. You can download the whole project from GitHub and experiment a bit with it. I'm sure that you'll get much better performance.
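A rough sketch of what I mean, with the format set explicitly and a dedicated render thread instead of a Handler (the class and method names other than the framework ones are made up, and drawing is reduced to a drawColor call):

    import android.graphics.Canvas;
    import android.graphics.Color;
    import android.graphics.PixelFormat;
    import android.service.wallpaper.WallpaperService;
    import android.view.SurfaceHolder;

    public class SketchWallpaperService extends WallpaperService {

        @Override
        public Engine onCreateEngine() {
            return new SketchEngine();
        }

        private class SketchEngine extends Engine {
            private volatile boolean running;

            @Override
            public void onSurfaceCreated(SurfaceHolder holder) {
                super.onSurfaceCreated(holder);
                // RGBA_8888 avoids banding; whether RGB_565 is faster depends on the device.
                holder.setFormat(PixelFormat.RGBA_8888);
            }

            @Override
            public void onVisibilityChanged(boolean visible) {
                running = visible;
                if (visible) {
                    new Thread(this::renderLoop, "wallpaper-render").start();
                }
            }

            private void renderLoop() {
                while (running) {
                    SurfaceHolder holder = getSurfaceHolder();
                    Canvas canvas = holder.lockCanvas();
                    if (canvas != null) {
                        try {
                            canvas.drawColor(Color.BLACK);
                            // ... draw the three anti-aliased paths here ...
                        } finally {
                            holder.unlockCanvasAndPost(canvas);
                        }
                    }
                }
            }
        }
    }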

Switching from OpenGL ES 1.0 to 2.0

I have been developing an Android app with OpenGL ES 1.0 for quite some time, using a naive approach to rendering: basically making calls to glColor4f(...) and glDrawArrays(...) with FloatBuffers each frame. I am hitting a point where graphics is becoming a huge bottleneck as I add more UI elements and the number of draw calls increases.
So I'm now looking for the best way to group all of these calls into one (or two or three) draw calls. It looks like the cleanest, most efficient and canonical way to do this is to use VBO objects, available from OpenGL ES 2.0 on. However, this would require a HUGE refactoring on my part to switch my whole graphics backend from ES 1.0 to ES 2.0. I am not sure if this is a good decision, or if there are acceptable ways to group my drawing calls in 1.0 that would work fine for relatively simple 2D data (squares, rounded rectangle TRIANGLE_FANs, etc.), or if it really might be worth biting the bullet and making the switch. I might also mention that I have a HEAVY reliance on translation and scaling that is so convenient with the fixed pipeline of ES 1.0.
Looking around, I am surprised to find almost NO people in my position talking about the tradeoffs and complexity at hand for such a switch. Any thoughts?
I have a HEAVY reliance on translation and scaling
Note that you can't batch anything if you change the model-view matrix between draw calls (ES2 didn't change that).
VBOs are available from OpenGL ES 1.1, and they are probably available on the device you are targeting, even for ES 1.0 (ARB_vertex_buffer_object).
You can create a big VBO with world-space geometry (i.e., resolve scaling and translation on the CPU) and draw that. Even if you update this VBO each frame, in my experience it's fast enough. Sending thousands of small draw calls is almost always the slowest option.
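Something along these lines, as a rough ES 1.1 sketch (the field names and the flat 2D, triangle-only layout are assumptions; the GL calls are the standard GL11 client API):

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.nio.FloatBuffer;
    import javax.microedition.khronos.opengles.GL10;
    import javax.microedition.khronos.opengles.GL11;

    // Pre-transform all quads on the CPU into one big world-space vertex array,
    // upload it into a single VBO, and draw everything with one glDrawArrays call.
    final class BatchedQuads {
        private final float[] vertices;   // x,y pairs, already scaled/translated on the CPU
        private final FloatBuffer staging;
        private int vboId = -1;

        BatchedQuads(int maxVertices) {
            vertices = new float[maxVertices * 2];
            staging = ByteBuffer.allocateDirect(vertices.length * 4)
                    .order(ByteOrder.nativeOrder())
                    .asFloatBuffer();
        }

        void draw(GL11 gl, int vertexCount) {
            if (vboId < 0) {
                int[] ids = new int[1];
                gl.glGenBuffers(1, ids, 0);
                vboId = ids[0];
            }
            staging.clear();
            staging.put(vertices, 0, vertexCount * 2).flip();

            gl.glBindBuffer(GL11.GL_ARRAY_BUFFER, vboId);
            // Re-uploading each frame is usually still much cheaper than many small draw calls.
            gl.glBufferData(GL11.GL_ARRAY_BUFFER, vertexCount * 2 * 4, staging, GL11.GL_DYNAMIC_DRAW);

            gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
            gl.glVertexPointer(2, GL10.GL_FLOAT, 0, 0);   // offset 0 into the bound VBO
            gl.glDrawArrays(GL10.GL_TRIANGLES, 0, vertexCount);

            gl.glBindBuffer(GL11.GL_ARRAY_BUFFER, 0);
            gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
        }
    }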
Moving from a fixed pipeline to a full vertex/fragment shader pipeline is not easy at all. It requires a good amount of 3D knowledge, so be careful and write a prototype first (world-space or object-space lighting? how to transform normals? ...).
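For instance, even something as small as "how to transform normals" already needs its own uniform in ES 2.0; a minimal vertex shader sketch (plain GLSL ES, shown here as a Java string constant):

    // Normals must be transformed with the inverse-transpose of the model-view
    // matrix (uNormalMatrix), not with the combined MVP matrix.
    static final String VERTEX_SHADER =
            "uniform mat4 uMVPMatrix;\n"
            + "uniform mat3 uNormalMatrix;\n"
            + "attribute vec3 aPosition;\n"
            + "attribute vec3 aNormal;\n"
            + "varying vec3 vNormal;\n"
            + "void main() {\n"
            + "  vNormal = normalize(uNormalMatrix * aNormal);\n"
            + "  gl_Position = uMVPMatrix * vec4(aPosition, 1.0);\n"
            + "}\n";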
Vivien

jMonkey optimization similar to Java3D's

Edit: To get real-time drawing, I started using LWJGL (which jMonkeyEngine is built on) and JOCL in an "interoperability" setup between OpenGL and OpenCL; now I can calculate and draw 100k particles in real time. Maybe a Mantle version of the jMonkey engine can cure this draw-call overhead problem.
For several days, I have been learning the jMonkey engine (version 3.0) in Eclipse (64-bit Java) and trying to optimize a scene using the GeometryBatchFactory.optimize(rootNode); command.
Without optimization (with the capability of changing sphere positions):
Okay, only 1 fps, originating from both PCI Express bandwidth and JVM overhead.
With optimization (without the capability to change the positions of the spheres):
Now it is 29 fps, even with an increased triangle count.
Java3D had a setCapability() method which made a scene object readable/writable even in an optimized form. The jMonkey engine 3.0 must be capable of this, but I couldn't find any trace of it (I searched tutorials and examples without success).
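For reference, the Java3D pattern I have in mind looks roughly like this (from memory):

    import javax.media.j3d.Transform3D;
    import javax.media.j3d.TransformGroup;
    import javax.vecmath.Vector3f;

    final class CapabilityExample {
        static TransformGroup writableGroup() {
            // Declare up front that this node may still be written after the scene is compiled.
            TransformGroup tg = new TransformGroup();
            tg.setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE);
            return tg;
        }

        static void moveLater(TransformGroup tg) {
            // Even in the compiled/optimized scene, the transform can still be updated.
            Transform3D t = new Transform3D();
            t.setTranslation(new Vector3f(1f, 2f, 3f));
            tg.setTransform(t);
        }
    }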
Question: How can I set read/write position/rotation/scale capabilities on the optimized nodes of a scene in jMonkey 3.0? If you cannot answer the first question, can you tell me why the triangle count increases when I use the optimization command? Do I have to create a new method to access the graphics card and change the variables myself (JOGL, maybe)?
Scene information: 16k particles (spheres at 16x16 resolution) + 1 point light (and its 4096-resolution shadow).
I'm sure we can send several thousand floats through PCI Express in a millisecond with ease.
Additional info: I'm using Aparapi kernels to update particle positions, which takes 10 milliseconds (16k * 16k interactions to calculate forces). (This does not change anything in optimized mode :( )
Can Aparapi access that optimized data?
For the case of batchNode.batch(); optimization, it is 1 fps again, with a reduced object count:
The object count is now only several hundred, but the fps is still at 1!
Sending just the sphere positions to the GPU and letting it calculate the vertex positions could be better than calculating the vertices on the CPU and sending huge amounts of data to the GPU.
No one here to help? I already tried batchNode, but it did not help enough.
I don't want to change 3D APIs, because the jMonkey people already reinvented the wheel and I'm happy with the current situation. I'm just trying to squeeze out a little more performance (cancelling shadows gives 100% more speed, but quality is important too!).
This Java program will become an asteroid-impact scene simulator (there will be a choice of asteroid size, mass, speed and angle) using the marching-cubes algorithm with LOD (there will be millions of particles).
The marching-cubes algorithm would decrease the triangle count greatly. If you can't answer the question, any marching-cubes (or any O(n) convex hull) algorithm for Java will be accepted! Data: x, y, z arrays as source and a triangle-strip array as target (iso-surface mesh points).
Thanks.
Here are some samples from the stream (at a much lower resolution):
1) Collapsing of a cube-shaped rock group by gravitation:
2) The exclusion force starts to show itself:
3) Exclusion force + gravitation makes the group form a smoother shape:
4) The group forms a sphere (as expected):
5) Then, a big stellar body approaches:
6) About to touch:
7) The moment of impact:
With the help of the Barnes-Hut algorithm and a truncated potential, particle numbers will be 10x (maybe 100x) higher.
Rather than the marching-cubes algorithm, a ghost cloth which wraps the n-body can give a low-resolution hull (easier than BH, but it needs more computation).
The ghost cloth will be affected by the n-body (gravity + exclusion), but the n-body will not be affected by the cloth which wraps it. The n-body won't be rendered, but the cloth mesh will be rendered with a lower triangle count.
If MC or the above works, this will let the program render a wrapping cloth for ~200x more particles.
So sorry....
You can batch all Geometries in a scene (or a subnode) that remains static.
Batching means that all Geometries with the same Material are combined into one mesh. This optimization only has an effect if you use only a few (roughly up to 32) Materials in total. The pay-off is that batching takes extra time when the game is initialized.
The change in triangle count is therefore because they have all been assembled into one mesh. The only suggestion, if this is necessary, is to try getting the mesh and altering points on it, but at that point I don't think it makes sense.
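For completeness, here are the two batching routes in jME3 as I remember them (treat the exact calls as assumptions): GeometryBatchFactory.optimize() produces static merged meshes, while BatchNode.batch() merges geometries per Material but still lets you move the original children afterwards:

    import com.jme3.material.Material;
    import com.jme3.scene.BatchNode;
    import com.jme3.scene.Geometry;
    import com.jme3.scene.shape.Sphere;

    // Inside SimpleApplication.simpleInitApp(); assetManager and rootNode are the usual fields.
    BatchNode particles = new BatchNode("particles");
    Material mat = new Material(assetManager, "Common/MatDefs/Misc/Unshaded.j3md");

    for (int i = 0; i < 16000; i++) {
        Geometry g = new Geometry("sphere" + i, new Sphere(16, 16, 0.5f));
        g.setMaterial(mat);            // one shared Material, so everything batches into one mesh
        particles.attachChild(g);
    }
    particles.batch();                 // merges the 16k geometries
    rootNode.attachChild(particles);

    // Unlike GeometryBatchFactory.optimize(), a BatchNode's children can
    // (as far as I remember) still be moved after batching:
    particles.getChild("sphere0").setLocalTranslation(1f, 2f, 3f);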
Perhaps try a different optimization method.
Good luck! I haven't used jMonkey in a bit, but I'm glad to see others do, and to see its continued growth!
EDIT
BTW, a way to minimize the math might be to use half a sphere of cubes; an impact on the Earth likely wouldn't affect the other side (unless the sphere isn't the Earth but already a small sample of the Earth taken as a sphere)...
Perhaps try a 2D shape as the impact surface. Though I know this won't be your best choice, it might give you an idea of how much of an effect the number of shapes has. If it does, then an avenue might be to consider how to remove some of the particles; if it doesn't, you need not worry. I am almost sure it will.
Finally:
Perhaps don't render in real time? Take a minute to draw the frames to a buffer, then play them back; by the time you're playing you will have another 40 or so frames ready, and maybe approximately 30 seconds' worth is all you will need.
There is a pretty solid set of documentation within the JMonkeyEngine wiki which talks quite a bit about how to utilize the transformations you are referring to, which can be found here: Advanced Spatial Concepts.
In addition, there is quite a bit of information regarding the meshes and their rendering which you can view here: Polygon Meshes.

Fast graphing library

I've been using Incanter for my graphing needs, which was adequate but slow for my previous needs.
Now I need to embed a graph in a JPanel. Users will need to interact with the graph (e.g. clicking on certain points, which the program would need to receive and deal with) by dragging and clicking. Zooming in and out is a must as well.
I've heard about JFreeChart in other SO discussions, but I see that Incanter uses it as its graphing engine, and it seemed somewhat slow then. Is it actually fast, but perhaps Incanter is doing things that slow it down?
I'm graphing up to 2 million points (simple xy-plots, really), though I will generally be graphing fewer. Using Matlab, this is plotted in a few seconds, but Incanter can hang for minutes.
So is JFreeChart the way to go? Or something else, given my needs?
(Also, it needs to be free, as it is for research.)
Unfortunately, general purpose graphing solutions probably aren't going to scale well to 2 million points - that's big enough that you will need something specialized if you want interactive performance.
I can see a few sensible options:
Write your own custom "plotter" function that is optimized for drawing large numbers of points (see the sketch after this list). You'd have to test it, but I think you might just about get the performance you want by writing the points directly to a BufferedImage using setRGB in a tight loop. If that still isn't fast enough, you can write the points directly into a byte array and construct a MemoryImageSource.
Exclude points so that you are only drawing e.g. 10,000 points maximum. This may be perfectly acceptable as long as you only really care about the overall shape of the scatter plot rather than individual points.
Pre-render all the points into e.g. a large BufferedImage then allow users to zoom in and out / interact with this static image. You might be able to "hack" JFreeChart to do this.
If OpenGL is an option (will require native code + getting up a steep learning curve!), then drop all the points in a big vertex array and get the graphics card to do it all for you. Will handle 2 million points in real-time without any difficulty.
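As a starting point for option 1, a rough sketch of the setRGB approach (it assumes the data has already been mapped to pixel coordinates; a real plotter would apply the axis transform and handle overplotting):

    import java.awt.Color;
    import java.awt.Graphics2D;
    import java.awt.image.BufferedImage;

    final class ScatterRaster {
        static BufferedImage plot(int[] xs, int[] ys, int width, int height) {
            BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
            // White background.
            Graphics2D g = img.createGraphics();
            g.setColor(Color.WHITE);
            g.fillRect(0, 0, width, height);
            g.dispose();
            // One setRGB call per point; the bounds check just skips stray values.
            for (int i = 0; i < xs.length; i++) {
                int x = xs[i], y = ys[i];
                if (x >= 0 && x < width && y >= 0 && y < height) {
                    img.setRGB(x, y, 0x000000);
                }
            }
            return img;
        }
    }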
MathGL is a fast and free (GPL) plotting library. But I have never tested its Java interface (SWIG-based) since I'm not familiar with Java :( . So, if someone can help with testing, I'll be thankful.
