How to reduce time to render large surface data in OpenGL - java

I am currently working on a project that renders large oil wells and sub-surface data on an Android tablet using OpenGL ES 2.0.
The data comes in from a RESTful call made by the client (the tablet) to the server. I need to render two types of data: one is a set of vertices that I simply join together (well rendering), and the other is subsurface rendering, where each surface has a huge amount of triangle data associated with it.
I was able to reduce the size of the well data by approximating the next point when constructing the data to be sent to the client. But this cannot be done for the surface data, as each and every triangle is needed for the triangles to join up and form the surface.
I would appreciate any suggestions for an approach to either reduce the data sent from the server or to reduce the time taken to render such huge data effectively.

The way you can handle such a complex mesh really depends on the scope of your project. Unfortunately there is not much we can say based on the provided input, and the task itself is not an easy one.
Usually, when the mesh is very complex, a typical approach to make the rendering process fast is to adopt a dynamic Level Of Detail (LOD) scheme.
The idea is to render "distant" meshes at a very low LOD (and therefore with a much lower number of vertices to render) and to replace the mesh with a higher-resolution version every time the camera approaches the mesh's details.
This technique is widely used in computer games, for instance when terrain needs to be rendered. When the player is in a particular sector of the map, the mesh of that sector is at a high level of detail while the others are at low detail. As the player moves, different sectors switch to "high resolution" (allow me the term).
It is not easy to implement, but it works in many, many situations.
This Gamasutra article has plenty of information on how the technique works:
http://www.gamasutra.com/view/feature/131596/realtime_dynamic_level_of_detail_.php?print=1
The idea, in your case, would be to take the mesh provided by the web service and treat it as the high-detail version of the mesh. Then (particularly if the mesh is composed of different objects), apply a triangle mesh simplification algorithm to create low-detail meshes of the same objects. One way you could proceed is described well here:
http://herakles.zcu.cz/~skala/PUBL/PUBL_2002/2002_Mesh-Simplification-ICCS2002.pdf
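To make the selection side of this concrete, here is a minimal sketch of distance-based LOD switching in Java. Everything in it (the Mesh interface, the single switch distance, the mesh-center test) is an illustrative assumption, not part of OpenGL ES or of the simplification paper above.

```java
// Sketch: pick the high- or low-detail mesh based on camera distance.
public class LodMesh {
    public interface Mesh { void draw(); }   // placeholder for your own mesh type

    private final Mesh highDetail;   // mesh as delivered by the web service
    private final Mesh lowDetail;    // result of an offline simplification pass
    private final float switchDistance;
    private final float[] center;    // precomputed center of the mesh

    public LodMesh(Mesh highDetail, Mesh lowDetail, float[] center, float switchDistance) {
        this.highDetail = highDetail;
        this.lowDetail = lowDetail;
        this.center = center;
        this.switchDistance = switchDistance;
    }

    /** Draw the high-detail mesh only when the camera is close enough. */
    public void draw(float[] cameraPos) {
        float dx = cameraPos[0] - center[0];
        float dy = cameraPos[1] - center[1];
        float dz = cameraPos[2] - center[2];
        float distSq = dx * dx + dy * dy + dz * dz;
        Mesh selected = distSq < switchDistance * switchDistance ? highDetail : lowDetail;
        selected.draw();
    }
}
```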
I hope to have helped in some way.
Cheers
Maurizio

Related

How to update points of TriangleMesh more efficiently?

So I am working on creating a 3D modeling toolkit in JavaFX. In this toolkit, people will be able to load model files and try out animations on those models.
Currently I split each model into groups of TriangleMesh objects, where each mesh in a group uses the same material. Then I have an AnimationTimer that sequences the frames of the loaded animation and, for each frame, updates all changed points in each TriangleMesh. However, performing frequent updates on the point lists is reducing the performance of the program considerably.
I am wondering whether this can be optimized.
I had some ideas. For one, I thought it might help to have a single TriangleMesh for each model (though this is problematic because I cannot set the material of individual faces), but that still leaves me with the overhead of updating the observable points list so frequently.
Having lots of separate TriangleMeshes is indeed very inefficient. Why do you think you need that? If it is just different textures that you want to apply to different parts of your model then you could create a texture atlas. This would allow you to use one TriangleMesh per model and that should make things more efficient. How large the influence of your coordinate modifications is remains to be tested.
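To illustrate the suggestion (a sketch only; the atlas layout and the updateFrame method are assumptions, not a prescribed API): one TriangleMesh can carry the whole model, with each face's texture coordinates pointing into a different region of a single atlas image, and each animation frame applied as one bulk setAll call on the points array rather than many element-wise updates.

```java
import javafx.scene.shape.TriangleMesh;

// Sketch: one TriangleMesh per model, different "materials" expressed as
// different regions of a single texture atlas image.
public class AtlasModel {
    private final TriangleMesh mesh = new TriangleMesh();

    public AtlasModel(float[] points, float[] atlasTexCoords, int[] faces) {
        mesh.getPoints().setAll(points);            // x,y,z triples
        mesh.getTexCoords().setAll(atlasTexCoords); // u,v pairs into the atlas image
        mesh.getFaces().setAll(faces);              // point/texCoord index pairs per vertex
    }

    /** Replace all vertex positions for the next animation frame in one bulk call. */
    public void updateFrame(float[] newPoints) {
        // One setAll on the ObservableFloatArray fires far fewer change
        // notifications than many individual set(index, value) calls.
        mesh.getPoints().setAll(newPoints);
    }

    public TriangleMesh getMesh() {
        return mesh;
    }
}
```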

Where to start with voxel engine?

I have been working on a voxel game for some time now, but all I have really accomplished is the main menu and an item system. Now it's time to make the voxel engine. I have been searching for a while to find tutorials or an ebook that would teach me this, but the best I could find was someone's tutorial series in C++, while I am making mine in Java. I have dabbled in C++ and C# in the past, but it was too difficult to translate, i.e. it relied on a class that Java doesn't have. What I do know is that there are different methods for building voxel engines, that they all begin with rendering a single cube, and that Perlin and simplex noise can be used to randomize terrain generation.
If anyone could point me in the right direction, it would be most appreciated.
I will be checking back at least once an hour in case anyone feels this thread is dead.
I'm not entirely sure what you are asking: whether you want to know how to generate simplex noise, how to implement it in a voxel engine, or how to start making a voxel engine in the first place.
If you are asking how to start making a voxel engine, I would recommend practising with quads first (a 2D version) and focusing on understanding the theory. Once you are happy with your understanding, focus on the voxel class (one cube); it is very important to learn as much as you can from it. Then add more cubes and optimize rendering as much as you can, so that hidden faces are not rendered and vertices are even shared; voxel engines can be the most wasteful renderers if not optimized!
EDIT:
Optimization can be done through many methods. The first and most important is hidden face removal: this involves removing the faces of voxels that are touching, which means you need to check whether a voxel exists on any given side of a voxel before rendering that face (e.g. before rendering the left face, check that there isn't a block to the left; see the sketch below). Next is the rendering method: do not render each face or each cube individually; group them so they can be rendered faster. This can be done with display lists or the more technical VBOs, which ensure the data is already on the GPU or can be given to the GPU faster. For example, Minecraft groups voxels into huge 16x16x128 chunks and uses display lists. If you really want to reduce every single vertex in memory you can also consider strip drawing methods (in OpenGL); these require you to define certain vertices at a certain time during rendering, but allow you to reuse a vertex for multiple faces.
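A minimal sketch of that neighbour check, assuming voxels live in a plain 3D boolean array; the Face enum and MeshBuilder interface are placeholders for whatever your engine uses:

```java
// Sketch: only emit faces that border empty space (hypothetical data layout).
public class Chunk {
    public static final int SIZE = 16;
    private final boolean[][][] solid = new boolean[SIZE][SIZE][SIZE];

    /** True if the given cell is outside the chunk or empty. */
    private boolean isAir(int x, int y, int z) {
        if (x < 0 || y < 0 || z < 0 || x >= SIZE || y >= SIZE || z >= SIZE) {
            return true;
        }
        return !solid[x][y][z];
    }

    /** Walk every voxel and emit only the faces that are actually visible. */
    public void buildMesh(MeshBuilder builder) {
        for (int x = 0; x < SIZE; x++) {
            for (int y = 0; y < SIZE; y++) {
                for (int z = 0; z < SIZE; z++) {
                    if (!solid[x][y][z]) continue;
                    if (isAir(x - 1, y, z)) builder.addFace(x, y, z, Face.LEFT);
                    if (isAir(x + 1, y, z)) builder.addFace(x, y, z, Face.RIGHT);
                    if (isAir(x, y - 1, z)) builder.addFace(x, y, z, Face.BOTTOM);
                    if (isAir(x, y + 1, z)) builder.addFace(x, y, z, Face.TOP);
                    if (isAir(x, y, z - 1)) builder.addFace(x, y, z, Face.BACK);
                    if (isAir(x, y, z + 1)) builder.addFace(x, y, z, Face.FRONT);
                }
            }
        }
    }

    /** Hypothetical helpers, just to make the sketch self-contained. */
    public enum Face { LEFT, RIGHT, BOTTOM, TOP, BACK, FRONT }
    public interface MeshBuilder { void addFace(int x, int y, int z, Face face); }
}
```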
Next would be understanding simplex noise. I can relate to there not being much material online about noise generation algorithms; unfortunately I cannot link the material I used, as that was years ago. You can implement your noise algorithm in the 2D version first to prove it works in a simpler environment and then copy it to the voxel version. Typical usage is to treat the values as heights in the terrain (e.g. white = 255 = 255 blocks high), as in the sketch below.
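And a sketch of that heightmap usage; the noise2d function below is just a cheap deterministic stand-in for a real Perlin/simplex implementation:

```java
// Sketch: turn 2D noise values in [0, 1] into columns of solid voxels.
public class HeightmapFill {
    /** Stand-in for a real Perlin/simplex implementation (returns 0..1). */
    static double noise2d(int x, int z) {
        int h = x * 374761393 + z * 668265263;   // cheap hash, NOT real noise
        h = (h ^ (h >>> 13)) * 1274126177;
        return ((h ^ (h >>> 16)) & 0xffff) / 65535.0;
    }

    /** Fill a cubic boolean voxel array using the noise value as a height. */
    public static void fill(boolean[][][] solid) {
        int size = solid.length;
        for (int x = 0; x < size; x++) {
            for (int z = 0; z < size; z++) {
                int height = (int) (noise2d(x, z) * size); // "white = high" mapping
                for (int y = 0; y < height; y++) {
                    solid[x][y][z] = true;
                }
            }
        }
    }
}
```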
I would recommend using Unity. The engine is already made and you can add menus and titles with just a few lines of code. All of the game creation is either in C# or Javascript which shouldn't be any huge change from C++. Good luck!

Switching from OpenGL ES 1.0 to 2.0

I have been developing an Android app using OpenGL ES 1.0 for quite some time, with a naive approach to rendering: basically making a call to glColor4f(...) and glDrawArrays(...) with FloatBuffers each frame. I am hitting a point where graphics is becoming a huge bottleneck as I add more UI elements and the number of draw calls increases.
So I'm now looking for the best way to group all of these calls into one (or two or three) draw calls. It looks like the cleanest, most efficient and canonical way to do this is to use VBO objects, available from OpenGL ES 2.0 on. However, this would require a HUGE refactoring on my part to switch my whole graphics backend from ES 1.0 to ES 2.0. I am not sure if this is a good decision, or if there are acceptable ways to group my drawing calls in 1.0 that would work fine for relatively simple 2D data (squares, rounded rectangle TRIANGLE_FANs, etc.), or if it really might be worth biting the bullet and making the switch. I might also mention that I have a HEAVY reliance on translation and scaling that is so convenient with the fixed pipeline of ES 1.0.
Looking around, I am surprised to find almost NO people in my position talking about the tradeoffs and complexity at hand for such a switch. Any thoughts?
I have a HEAVY reliance on translation and scaling
Note that you can't batch anything if you change the model-view matrix between draw calls (ES 2.0 didn't change that).
VBOs are available from OpenGL ES 1.1, and they are probably available on the device you are targeting, even for ES 1.0 (via the ARB_vertex_buffer_object extension).
You can create a big VBO with world-space geometry (i.e. resolve scaling and translation on the CPU) and draw that; see the sketch below. Even if you update this VBO each frame, in my experience it's fast enough. Sending thousands of small draw calls is almost always the slowest option.
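A rough sketch of that approach with Android's ES 1.1 bindings; the 2D-vertex layout and the dynamic re-upload each frame are assumptions for illustration, not the only way to lay out the buffer:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import android.opengl.GLES11;

// Sketch: batch many pre-transformed 2D vertices into one VBO and draw them
// with a single glDrawArrays call (ES 1.1 fixed-function pipeline).
public class BatchedGeometry {
    private final int[] vboId = new int[1];
    private int vertexCount;

    public void init() {
        GLES11.glGenBuffers(1, vboId, 0);
    }

    /** worldSpaceXY holds x,y pairs already scaled/translated on the CPU. */
    public void upload(float[] worldSpaceXY) {
        vertexCount = worldSpaceXY.length / 2;
        FloatBuffer data = ByteBuffer
                .allocateDirect(worldSpaceXY.length * 4)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
        data.put(worldSpaceXY).position(0);

        GLES11.glBindBuffer(GLES11.GL_ARRAY_BUFFER, vboId[0]);
        GLES11.glBufferData(GLES11.GL_ARRAY_BUFFER,
                worldSpaceXY.length * 4, data, GLES11.GL_DYNAMIC_DRAW);
    }

    public void draw() {
        GLES11.glBindBuffer(GLES11.GL_ARRAY_BUFFER, vboId[0]);
        GLES11.glEnableClientState(GLES11.GL_VERTEX_ARRAY);
        GLES11.glVertexPointer(2, GLES11.GL_FLOAT, 0, 0);   // offset into the bound VBO
        GLES11.glDrawArrays(GLES11.GL_TRIANGLES, 0, vertexCount);
        GLES11.glDisableClientState(GLES11.GL_VERTEX_ARRAY);
        GLES11.glBindBuffer(GLES11.GL_ARRAY_BUFFER, 0);
    }
}
```

The point is that all translation and scaling has already been baked into the vertex data on the CPU, so the whole batch goes out in a single draw call.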
Moving from a fixed pipeline to a full vertex/fragment shader pipeline is not easy at all. It requires a good amount of 3D knowledge, so be careful and write a prototype first (world-space or object-space lighting? how to transform normals? ...).
Vivien

3d Reconstruction from live video feed

I was wondering if anyone has knowledge of the reconstruction of 3D objects from a live video feed. Does anyone have any Java-based examples or papers that I could be linked to? I have already read up on the algorithms used to produce such 3D objects. If possible, I would like to construct something like the program demonstrated in the link provided below.
Currently my program logs live video feed.
http://www.youtube.com/watch?v=brkHE517vpo&feature=related
3D reconstruction of an object from a single point of view is not really possible. You have two basic alternatives: a) have a stereo camera system capturing the object, or b) have only one camera but rotate the object (so you get different points of view of it), like the one in the video. This is a basic concept related to epipolar geometry.
There are other alternatives, but they are more intrusive. Some time ago I worked on a 3D scanner based on a single camera and a laser beam.
For this I used OpenCV, which is C++ code, but I think there are now ports for Java. Keep in mind that 3D reconstruction is not an easy task, and the resulting app will have to be heavily parametrized to achieve good results.
This isn't a solved problem - certain techniques can do it to a certain degree under the right conditions. For example, the linked video shows a fairly simple flat-faced object being analysed while moving slowly under relatively even lighting conditions.
The effectiveness of such techniques can also be considerably improved if you can get a second (stereo vision) video feed.
But you are unlikely to get it to work for general video feeds. Problems such as uneven lighting, objects moving in front of the camera, fast motion, focus issues etc. make the problem extremely hard to solve. The best you can probably hope for is a partial reconstruction which can then be reviewed and manually edited to correct the inevitable mistakes.
JavaCV and related projects are probably the best resource if you want to explore further. But don't get your hopes too high for a magic out-of-the-box solution!

What is the best way to read, represent and render map data?

I am interested in writing a simplistic navigation application as a pet project. After searching around for free map data I have settled on the US Census Bureau TIGER 2007 Line/Shapefile map data. The data is split into zip files for individual counties, and I've downloaded a single county's map data for my area.
What would be the best way to read in this map-data into a useable format?
How should I:
Read in these files
Parse them - Regular expression or some library that can already parse these Shapefiles?
Load the data into my application - Should I load the points directly into some data structure in memory? Use a small database? I have no need to persist the map data once the application is closed; the user can load the Shapefile again.
What would be the best way to render the map once I have read in the Shapefile data?
Ideally I'd like to be able to read in a counties map data shapefile and render all the poly-lines onto the screen and allow rotating and scaling.
How should I:
Convert lat/lon points to screen coordinates? - As far as I know the Shapefile uses longitude and latitude for its points. So obviously I'm going to have to convert these somehow to screen coordinates to display the map features.
Render the map data (A series of polylines for roads, boundaries, etc) in a way that I can easily rotate and scale the entire map?
Render my whole map as a series of "tiles" so only the features/lines within the viewing area are rendered?
Ex. of TIGER data rendered as a display map:
Any experience and insight into the best way to read in these files, how I should represent them in my program (database, in-memory data structure), and how I should render the map data on screen (with rotation/scaling) would be appreciated.
EDIT: To clarify, I do not want to use any Google or Yahoo maps API. Similarly, I don't want to use OpenStreetMap. I'm looking for a more from-scratch approach than utilizing those apis/programs. This will be a desktop application.
First, I recommend that you use the 2008 TIGER files.
Second, as others point out there are a lot of projects out there now that already read in, interpret, convert, and use the data. Building your own parser for this data is almost trivial, though, so there's no reason to go through another project's code and try to extract what you need unless you plan on using their project as a whole.
If you want to start from the lower level
Parsing
Building your own TIGER parser (reasonably easy; it's just a DB of line segments) and building a simple renderer on top of that (lines, polygons, letters/names) is also going to be fairly easy. You'll want to look at various map projection types for the render phase. The most frequently used (and therefore most familiar to users) is the Mercator projection; it's fairly simple and fast. You might want to play with supporting other projections.
This will provide a bit of 'fun' in terms of seeing how to project a map, and how to reverse that projection (say a user clicks on the map, you want to see the lat/lon they clicked - requires reversing the current projection equation).
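As a concrete illustration, a spherical Mercator forward/inverse pair might look like the sketch below; the scale constant is arbitrary and the ellipsoid is ignored, so treat it as a starting point rather than a finished projection:

```java
// Sketch: spherical Mercator projection and its inverse.
public final class Mercator {
    private static final double SCALE = 6378137.0; // arbitrary scale (here: Earth radius in metres)

    /** Forward projection: lat/lon in degrees -> x/y in projected units. */
    public static double[] project(double latDeg, double lonDeg) {
        double lat = Math.toRadians(latDeg);
        double lon = Math.toRadians(lonDeg);
        double x = SCALE * lon;
        double y = SCALE * Math.log(Math.tan(Math.PI / 4.0 + lat / 2.0));
        return new double[] { x, y };
    }

    /** Inverse projection: x/y back to lat/lon in degrees (e.g. for map clicks). */
    public static double[] unproject(double x, double y) {
        double lon = x / SCALE;
        double lat = 2.0 * Math.atan(Math.exp(y / SCALE)) - Math.PI / 2.0;
        return new double[] { Math.toDegrees(lat), Math.toDegrees(lon) };
    }
}
```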
Rendering
When I developed my renderer, I decided to base my window on a fixed size (embedded device) and a fixed magnification. This meant that I could center the map at a lat/lon; with the center pixel equal to the center lat/lon at a given magnification, and given the Mercator projection, I could calculate which pixel represented each lat/lon, and vice versa.
Some programs instead allow the window to vary and, instead of using magnification and a fixed point, use two fixed points (often the upper-left and lower-right corners of a rectangle defining the window). In this case it becomes trivial to determine the pixel to lat/lon transfer: it's just a few interpolation calculations. Rotating and scaling make this transfer function a little more complex, but not considerably so; it's still a rectangular window with interpolation, except that the window corners don't need to be in any particular orientation with respect to north. This adds a few corner cases (you can turn the map inside out and view it as if from inside the earth, for instance), but these aren't onerous and can be dealt with as you work on it.
Once you've got the lat/lon to pixel transfer done, rendering lines and polygons is fairly simple except for the usual graphics issues (such as edges of lines or polygons overlapping inappropriately, anti-aliasing, etc.). But rendering a basic, ugly map such as is done by many open source renderers is fairly straightforward.
You'll also be able to play with distance and great circle calculations. For instance, a nice rule of thumb is that every degree of latitude or longitude at the equator is approximately 111.1 km, but a degree of longitude shrinks as you get closer to either pole, while a degree of latitude stays at roughly 111.1 km.
Storage and Structures
How you store and refer to the data, however, depends greatly on what you plan on doing with it. A lot of difficult problems arise if you want to use the same database structure for demographics vs. routing: a given database structure and indexing will be fast for one and slow for the other.
Using zipcodes and loading only the nearby zipcodes works for small map rendering projects, but if you need a route across the country you need a different structure. Some implementations have 'overlay' databases which only contain major roads and snaps routes to the overlay (or through multiple overlays - local, metro, county, state, country). This results in fast, but sometimes inefficient routing.
Tiling
Tiling your map is actually not easy. At lower magnifications you can render a whole map and cut it up. At higher magnifications you can't render the whole thing at once (due to memory/space constraints), so you have to slice it up.
Cutting lines at tile boundaries so that you can render individual tiles gives less-than-perfect results. Often lines are rendered beyond the tile boundary (or at least the data for the line's end is kept, even though rendering stops once it falls off the edge); this reduces the error of lines looking like they don't quite match up as they travel across tiles.
You'll see what I'm talking about as you work on this problem.
It isn't trivial to find the data that goes into a given tile, either: a line may have both ends outside a given tile yet still travel across it. You'll need to consult graphics books about this (Michael Abrash's book is the seminal reference, now freely available online). While it talks mostly about gaming, the windowing, clipping, polygon edges, collision detection, etc. all apply here.
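One naive but workable way to decide which segments belong to which tile is to test each segment against each tile's bounding box. The sketch below leans on java.awt.geom for the segment/rectangle test and is deliberately brute force; a real implementation would index the segments first rather than scanning them per tile.

```java
import java.awt.geom.Line2D;
import java.awt.geom.Rectangle2D;
import java.util.ArrayList;
import java.util.List;

// Sketch: assign line segments to every tile whose bounding box they cross.
// A segment with both endpoints outside a tile can still belong to it.
public class TileBinner {
    public static List<Line2D.Double>[][] bin(List<Line2D.Double> segments,
                                              Rectangle2D world,
                                              int tilesX, int tilesY) {
        @SuppressWarnings("unchecked")
        List<Line2D.Double>[][] tiles = new List[tilesX][tilesY];
        double tw = world.getWidth() / tilesX;
        double th = world.getHeight() / tilesY;

        for (int i = 0; i < tilesX; i++) {
            for (int j = 0; j < tilesY; j++) {
                Rectangle2D tile = new Rectangle2D.Double(
                        world.getX() + i * tw, world.getY() + j * th, tw, th);
                tiles[i][j] = new ArrayList<>();
                for (Line2D.Double s : segments) {
                    // intersectsLine handles the "both ends outside, crosses the
                    // middle" case that a simple endpoint test would miss.
                    if (tile.intersectsLine(s)) {
                        tiles[i][j].add(s);
                    }
                }
            }
        }
        return tiles;
    }
}
```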
However, you might want to play at a higher level.
Once you have the above done (either by adapting an existing project, or doing the above yourself) you may want to play with other scenarios and algorithms.
Reverse geocoding is reasonably easy. Input lat/lon (or click on map) and get the nearest address. This teaches you how to interpret addresses along line segments in TIGER data.
Basic geocoding is a hard problem. Writing an address parser is a useful and interesting project, and then converting that into lat/lon using the TIGER data is non-trivial, but a lot of fun. Start out simple and small by requiring exact name and format matching, and then start to look into 'like' matching and phonetic matching. There's a lot of research in this area - look at search engine projects for some help here.
Finding the shortest path between two points is a non-trivial problem. There are many, many algorithms for doing that, most of which are patented. I recommend that if you try this go with an easy algorithm of your own design, and then do some research and compare your design to the state of the art. It's a lot of fun if you're into graph theory.
Following a path and pre-emptively giving instructions is not as easy as it looks on first blush. Given a set of instructions with an associated array of lat/lon pairs, 'follow' the route using external input (GPS, or simulated GPS) and develop an algorithm that gives the user instructions as they approach each real intersection. Notice that there are more lat/lon pairs than instructions due to curving roads, etc, and you'll need to detect direction of travel and so forth. Lots of corner cases you won't see until you try to implement it.
Point of interest search. This one is interesting: you need to find the current location, and all the points of interest (not part of TIGER, so make your own or get another source) within a certain distance (as the crow flies, or harder, driving distance) of the origin. The catch is that you have to convert the POI database into a format that is easy to search in this circumstance. You can't afford to go through millions of entries, do the distance calculation (sqrt(x^2 + y^2)), and return the results; you need some method or algorithm to cut the amount of data down first, such as the bounding-box prefilter sketched below.
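A sketch of the usual first cut: a cheap bounding-box prefilter before the exact distance test. In practice the prefilter would be served by a spatial index or a database index rather than the linear scan shown here, and the Poi class is purely illustrative.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: cheap bounding-box prefilter before the exact distance test.
public class PoiSearch {
    public static class Poi {
        final String name;
        final double x, y;   // already-projected coordinates (e.g. metres)
        Poi(String name, double x, double y) { this.name = name; this.x = x; this.y = y; }
    }

    /** Return POIs within radius of (cx, cy), as the crow flies. */
    public static List<Poi> near(List<Poi> all, double cx, double cy, double radius) {
        List<Poi> result = new ArrayList<>();
        for (Poi p : all) {
            // Reject most candidates with two cheap comparisons...
            if (Math.abs(p.x - cx) > radius || Math.abs(p.y - cy) > radius) {
                continue;
            }
            // ...and only do the exact sqrt(x^2 + y^2) test on the survivors.
            double dx = p.x - cx, dy = p.y - cy;
            if (Math.sqrt(dx * dx + dy * dy) <= radius) {
                result.add(p);
            }
        }
        return result;
    }
}
```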
Traveling salesman. Routing with multiple destinations. Just a harder version of regular routing.
You can find a number of links to many projects and sources of information on this subject here.
Good luck, and please publish whatever you do, no matter how rudimentary or ugly, so others can benefit!
-Adam
SharpMap is an open-source .NET 2.0 mapping engine for WinForms and ASP.NET. This may provide all the functionality that you need. It deals with most common GIS vector and raster data formats including ESRI shapefiles.
One solution is:
a geospatial server like MapServer, GeoServer, or deegree (open source).
They can read and serve shapefiles (and many other things). For example, GeoServer (when installed) serves data from US Census Bureau TIGER shapefiles as a demo.
a JavaScript cartographic library like OpenLayers (see the examples on its site).
There are plenty of examples on the web using this solution.
Funny question. Here's how I do it.
I gather whatever geometry I need in whatever formats they come in. I've been pulling data from USGS, so that amounts to a bunch of:
SHP Files (ESRI Shapefile Technical Description)
DBF Files (Xbase Data file (*.dbf))
I then wrote a program that "compiles" those shape definitions into a form that is efficient to render. This means doing any projections and data format conversions that are necessary to efficiently display the data. Some details:
For a 2D application, you can use whatever projection you want: Map Projections.
For 3D, you want to convert those latitude/longitude values into 3D coordinates. Here is some math on how to do that: transformation from spherical coordinates to normal rectangular coordinates.
Break up all the primitives into a quadtree/octree (2D/3D). Leaf nodes in this tree contain references to all geometry that intersects that leaf node's (axis-aligned) bounding box. (This means that a piece of geometry can be referenced more than once.) A stripped-down sketch of such a tree follows after these details.
The geometry is then split into a table of vertices and a table of drawing commands. This is an ideal format for OpenGL. Commands can be issued via glDrawArrays using vertex buffers (Vertex Buffer Objects).
A general visitor pattern is used to walk the quadtree/octree. Walking involves testing whether the visitor intersects the given nodes of the tree until a leaf node is encountered. Visitors include: drawing, collision detection, and selection. (Because the tree leaves can contain duplicate references to geometry, the walker marks nodes as being visited and ignores them thereafter. These marks have to be reset or otherwise updated before doing the next walk.)
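A stripped-down 2D sketch of that structure (the Geometry interface, capacity, and depth limit are arbitrary choices for illustration; an octree is the same idea with eight children):

```java
import java.awt.geom.Rectangle2D;
import java.util.ArrayList;
import java.util.List;

// Sketch: a minimal quadtree whose leaf nodes hold references to every piece
// of geometry whose bounding box overlaps the leaf's bounding box.
public class QuadTree {
    public interface Geometry { Rectangle2D bounds(); }

    private static final int CAPACITY = 16;
    private static final int MAX_DEPTH = 8;

    private final Rectangle2D box;
    private final int depth;
    private final List<Geometry> items = new ArrayList<>();
    private QuadTree[] children;            // null while this node is a leaf

    public QuadTree(Rectangle2D box) { this(box, 0); }

    private QuadTree(Rectangle2D box, int depth) {
        this.box = box;
        this.depth = depth;
    }

    /** Insert into every leaf the geometry's bounds overlap (duplicates allowed). */
    public void insert(Geometry g) {
        if (!overlaps(box, g.bounds())) return;
        if (children != null) {
            for (QuadTree child : children) child.insert(g);
            return;
        }
        items.add(g);
        if (items.size() > CAPACITY && depth < MAX_DEPTH) {
            split();
            for (Geometry old : items) {
                for (QuadTree child : children) child.insert(old);
            }
            items.clear();
        }
    }

    /** Collect everything whose leaf overlaps the query rectangle (the "walk"). */
    public void query(Rectangle2D area, List<Geometry> out) {
        if (!overlaps(box, area)) return;
        if (children == null) {
            out.addAll(items);              // caller must de-duplicate (items can repeat)
        } else {
            for (QuadTree child : children) child.query(area, out);
        }
    }

    private void split() {
        double hw = box.getWidth() / 2, hh = box.getHeight() / 2;
        children = new QuadTree[] {
            new QuadTree(new Rectangle2D.Double(box.getX(),      box.getY(),      hw, hh), depth + 1),
            new QuadTree(new Rectangle2D.Double(box.getX() + hw, box.getY(),      hw, hh), depth + 1),
            new QuadTree(new Rectangle2D.Double(box.getX(),      box.getY() + hh, hw, hh), depth + 1),
            new QuadTree(new Rectangle2D.Double(box.getX() + hw, box.getY() + hh, hw, hh), depth + 1),
        };
    }

    private static boolean overlaps(Rectangle2D a, Rectangle2D b) {
        // Unlike Rectangle2D.intersects, this treats zero-width/height bounds
        // (e.g. axis-aligned line segments) as overlapping when they touch.
        return a.getMaxX() >= b.getMinX() && b.getMaxX() >= a.getMinX()
            && a.getMaxY() >= b.getMinY() && b.getMaxY() >= a.getMinY();
    }
}
```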
Using a spatial partitioning system (one of the trees) and a drawing-efficient representation is crucial to achieving high frame rates. I have found that in these types of applications you want your frame rate as high as possible, 20 fps at a minimum. Not to mention the fact that lots of performance headroom will give you lots of opportunities to create a better-looking map. (Mine's far from good looking, but will get there some day.)
The spatial partitioning helps rendering performance by reducing the number of draw commands sent to the processor. However, there may come a time when the user actually wants to view the entire dataset (perhaps an aerial view). In this case you need a level-of-detail control system. Since my application deals with streets, I give priority to highways and larger roads. My drawing code knows roughly how many primitives I can draw before my frame rate goes down, and the primitives are sorted by this priority. I draw only the first x items, where x is the number of primitives I can draw at my desired frame rate.
The rest is camera control and animation of whatever data you want to display.
Here are some examples of my existing implementation:
Picture http://seabusmap.com/assets/Picture%205.png Picture http://seabusmap.com/assets/Picture%207.png
For storing TIGER data locally, I would choose PostgreSQL with the PostGIS tools.
They have an impressive collection of tools; for you in particular, the Tiger Geocoder offers a good way of importing and using the TIGER data.
You will need to take a look at the tools that interact with PostGIS, most likely some sort of map server.
from http://postgis.refractions.net/documentation/:
There are now several open source tools which work with PostGIS. The uDig project is working on a full read/write desktop environment that can work with PostGIS directly. For internet mapping, the University of Minnesota Mapserver can use PostGIS as a data source. The GeoTools Java GIS toolkit has PostGIS support, as does the GeoServer Web Feature Server. GRASS supports PostGIS as a data source. The JUMP Java desktop GIS viewer has a simple plugin for reading PostGIS data, and the QGIS desktop has good PostGIS support. PostGIS data can be exported to several output GIS formats using the OGR C++ library and commandline tools (and of course with the bundled Shape file dumper). And of course any language which can work with PostgreSQL can work with PostGIS -- the list includes Perl, PHP, Python, TCL, C, C++, Java, C#, and more.
Edit: despite MapServer having the word SERVER in its name, it will be usable in a desktop environment.
Though you have already decided to use the TIGER data, you might be interested in OSM (OpenStreetMap), because OSM has a complete import of the TIGER data in it, enriched with user-contributed data. If you stick to the TIGER format your app will be useless to international users, whereas with OSM you get TIGER and everything else at once.
OSM is an open project featuring a collaboratively edited free world map. You can get all this data as well-structured XML: either query for a region, or download the whole world in one large file.
There are some map renderers for OSM available in various programming languages, most of them open source, but still there is much to be done.
There is also an OSM routing service available. It has a web interface and might also be queryable via a web service API. Again, it's not all finished. Users could definitely use a desktop or mobile routing application built on top of this.
Even if you don't decide to go with that project, you can get lots of inspiration from it. Just have a look at the project wiki and at the sources of the various software projects which are involved (you will find links to them inside the wiki).
You could also work with Microsoft's Virtual Earth mapping application and API, or use Google's API. I have always programmed commercially with ESRI products and have not played with the open APIs that much.
Also, you might want to look at Maker! and Finder! They are relatively new programs, but I think they are free. They might be limited on embedding the data. Maker can be found here.
The problem is that spatial processing is fairly new at the non-commercial scale.
If you don't mind paying for a solution, Safe Software produces a product called FME. This tool will help you translate data from almost any format to just about any other, including KML (the Google Earth format), or render it as a JPEG (or a series of JPEGs). After converting the data you can embed Google Earth into your application using their API, or just display the tiled images.
As a side note, FME is a very powerful platform, so while doing your translations you can add or remove parts of the data that you don't necessarily need, merge sources if you have more than one, convert coordinates (I don't remember exactly what Google Earth uses), and store backups in a database. Seriously, if you're willing to shell out a few bucks you should look into this.
You can also create flags (much like in your sample map) which contain a location (where to put it) and other data/comments about the location. These flags come in many shapes and sizes.
One simplification over a Mercator or other projection is to assume a constant conversion factor for the latitude and longitude. Multiply the degrees of latitude by 69.172 miles; for the longitude, pick the middle latitude of your map area and multiply (180-longitude) by cosine(middle_latitude)*69.172. Once you've converted to miles, you can use another set of conversions to get to screen coordinates.
This is what worked for me back in 1979.
My source for the number of miles per degree.
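A tiny sketch of that flat-map conversion; the constants and the (180 - longitude) convention follow the description above, and it is only a rough local approximation:

```java
// Sketch: constant-factor conversion of lat/lon (degrees) to miles,
// using one cosine taken at the middle latitude of the map area.
public final class FlatProjection {
    private static final double MILES_PER_DEGREE = 69.172;
    private final double lonScale;

    public FlatProjection(double middleLatitudeDeg) {
        lonScale = Math.cos(Math.toRadians(middleLatitudeDeg)) * MILES_PER_DEGREE;
    }

    /** x/y in miles; the (180 - lon) convention mirrors the answer above. */
    public double[] toMiles(double latDeg, double lonDeg) {
        double x = (180.0 - lonDeg) * lonScale;
        double y = latDeg * MILES_PER_DEGREE;
        return new double[] { x, y };
    }
}
```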
When I gave this answer the question was labeled
"What would be the best way to render a Shapefile (map data) with polylines in .Net?"
Now it is a different question but I leave my answer to the original question.
I wrote a .NET version that could draw vector data (such as the geometry from a shp file) using plain GDI+ in C#. It was quite fun.
The reason was that we needed to handle different versions of geometries and attributes with a lot of additional information, so we could not use a commercial map component or an open source one.
The main thing when doing this is to establish a viewport, translate/transform the WGS84 coordinates down to scaled GDI+ x,y coordinates, and hold off on projection if you even need to reproject at all.
One solution is to use MapXtreme. They have APIs for Java and C#. The API is able to load these files and render them.
For Java:
http://www.mapinfo.com/products/developer-tools/desktop%2c-mobile-%26-internet-offering/mapxtreme-java
For .NET:
http://www.mapinfo.com/products/developer-tools/desktop%2c-mobile-%26-internet-offering/mapxtreme-2008
I used this solution in a desktop application and it worked well. It offers a lot more than just rendering information.
Now, doing this from scratch could take quite a while. They do have an evaluation version that you can download. I think it just prints "MAPXTREME" over the map as a watermark, but it is completely usable otherwise.
