Create a Synthetic Vision System - Java

I would like to add this feature to my android app:
https://en.wikipedia.org/wiki/Synthetic_vision_system
In short, I believe it is essentially a 3D rendering of a terrain heightmap. I specifically need help with rendering the terrain; the rest of the display I can accomplish.
After a day of googling I appear to be no closer. My research points to using OpenGL, heightmaps and SRTM elevation data, but I have no clue how to tie it all together, and none of the Java examples are Android specific.
Alternatively, maybe I could use OpenStreetMap with a tile overlay, but I can't establish whether that is possible in 3D.
The app will be a moving map based on GPS position of the aircraft. As the aircraft moves over the earth the terrain ahead will be updated.
Can someone point me in the right direction?

I recommend using a vector format with a few layers: for example, a depth (elevation) layer, a landscape-type layer, and an objects layer. To build 3D geometry from this data, divide the map into small tiles and load into memory only the data for the visible area. Your 3D builder then parses this data. For a simple OpenGL renderer: create a mesh the size of a tile with enough vertices, parse the depth layer and displace each vertex along the Z-axis accordingly, then set the vertex colors as specified in the type layer. Finally, create and place objects according to the object layer (these are pre-built meshes). Once the needed tiles are built, pass them to the renderer.
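To make the mesh-building step concrete, here is a minimal sketch (class and parameter names are my own invention, not from any particular library): it turns a grid of elevation samples, e.g. read from an SRTM tile, into vertex and index buffers ready for OpenGL ES on Android.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import java.nio.ShortBuffer;

/** Builds a GL-ready triangle mesh from one patch of elevation samples.
 *  Grid size, cell size, and vertical scale are illustrative. */
public class TerrainTile {
    public final FloatBuffer vertices;  // x, y, z per vertex
    public final ShortBuffer indices;   // two triangles per grid cell
    public final int indexCount;

    public TerrainTile(float[][] heights, float cellSize, float zScale) {
        int rows = heights.length, cols = heights[0].length;

        FloatBuffer v = ByteBuffer.allocateDirect(rows * cols * 3 * 4)
                .order(ByteOrder.nativeOrder()).asFloatBuffer();
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++) {
                v.put(c * cellSize);            // x: east
                v.put(r * cellSize);            // y: north
                v.put(heights[r][c] * zScale);  // z: up (exaggerated terrain)
            }
        v.flip();

        indexCount = (rows - 1) * (cols - 1) * 6;
        ShortBuffer i = ByteBuffer.allocateDirect(indexCount * 2)
                .order(ByteOrder.nativeOrder()).asShortBuffer();
        for (int r = 0; r < rows - 1; r++)
            for (int c = 0; c < cols - 1; c++) {
                short tl = (short) (r * cols + c);   // top-left corner of cell
                short tr = (short) (tl + 1);
                short bl = (short) (tl + cols);
                short br = (short) (bl + 1);
                i.put(tl).put(bl).put(tr);           // first triangle
                i.put(tr).put(bl).put(br);           // second triangle
            }
        i.flip();

        vertices = v;
        indices = i;
    }
}
```

You would upload these buffers with GLES20.glBufferData and draw with glDrawElements. Note that 16-bit indices cap a patch at 65,536 vertices, which is one more reason to split a full SRTM3 tile (1201x1201 samples) into smaller patches and only build the ones near the aircraft.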

Related

Square BoundingBox with OpenGL JOGL Java

I'm trying to make a project in OpenGL using JOGL.
If you look at my image (http://imgur.com/DDHoXEz), I have four viewports with different projections, but all the teapots are out of scale. I want to draw something like a bounding box, a square with side 1 that contains all the objects in the viewports, so I can use the square as a scale reference.
Any tips?
Unless you're going to keep using the stock teapot model in real programs (which you shouldn't), I don't think this is worth spending your time on. Once you start using your own models, you will have direct control over their scale.
I would recommend at this point learning the different drawing methods in OpenGL (e.g., GL_TRIANGLE_FAN, GL_LINE_LOOP), then moving on to vertex arrays and perhaps writing an OBJ importer. I can point you in the right direction if you'd like.
Here is a good place to get started on different drawing techniques.
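For instance, here is a minimal sketch of the unit square you described, using GL_LINE_LOOP with the legacy fixed-function GL2 profile (the same profile the teapot examples use); the class and method names are just illustrative:

```java
import com.jogamp.opengl.GL2;

/** Draws a wireframe unit square in the XY plane as a scale reference. */
public class ScaleReference {
    public static void drawUnitSquare(GL2 gl) {
        gl.glColor3f(1f, 0f, 0f);        // red outline
        gl.glBegin(GL2.GL_LINE_LOOP);    // closed outline, no fill
        gl.glVertex3f(-0.5f, -0.5f, 0f);
        gl.glVertex3f( 0.5f, -0.5f, 0f);
        gl.glVertex3f( 0.5f,  0.5f, 0f);
        gl.glVertex3f(-0.5f,  0.5f, 0f);
        gl.glEnd();
    }
}
```

Call this from your display() callback in each viewport; any model whose extents you know can then be scaled relative to the square.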
Happy coding!

Scroll a java 2d game tile map, while generating additional tiles

I finally decided to learn 2D (for now) Java game programming. I'm working on a game that has a central object the user guides with the directional keys. I have that working perfectly, cobbled together from examples and tutorials I've found.
I'm using this method of generating colored background tiles, but I'd like to scroll (move) the background as the object the user is steering reaches the window edges. I'm fairly sure I can make that work, and I have the basics in place, but I can't find a good tutorial or demonstration of a way to keep generating additional tiles to fill in the space the user is moving to.
At this point this is purely background, and I have no need to save the exact tiles generated, though eventually I would like that ability. I'm sure I'll have to divide the world into "chunks" the way Minecraft does.
But for now: how can I continually fill in the area with the same pattern? Or is there a different way of creating the tiles that's better suited to this?
Instead of a solid color you can use a TexturePaint, as shown here. Let your model contain a reference to the desired texture for each grid cell. Let your view use a flyweight pattern for rendering, as illustrated here.
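For the "continually fill in" part, a common trick is to never store tiles at all: derive each tile's appearance deterministically from its grid coordinates, and each frame draw only the tiles the camera can see. A rough Java2D sketch (class and method names are mine):

```java
import java.awt.Color;
import java.awt.Graphics;
import java.util.Random;

/** Paints an endless grid of colored tiles for whatever region the camera
 *  currently shows. Tiles are generated on the fly but deterministically,
 *  so scrolling back shows the same pattern with no storage needed. */
public class TilePainter {
    static final int TILE = 32;  // tile size in pixels

    /** cameraX/cameraY are the world-space pixel offsets of the view. */
    public static void paintTiles(Graphics g, int cameraX, int cameraY,
                                  int viewWidth, int viewHeight) {
        int firstCol = Math.floorDiv(cameraX, TILE);
        int firstRow = Math.floorDiv(cameraY, TILE);
        int lastCol  = Math.floorDiv(cameraX + viewWidth,  TILE);
        int lastRow  = Math.floorDiv(cameraY + viewHeight, TILE);

        for (int row = firstRow; row <= lastRow; row++) {
            for (int col = firstCol; col <= lastCol; col++) {
                // Seed a RNG from the tile coordinates: the same tile
                // always produces the same color.
                Random rnd = new Random(row * 341873128712L
                                      + col * 132897987541L);
                g.setColor(new Color(rnd.nextInt(0x1000000)));
                g.fillRect(col * TILE - cameraX, row * TILE - cameraY,
                           TILE, TILE);
            }
        }
    }
}
```

When you later want persistent "chunks", swap the seeded Random for a lookup into stored chunk data keyed by the tile coordinates.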

How to transform a mesh?

I am very new to developing applications with LibGDX and 3D apps in general, and I would like to know how to move around a 3D object that I imported from Blender. I checked the Mesh class for a transform method but couldn't find one.
Moving objects around in 3D is normally done by manipulating a transformation matrix. LibGDX doesn't seem to be open source anymore (for one day, April 1st ;P) so I can't check how it's done there, but I suspect the Mesh class isn't the right place to look. A mesh normally only represents a shape, without any position. You would typically create an object/entity, assign a mesh to it, and then change the transformation of that entity.
As I said, transforming entities is usually done either by calling move/scale/rotate methods on the entity or by building a transformation matrix yourself and loading it into the graphics pipeline. Modern 3D applications normally do this in shaders, uploading the transformation matrix as a shader uniform.
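As a rough illustration of the second approach in LibGDX (the uniform name u_worldTrans and the wrapper class are my own invention; your vertex shader would need a matching mat4 uniform):

```java
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.Mesh;
import com.badlogic.gdx.graphics.glutils.ShaderProgram;
import com.badlogic.gdx.math.Matrix4;
import com.badlogic.gdx.math.Vector3;

/** Positions a mesh by uploading a world matrix to the shader. */
public class MeshMover {
    private final Matrix4 worldTransform = new Matrix4();

    public void render(Mesh mesh, ShaderProgram shader,
                       float x, float y, float z, float yawDegrees) {
        worldTransform.setToTranslation(x, y, z)       // place the entity
                      .rotate(Vector3.Y, yawDegrees);  // then turn it

        shader.bind();  // recent LibGDX; older versions use begin()/end()
        shader.setUniformMatrix("u_worldTrans", worldTransform);
        mesh.render(shader, GL20.GL_TRIANGLES);
    }
}
```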

Collidable color Java/Android game

I'm trying to develop a side-scrolling game for Android involving many, many textures, so I was wondering whether I could create a separate layer, all in a single unique color (very similar to a green-screen effect), make it collidable, and keep it invisible to the player.
(foreground layer) visual image
(2nd layer) collidable copy of the foreground layer, with the main character
(3rd layer) background image
I'm not sure if this is possible or how to implement it efficiently; the idea just came to me randomly one day.
Thanks in advance
I assume your game is entirely 2D, using either bit-blits or quads (two 3D triangles always screen-aligned) as sprites. Over the years there have been lots of schemes for doing collision detection using the actual image data, whether from the background or the sprite definition itself. If you have direct access to video RAM, reading one pixel position can quickly tell if you've collided or not, giving pixel-wise accuracy not possible with something like bounding boxes. However, there are issues greatly complicating this: figuring out what you've collided with, or if your speed lands you many pixels into a graphical object, or if it is thin and you pass through it, or how to determine an angle of deflection, etc.
Using 3D graphics hardware and quads, you could potentially change render states and render in monochrome to an off-screen texture, yielding the second collidable layer you described. But that texture then resides in graphics memory, which isn't freely or easily accessible the way system memory is, and moving the data back and forth over the bus is slow. It's also costly, requiring an entire additional render pass (worst case, halving your frame rate), plus all that extra graphics RAM used up, just to do collision detection. Much better schemes exist, especially ones using data structures.
It's better to use bounding boxes, or even a hierarchy of sub-bounding boxes. After that, you can determine if you've landed on the other side of, say, a sloped line, requiring only division/addition operations. Your game already manages all the sprites you're moving, so integrate some data structures to help your collision detection. For instance, I just suggested in another thread the use of linked lists to limit the objects you must collision-detect against one another.
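For reference, the core axis-aligned bounding-box test is only a few comparisons; a minimal sketch:

```java
/** Minimal axis-aligned bounding box overlap test for 2D sprites. */
public class AABB {
    public float x, y, width, height;  // top-left corner plus size

    public AABB(float x, float y, float width, float height) {
        this.x = x; this.y = y; this.width = width; this.height = height;
    }

    /** True if the two boxes overlap on both axes. */
    public boolean intersects(AABB other) {
        return x < other.x + other.width  && other.x < x + width
            && y < other.y + other.height && other.y < y + height;
    }
}
```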
Ideas like yours might not always work, but your continual creative thinking will lead to ones that do. Sometimes you just have to try coding them to find out!

Getting boundary information from a 3d array

Hey, I'm currently trying to extract information from a 3D array, where each entry represents a coordinate, in order to draw something from it. The problem is that the array is ridiculously large (and there are several of them), meaning I can't actually draw all of it.
What I'm trying to accomplish, then, is to draw a representation of just the outside coordinates, a shell of the array if you like. The array is not full; it can have large empty spaces with only a few pixels set, or large clusters of pixel data grouped together. I do not know what kind of shape to expect (it could be a simple cube or a complex concave mesh), and I am struggling to come up with an algorithm to effectively extract the border. The array effectively stores a set of points in 3D space.
I thought of creating six 2D meshes (one for each side of the 3D array), getting the shallowest point found at each position, and then drawing them separately. As I said, however, this 3D shape could be concave, which creates problems with this approach. Imagine a cone with a circle on top (said circle bigger than the cone's base). While the top and side meshes would get the correct depth info out of the shape, the bottom mesh would connect the base to the circle through vertical lines, making me effectively lose the conical shape.
Then I thought of analysing the array slice by slice and creating two meshes from each slice's data. I believe this should work for any type of shape, but I'm struggling to find an algorithm that accurately gives me the border info for each slice. Once again, if you just try to create height maps from the slices, you run into problems with any concavities. I also thought of some sort of edge-tracking algorithm, but the array does not provide continuous data, and there is almost certainly no continuous edge along each slice.
I tried looking into volume rendering, as used in medical imaging and such, as it deals with similar problems to the one I have, but couldn't really find anything that I could use.
If anyone has any experience with this sort of problem, or any valuable input, please point me in the right direction.
P.S. I would prefer to get a closed representation of the shell, hence my earlier 2D-mesh approach. However, an approach that simply gives me the shell points, without any connections between them, would still be extremely helpful.
Thank you,
Ze
I would start by reviewing your data structure. As you observed, the array does not maintain any obvious spatial relationships between points. An octree is a pretty good representation for data like you described. Depending upon the complexity of your point set, you may be able to find the crust using just the octree, assuming you have some connectivity between nearby points.
Alternatively, you may then turn to more rigorous algorithms like raycasting or marching cubes.
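To illustrate the octree idea, a minimal point octree (capacity, guard values, and field names are arbitrary choices):

```java
import java.util.ArrayList;
import java.util.List;

/** Minimal point octree: an axis-aligned cube that subdivides once it
 *  holds more than CAPACITY points. */
class Octree {
    static final int CAPACITY = 8;
    static final float MIN_HALF = 1e-4f;   // stop splitting tiny nodes

    final float cx, cy, cz, half;          // cube centre and half-size
    final List<float[]> points = new ArrayList<>();
    Octree[] children;                     // null until subdivided

    Octree(float cx, float cy, float cz, float half) {
        this.cx = cx; this.cy = cy; this.cz = cz; this.half = half;
    }

    void insert(float[] p) {
        if (children == null) {
            points.add(p);
            if (points.size() > CAPACITY && half > MIN_HALF) subdivide();
            return;
        }
        childFor(p).insert(p);
    }

    private void subdivide() {
        children = new Octree[8];
        float h = half / 2;
        for (int i = 0; i < 8; i++) {
            children[i] = new Octree(
                cx + ((i & 1) == 0 ? -h : h),
                cy + ((i & 2) == 0 ? -h : h),
                cz + ((i & 4) == 0 ? -h : h), h);
        }
        for (float[] p : points) childFor(p).insert(p);  // redistribute
        points.clear();
    }

    private Octree childFor(float[] p) {
        int i = (p[0] >= cx ? 1 : 0)
              | (p[1] >= cy ? 2 : 0)
              | (p[2] >= cz ? 4 : 0);
        return children[i];
    }
}
```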
I guess it's a bit late by now to be truly useful to you, but for reference I'd say this is a perfect scenario for volumetric modeling (as you guessed yourself). As long as you know the bounding box of your point cloud, you can map those coordinates into a voxel space and increase the density (value) of each voxel for each data point. Once the volume is fully defined, you can use the marching cubes algorithm to produce a 3D surface mesh for a given threshold (iso) value. The resulting surface doesn't need to be continuous, but it will wrap all voxels with values above the iso value. The 2D equivalent is a heatmap. You can refine the surface quality by adjusting the iso threshold (higher means tighter) and the voxel resolution.
Since you're using Java, you might like to take a look at my toxiclibs volumeutils library, which also comes with several examples (for Processing) showing the general approach.
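The voxelization step described above is straightforward; here is a rough sketch of mapping a point cloud into a density grid (resolution and layout are illustrative), whose output you would then feed to a marching-cubes implementation such as the one in toxiclibs:

```java
/** Accumulates a point cloud into the voxel density grid that a
 *  marching-cubes pass needs as input. */
public class VoxelGrid {
    final int res;           // voxels per axis
    final float[] density;   // res^3 cells, flattened x + res*(y + res*z)
    final float minX, minY, minZ, scale;

    /** min* and maxSide describe the point cloud's bounding box. */
    public VoxelGrid(int res, float minX, float minY, float minZ,
                     float maxSide) {
        this.res = res;
        this.density = new float[res * res * res];
        this.minX = minX; this.minY = minY; this.minZ = minZ;
        this.scale = res / maxSide;   // world units -> voxel units
    }

    public void addPoint(float x, float y, float z) {
        int vx = clamp((int) ((x - minX) * scale));
        int vy = clamp((int) ((y - minY) * scale));
        int vz = clamp((int) ((z - minZ) * scale));
        density[vx + res * (vy + res * vz)] += 1f;  // bump local density
    }

    private int clamp(int v) { return Math.max(0, Math.min(res - 1, v)); }
}
```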
"Imagine a cone with a circle on top (said circle bigger than the cone's base). While the top and side meshes would get the correct depth info out of the shape, the bottom mesh would connect the base to the circle through vertical lines, making me effectively lose the conical shape."
Even an example as simple as this would be impossible to reconstruct manually, let alone algorithmically. Your data representing a cylinder with a cone-shaped hole is just as likely as the vertices representing a cone with a disk attached to the top.
"I do not know what kind of shape to expect (could be a simple cube..."
Again, without further information on how the data was generated, 8 vertices arranged in the form of a cube might as well represent two crossed squares. If you knew that the data was generated by, say, a rotating 3D scanner of some sort, then that would at least be a start.
