I have a Rubik's cube displayed using OpenGL in a Java Eclipse application, and I want to rotate this cube in response to mouse events.
I started with a naive solution as described here: OpenGL - moving camera with mouse. With that solution, in addition to the problem described there (my problem may be the same?), once I rotate 90 degrees around the X axis to get the 'upside front', I can no longer rotate around the initial Y axis to bring the new front to the right. Because of the first 90-degree rotation, I now have to rotate around Z to get the expected behavior.
Maybe using the gluLookAt utility method would be easier than using modeling transformations in this case?
Would an arcball make you happy? (It should.)
(I don't usually link to NeHe, but this one is independent of OpenGL.)
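In case that page goes away: the core of the arcball fits in a few lines. A minimal, toolkit-agnostic Java sketch (the method names and float[] vectors are mine; adapt them to your event handling and math library):

// Map a window coordinate onto a virtual unit sphere centered on the viewport.
static float[] mapToSphere(float mouseX, float mouseY, int width, int height) {
    float x = 2f * mouseX / width - 1f;    // scale to [-1, 1]
    float y = 1f - 2f * mouseY / height;   // flip: window y grows downward
    float d2 = x * x + y * y;
    if (d2 <= 1f)
        return new float[] { x, y, (float) Math.sqrt(1f - d2) };  // point on the sphere
    float len = (float) Math.sqrt(d2);     // outside the sphere: clamp to the rim
    return new float[] { x / len, y / len, 0f };
}

// A drag from p0 to p1 rotates by angle = acos(p0 . p1) around axis = p0 x p1;
// the returned {angleDegrees, axisX, axisY, axisZ} can be fed to glRotatef.
static float[] dragRotation(float[] p0, float[] p1) {
    float ax = p0[1] * p1[2] - p0[2] * p1[1];
    float ay = p0[2] * p1[0] - p0[0] * p1[2];
    float az = p0[0] * p1[1] - p0[1] * p1[0];
    float dot = p0[0] * p1[0] + p0[1] * p1[1] + p0[2] * p1[2];
    dot = Math.max(-1f, Math.min(1f, dot));               // clamp for acos
    float angleDeg = (float) Math.toDegrees(Math.acos(dot));
    return new float[] { angleDeg, ax, ay, az };
}

Because the rotation is always computed between the current and previous sphere points, the cube turns the same way regardless of how it is already oriented, which avoids the axis-swap problem you describe.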
I am creating a virtual reality app for Android and I would like to generate a sphere in OpenGL for my purposes. As a first step, I found this thread (Draw Sphere - order of vertices), whose first answer contains a good tutorial on how to render the sphere offline. That same answer includes sample code for a sphere (http://pastebin.com/4esQdVPP) that I used for my app, and I then successfully mapped a 2D texture onto the sphere.
However, that sample sphere has a poor resolution and I would like to generate a better one, so I followed some Blender tutorials to generate the sphere, export the .obj file, and simply take the point coordinates and indices and parse them into Java code.
The problem is that when the texture is applied it looks broken at the poles of the sphere, while it looks fine on the rest of the sphere (please have a look at the following pictures).
I don't know what I'm doing wrong, since the algorithm for mapping the texture should be the same, so I guess the problem may be in the indices of the generated points. This is the algorithm I'm using for mapping the texture: https://en.wikipedia.org/wiki/UV_mapping#Finding_UV_on_a_sphere
This is the .obj file auto-generated with Blender: http://pastebin.com/uP3ndM2d
And from there, we extract the indices and the coordinates:
These are the point indices: http://pastebin.com/rt1QjcaX
And these are the point coordinates: http://pastebin.com/h1hGwNfx
Could you give me some advice? Is there anything I am doing wrong?
First of all, determining the texture coordinates at (or even near) the poles needs to be handled with care. Using the suggested UV algorithm for the s-coordinate at the pole will not give you what you want with the tessellation you chose (e.g., s = 0.5 + arctan2(1,0)/(2*pi) will be used for all points at the north pole). In the image below, the M+1 vertices on the top row all represent the same vertex at the north pole -- each of these will have the same t-value but must have different s-values for the texture coordinates:
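To make that concrete, here is a hedged sketch of generating such a tessellation in Java (the names M, N and the loop structure are mine, not from the pastebin sample). Each row gets M+1 columns, so the seam and pole vertices are duplicated and each copy can carry its own s-value:

// UV sphere with M slices and N stacks; unit radius, centered at the origin.
for (int j = 0; j <= N; j++) {              // stacks: j = 0 is the north pole
    float t = (float) j / N;                // texture t-coordinate
    double phi = Math.PI * t;               // polar angle in [0, pi]
    for (int i = 0; i <= M; i++) {          // M+1 columns: column M duplicates column 0
        float s = (float) i / M;            // texture s-coordinate, distinct per column
        double theta = 2.0 * Math.PI * s;   // azimuth in [0, 2*pi]
        float x = (float) (Math.sin(phi) * Math.cos(theta));
        float y = (float) Math.cos(phi);
        float z = (float) (Math.sin(phi) * Math.sin(theta));
        // append position (x, y, z) and texture coordinate (s, t) to the vertex buffers
    }
}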
Second, using this type of tessellation will yield aliasing problems near the poles, since the small distances between neighboring fragments generate large differences in s-values. You should mitigate the aliasing as much as possible by using a mipmap filter. The following images show a Mercator projection of the earth and vertical red stripes textured onto the sphere (the stripes are a good test case):
A better sphere tessellation is to subdivide an icosahedron, which yields nearly equilateral triangles. Here is an example of a normal-mapped sphere that avoids these aliasing problems:
OK, the problem is solved now. The textures were not working properly because the generated point indices start at 1 instead of 0. By subtracting 1 from all indices the problem is solved... :)
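For anyone who hits the same thing: .obj face indices are 1-based, while OpenGL index buffers are 0-based, so the whole fix is a one-line shift (the array name is a placeholder):

// Shift the 1-based OBJ indices to the 0-based indices OpenGL expects.
for (int i = 0; i < indices.length; i++) {
    indices[i] -= 1;
}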
I am making extensive use of Java WorldWind and am having difficulty implementing some additional features with 3D rendering. At first, I had huge difficulties with zoom and BasicOrbitView: zoom actually changes the point of view's elevation, which changes the horizon and hence is not a true zoom. I solved that using the FOV parameter; decreasing it performs a real zoom, as the visualized object is only modified by a homothetic (uniform scaling) transformation. I explain this to make clear the level of detail I hide behind words such as "zoom" or "translate".
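In code, the FOV zoom looks roughly like this (getFieldOfView/setFieldOfView are on WorldWind's View interface; the halving factor is only an example):

// Zoom in by narrowing the field of view instead of moving the eye.
// (gov.nasa.worldwind.View, gov.nasa.worldwind.geom.Angle)
View view = worldWindow.getView();
Angle fov = view.getFieldOfView();
view.setFieldOfView(Angle.fromDegrees(fov.degrees / 2.0)); // half the angle = 2x zoom
worldWindow.redraw();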
Now I have a second issue with "translate": I want to translate the whole earth along the screen's X and Y axes without changing the horizon or anything else. The objective is to combine the FOV change with a translation to zoom in on some of the earth's edges.
Zooming in on an edge of the earth is possible using roll and pitch, on the condition that the edge is located at the top of the screen, which makes it impossible to keep the north pole up while zooming at an earth edge on the equator, for example (if this is not clear I can provide illustrations). So this attempt was unsuccessful. I also worked a lot on the BasicOrbitView.setOrientation method, without success.
I also tried to modify the OpenGL view matrices behind the View, multiplying them by a 4x4 matrix describing a translation, again unsuccessfully (execution crashes inside WorldWind subroutines).
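For clarity, this is the kind of matrix I tried to apply: an eye-space translation premultiplied onto the modelview matrix, which should shift the scene along the screen axes without changing its orientation. A plain-Java layout sketch (column-major double[16]; not WorldWind API):

// modelview' = T * modelview applies the offset in eye (screen-aligned) space.
static double[] screenTranslation(double dx, double dy) {
    return new double[] {
        1, 0, 0, 0,
        0, 1, 0, 0,
        0, 0, 1, 0,
        dx, dy, 0, 1   // translation lives in the last column (column-major)
    };
}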
Do you have any idea how to implement this translation in WorldWind?
I have this camera that is set up with vecmath.lookatMatrix(eye, center, up).
Movement works fine: forwards, backwards, right, and left all behave as expected.
What does not seem to work fine is the rotation.
I am not really good at math, so I assume I may be missing some logic here, but I thought the rotation would work like this:
On rotation around the Y-axis I add/subtract a value to/from the X value of the center vector.
On rotation around the X-axis I add/subtract a value to/from the Y value of the center vector.
For example, here is the rotation to the right: center = center.add(vecmath.vector(turnSpeed, 0, 0))
This actually works, but with some strange behaviour: the higher the x/y value of the center vector gets, the slower the rotation. I guess that through the addition/subtraction the center vector moves too far away, or something similar; I would really like to know what is actually happening.
Actually, while writing this I realized it can't work like this, because once I have moved around and rotated a bit and am, for example, in "mid air", the rotation would be wrong...
I really hope someone can help me here.
Rotating a vector for OpenGL should be done using matrices. Linear movement can be executed by simply adding vectors together, but for rotation it is not enough to just change one of the coordinates... if it were, how would you get from the direction (X,0,0) to (0,X,0)?
Here is another tutorial, which is C++, but there are Java samples too.
There is a bit of math behind all this - you seem to be familiar with vectors, and probably have a 'feel' for them, which helps.
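To make that concrete for your eye/center/up setup: instead of offsetting center, rotate the view direction (center - eye) and rebuild center from it, so the direction's length, and hence the apparent rotation speed, never changes. A minimal plain-Java sketch for a yaw around the world Y axis (the float[] vectors are my own representation; adapt to your vecmath types):

// Yaw the camera: rotate dir = center - eye in the XZ plane, then
// set center = eye + rotated dir.
static float[] yawCenter(float[] eye, float[] center, float angleRad) {
    float dx = center[0] - eye[0];
    float dy = center[1] - eye[1];
    float dz = center[2] - eye[2];
    float c = (float) Math.cos(angleRad), s = (float) Math.sin(angleRad);
    float rx = c * dx + s * dz;    // standard 2D rotation applied to (x, z)
    float rz = -s * dx + c * dz;
    return new float[] { eye[0] + rx, eye[1] + dy, eye[2] + rz };
}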
EDIT - if you are to use matrices in OpenGL properly, you'll need to familiarize yourself with the MVP concept: you have something to display (the model), placed somewhere in your world (the model transform), at which you are looking through a camera (the view), projected onto the screen (the projection).
A working solution for a free-flight camera with eye, center, and up vectors was posted here:
Free Flight Camera - strange rotation around X-axis
So I have some irregularly spaced data that I want to interpolate onto a regular grid (I want to do exactly this, but in Java). Here's a picture:
Basically I have the x and y coordinates of each point and a z value associated with each point and I want to interpolate between them and fill in the center of my image.
What is the best way to do this in Java? Is there a built-in 2D interpolation library I can use, or should I try a "roll my own" approach?
This post and this one also seem to be trying to do roughly what I am doing, but their answers don't quite apply.
Someone else with the same problem but no solution.
Note: I am using JavaFX 2, so if I could somehow use its Interpolator class, that would be great.
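(By "roll my own" I mean something like inverse-distance weighting onto the grid; a rough sketch of what I have in mind, with placeholder names:)

// For each grid cell, average the scattered samples weighted by 1/distance^power.
// xs/ys are sample positions already expressed in grid coordinates; zs are values.
static double[][] idwGrid(double[] xs, double[] ys, double[] zs,
                          int gridW, int gridH, double power) {
    double[][] grid = new double[gridH][gridW];
    for (int gy = 0; gy < gridH; gy++) {
        for (int gx = 0; gx < gridW; gx++) {
            double num = 0, den = 0;
            for (int i = 0; i < xs.length; i++) {
                double dx = gx - xs[i], dy = gy - ys[i];
                double d2 = dx * dx + dy * dy;
                if (d2 == 0) { num = zs[i]; den = 1; break; } // cell hits a sample exactly
                double w = 1.0 / Math.pow(d2, power / 2.0);   // weight = 1 / distance^power
                num += w * zs[i];
                den += w;
            }
            grid[gy][gx] = num / den;
        }
    }
    return grid;
}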
EDIT:
If anyone stumbles upon this and wants to know what I ended up using, it was a Delaunay Triangulation implementation from BGU:
Main Site
Code API
If linear interpolation is sufficient, I suggest using a 3D mesh with Gouraud shading for drawing:
Convert the 2D point cloud to a mesh (you can google for existing algorithms)
Map the z value of each point to the vertex color
Use Gouraud shading to get linear interpolation between the vertex colors
Create a camera above the mesh and use an orthographic projection (to avoid perspective distortion)
You say that you can use JavaFX. JavaFX supports 3D scenes and you can build your own meshes. But looking into the JavaDoc of TriangleMesh, I can't find any method to set a vertex color; I only found methods to set the (x,y,z) point coordinates and the (u,v) texture coordinates.
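A possible workaround for the missing vertex colors (my own suggestion): encode each vertex's z value as a v texture coordinate into a narrow color-ramp image, and the texture interpolation then approximates the Gouraud-style vertex-color blending. A minimal sketch for one triangle; "gradient.png" stands in for a hypothetical 1x256 vertical color ramp:

import javafx.scene.image.Image;
import javafx.scene.paint.PhongMaterial;
import javafx.scene.shape.MeshView;
import javafx.scene.shape.TriangleMesh;

static MeshView colorCodedTriangle() {
    TriangleMesh mesh = new TriangleMesh();
    mesh.getPoints().addAll(           // three (x, y, z) vertices
            0f, 0f, 0.2f,
            1f, 0f, 0.9f,
            0f, 1f, 0.5f);
    mesh.getTexCoords().addAll(        // per vertex: u fixed, v = z scaled to [0, 1]
            0.5f, 0.2f,
            0.5f, 0.9f,
            0.5f, 0.5f);
    mesh.getFaces().addAll(0, 0, 1, 1, 2, 2); // pairs of (pointIndex, texCoordIndex)

    PhongMaterial material = new PhongMaterial();
    material.setDiffuseMap(new Image("gradient.png")); // hypothetical color ramp
    MeshView view = new MeshView(mesh);
    view.setMaterial(material);
    return view;
}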
I am trying to code a Cessna flying around the world, using the accelerometer, with the min3D framework for Android, but the rotation is a bit weird.
I'm using this to apply the accelerometer rotation to the object:
cessna.rotation().x = rotX;
cessna.rotation().z = rotZ;
This works fine. I haven't yet figured out how to move in the direction of the rotation (I think I have to use trigonometry).
I rotated the object with
cessna.rotation().y++;
just to test what would happen. At 180° the rotation around the X axis is mirrored, so the nose of the plane turns down instead of up.
I think I am rotating the objects around the world axes and not around the object's local axes. How can I fix this? I couldn't find any documentation on the min3D framework on the internet :/ .
Thank you if you can help me.
(sorry for the bad English)
If you want to rotate around the object's local axis, do this
(in pseudocode -- you'll need to find the corresponding functions in min3D):
Translate(object.pos.x, object.pos.y, object.pos.z); // move the origin to the object's position
object.rotation().x += radians(45); // now this rotates about the object's local axis (45° or whatever)
If that doesn't work, try wrapping the two lines above in
pushMatrix()
...
popMatrix()
or similar functions in min3D, to save and then restore the current rotation/translation matrix.
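In raw OpenGL ES 1.x calls (the pipeline min3D wraps), the same pattern reads as follows; drawObject is a placeholder, and the exact min3D equivalents may differ:

// gl is the javax.microedition.khronos.opengles.GL10 instance.
gl.glPushMatrix();                     // save the current model-view matrix
gl.glTranslatef(objX, objY, objZ);     // move the origin to the object's position
gl.glRotatef(45f, 0f, 1f, 0f);         // now a rotation about the object's own Y axis
drawObject(gl);                        // placeholder for the actual draw call
gl.glPopMatrix();                      // restore the matrix for other objects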
Have you considered Processing, which also comes with an Android 'output'?