So I'm working on a game in Java 3D and I'm implementing health bars that hover above units.
I started by drawing a quad at a 3D point above the unit's location and applying a Billboard behavior to make it always face the camera.
But the problem I'm stuck with is that the health bars are sometimes obscured by other scenery.
So I'm considering the following options:
Overriding the Z / depth buffer value for the health bar pixels to make the renderer think they're closer to the camera than anything it renders afterwards.
I tried renderingAttributes.setDepthTestFunction(RenderingAttributes.ALWAYS). While it makes the renderer draw the health bar on top of anything it drew in the same area earlier, it doesn't help when the other scenery is drawn on top of the health bar later.
Is there a better way for doing this in Java 3D?
Projecting the 3D locations of the health bars onto a 2D plane in front of the camera. Sounds doable, but before I set off reinventing all the math required for this, maybe someone can point out an existing solution.
Switching from Java 3D to something like LWJGL or jMonkeyEngine (not just for this issue, but general complaints about Java 3D being dead, etc). Although I'm not even sure whether they're more flexible for this particular problem. And from what I've read, one of the worst mistakes in game dev is switching the engine in mid-development.
Use the painter's algorithm: draw the health bars after you've drawn the rest of the scene (and with Z testing turned off, of course).
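A minimal sketch of that last approach, assuming the scenery lives in a BranchGroup called scenery and healthBarQuad is the billboarded Shape3D (both placeholder names). The OrderedGroup makes Java 3D render the bars after everything else, and the RenderingAttributes switch the depth test and depth writes off for them:

import javax.media.j3d.Appearance;
import javax.media.j3d.BranchGroup;
import javax.media.j3d.Group;
import javax.media.j3d.OrderedGroup;
import javax.media.j3d.RenderingAttributes;
import javax.media.j3d.Shape3D;

public final class OverlayBuilder {
    // Children of an OrderedGroup are rendered in the order they were added,
    // so the health bars (child 1) are always drawn after the scenery (child 0).
    public static Group buildSceneWithOverlay(BranchGroup scenery, Shape3D healthBarQuad) {
        RenderingAttributes ra = new RenderingAttributes();
        ra.setDepthBufferEnable(false);       // don't test the bars against the Z buffer
        ra.setDepthBufferWriteEnable(false);  // and don't write their depth either

        Appearance barAppearance = new Appearance();
        barAppearance.setRenderingAttributes(ra);
        healthBarQuad.setAppearance(barAppearance);

        BranchGroup bars = new BranchGroup();
        bars.addChild(healthBarQuad);

        OrderedGroup ordered = new OrderedGroup();
        ordered.addChild(scenery);  // drawn first
        ordered.addChild(bars);     // drawn last, so it ends up on top
        return ordered;
    }
}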
Related
I am making a 2D game in JavaFX and when detecting collisions, I am getting rather inaccurate results because the player sprite is set as the fill of a rectangle and therefore does not have the intended borders. Is there a way I could make my own shape so that I could get collisions as accurate as possible?
Another idea I had is checking if the pixel that collided was transparent and then not ending the game if it was. Does anyone know of a way I can get the coordinates of the pixel that collides so that from there I can use PixelReader to check?
If anyone knows a better way, please let me know!
Thanks,
Ethan
There are different ways to do this. Here is one way I have used with good success: I would make hit boxes that were themselves rectangles. Then, during collision detection, I would iterate through all the hit boxes to see if they collided with the flying projectile's hit boxes.
What this allows you to do is fill in complex shapes with smaller rectangles. For example, a plane would have one long horizontal rectangle and one smaller rectangle crossing it at the middle.
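A rough JavaFX sketch of that idea (the box sizes and the HitBoxes/collides names are made up for illustration):

import java.util.List;
import javafx.geometry.BoundingBox;
import javafx.geometry.Bounds;

public class HitBoxes {
    // Boxes are stored relative to the sprite's top-left corner:
    // one long horizontal strip plus a narrower strip crossing it at the middle.
    private final List<BoundingBox> boxes = List.of(
            new BoundingBox(0, 24, 64, 12),
            new BoundingBox(26, 0, 12, 60));

    // Returns true if any hit box, offset to the sprite's current position,
    // overlaps the projectile's bounds.
    public boolean collides(double spriteX, double spriteY, Bounds projectile) {
        for (BoundingBox b : boxes) {
            Bounds world = new BoundingBox(spriteX + b.getMinX(), spriteY + b.getMinY(),
                                           b.getWidth(), b.getHeight());
            if (world.intersects(projectile)) {
                return true;
            }
        }
        return false;
    }
}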
Currently I am using libGDX. In libGDX I use their Polygon object, as described here: https://stackoverflow.com/a/28540488/1490322. I have not seen similar functionality in JavaFX, but it would not be hard to copy what libGDX is doing into JavaFX code... their code is open source.
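For the pixel-transparency idea from the question, a rough sketch with JavaFX's PixelReader could look like this (spriteImage and the collision coordinates are placeholders, and the 0.05 opacity threshold is arbitrary):

import javafx.scene.image.Image;
import javafx.scene.image.PixelReader;
import javafx.scene.paint.Color;

// Once a bounding-box collision is detected, convert the collision point into the
// sprite image's local coordinates and ignore the hit if that pixel is transparent.
boolean pixelIsSolid(Image spriteImage, double spriteX, double spriteY,
                     double collisionX, double collisionY) {
    int px = (int) (collisionX - spriteX);
    int py = (int) (collisionY - spriteY);
    if (px < 0 || py < 0 || px >= spriteImage.getWidth() || py >= spriteImage.getHeight()) {
        return false;
    }
    PixelReader reader = spriteImage.getPixelReader();
    Color c = reader.getColor(px, py);
    return c.getOpacity() > 0.05;  // treat (almost) fully transparent pixels as empty
}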
I am writing a program that is supposed to display 3D point clouds. For this purpose, I am using the jMonkeyEngine. Unfortunately, I do not like the default camera behavior of jMonkey. Especially the mouse dragging and mouse wheel do not really do what I want. What I want is them to behave like in the pcd viewer of the PointCloudLibrary.
Mouse wheel: It should be faster, and the effect of the turning direction should be reversed.
Mouse dragging: In jMonkey it seems like mouse dragging changes the camera viewing direction in the world. I am not sure what exactly happens in the pcd viewer, but I believe the camera is moved through the world while keeping the centroid of the displayed point cloud fixed in view.
How can I change the behavior of the camera to fulfill my wishes? :)
1.
In the simpleInit() method (where 100 is an arbitrary number):
getFlyByCamera().setZoomSpeed(100);
getFlyByCamera().setDragToRotate(true);
Note that zooming doesn't actually change the position of the camera, just the FOV.
2.
The normal behavior of the camera is to rotate around its own axis. By offsetting the location of the camera as well, the effect you want can be achieved. In simpleUpdate():
cam.setLocation(cam.getDirection().negate().multLocal(cam.getLocation().length()));
I consider the answer to the second question a bit of a quick hack. But it does the trick.
I am making extensive use of Java WorldWind and am having difficulties implementing some additional 3D rendering features. At first, I had huge difficulties with zoom and BasicOrbitView, as zoom actually changes the point-of-view elevation, which changes the horizon and hence is not a zoom. I solved that using the FOV parameter: decreasing this parameter performs a real zoom, as the visualized object is only modified by a homothetic (uniform scaling) transformation. I explain this to make clear what level of detail I hide behind words such as "zoom" or "translate".
Now I have a second issue with "translate": I want to translate the whole earth along the screen X and Y axes without changing the horizon or anything else. The objective is to combine the FOV change with this translation in order to zoom in on some of the earth's edges.
Zooming in on an earth edge is possible using roll and pitch, on the condition that the edge is located at the top of the screen, which makes it impossible, for example, to zoom at the earth's edge on the equator while keeping the north pole up (if this is not clear, I can provide illustrations). So this attempt was unsuccessful. I also worked a lot with the BasicOrbitView.setOrientation method, without success.
I also tried to modify the OpenGL view matrices behind the View, multiplying them by a 4x4 translation matrix, but this was unsuccessful too (execution crashes inside WorldWind subroutines).
Do you have an idea of how to implement this translation in WorldWind?
I have been drawing 3D graphics using the graphics.fillPolygon() method in Java. It has worked well for me so far. I can rotate the graphics by dragging my mouse across the screen, and I can zoom in and out of my graphics.
My one issue, though, is finding a way to draw the polygons in the correct order so that the background polygons are not drawn on top of the foreground polygons. I know that the answer to my problem is common knowledge to 3D graphics programmers; some people have told me to use OpenGL, but that is too much for me to learn right now; I just want to create basic 3D graphics. I am looking for a mathematical procedure to organize my polygons in the order they should be drawn (from back to front).
I have thought about just taking the average distance to all points of each polygon, but that is an unreliable method. I have been using trigonometry for all of my methods, but I am starting to learn some linear algebra concepts; the use of vectors may be helpful in finding which polygons lie in front.
#Raisintoe, In computer science, binary space partitioning (BSP) is a method for recursively subdividing a space into convex sets by hyperplanes. This subdivision gives rise to a representation of objects within the space by means of a tree data structure known as a BSP tree.
Binary space partitioning was developed in the context of 3D computer graphics, where the structure of a BSP tree allows spatial information about the objects in a scene that is useful in rendering, such as their ordering from front to back with respect to a viewer at a given location, to be accessed rapidly. Other applications include performing geometrical operations with shapes (constructive solid geometry) in CAD, collision detection in robotics and 3D video games, ray tracing, and other computer applications that involve handling of complex spatial scenes.
See the Wikipedia article here
This approach has been used by video game megahits such as Quake. You can find more about it in this excellent article by Michael Abrash, where he explains how a BSP tree was used in Quake to determine its visible surfaces.
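To make the back-to-front ordering concrete, here is a tiny painter's-algorithm traversal sketch; the Vec3, Plane, Poly and BspNode types are made up for illustration, not taken from any library:

import java.util.List;
import java.util.function.Consumer;

record Vec3(double x, double y, double z) {}

record Plane(Vec3 n, double d) {
    // Signed distance of a point from the plane; >= 0 means "in front".
    double side(Vec3 p) { return n.x() * p.x() + n.y() * p.y() + n.z() * p.z() + d; }
}

record Poly(List<Vec3> verts) {}

record BspNode(Plane plane, List<Poly> polys, BspNode front, BspNode back) {

    // Painter's-algorithm traversal: always descend into the subtree on the far
    // side of the splitting plane first, so nearer polygons are drawn last.
    static void drawBackToFront(BspNode node, Vec3 eye, Consumer<Poly> draw) {
        if (node == null) return;
        boolean eyeInFront = node.plane().side(eye) >= 0;
        BspNode far  = eyeInFront ? node.back()  : node.front();
        BspNode near = eyeInFront ? node.front() : node.back();
        drawBackToFront(far, eye, draw);
        node.polys().forEach(draw);
        drawBackToFront(near, eye, draw);
    }
}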
I hope this helps.
Yes, I do agree: OpenGL is really complex, and especially modern OpenGL, which always forces you to use shaders, can get more in the way of getting things done than actually helping you. But OpenGL solves this problem for you. It draws each pixel of the polygon along with its depth value. When you draw the second polygon, a pixel is only updated when its depth value is closer to the camera than the old one. You can do the same, and you will have a pixel-perfect result.
Side note: modern game engines even prefer rendering from front to back, because then the expensive pixel calculation in the fragment shader can be skipped for pixels that would be overdrawn anyway.
Side note 2: you actually have to enable the depth test and explicitly specify that you want the closest pixels to win.
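If you want to reproduce that rule in plain Java (since you are drawing with fillPolygon() rather than OpenGL), a toy per-pixel depth buffer looks roughly like this; the screen size and the ARGB colour handling are placeholders:

class SoftwareDepthBuffer {
    final int width = 800, height = 600;               // placeholder screen size
    final float[] zBuffer = new float[width * height];
    final int[] frameBuffer = new int[width * height]; // one ARGB colour per pixel

    // Call once per frame before drawing: every pixel starts "infinitely far away".
    void clear() {
        java.util.Arrays.fill(zBuffer, Float.POSITIVE_INFINITY);
        java.util.Arrays.fill(frameBuffer, 0xFF000000); // opaque black
    }

    // Call for every pixel a polygon covers, in any order: the closest fragment wins.
    void plot(int x, int y, float depth, int argbColor) {
        int i = y * width + x;
        if (depth < zBuffer[i]) {
            zBuffer[i] = depth;
            frameBuffer[i] = argbColor;
        }
    }
}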
I am writing a 3D modeler similar to Blender for a game I am making. Since programs like Blender export very complicated file types with a lot of unneeded data, I wanted to write a simple editor for my game. What I cannot figure out is how to map a point from the 2D projection on the window to where I have clicked in the 3D world, with the world being rotated.
If anyone knows any good tutorials on how to do this, or the method itself, any help would be appreciated. I know I could use ray tracing, but I think that would be too complicated.
The two main methods of mouse picking are:
Intersection Testing
Color Picking
Intersection tests are the more popular of the two, and at the simplest level involves 'shooting' out a ray and checking if it has intersected any points. The ray can also be replaced by a polytope if one wants to achieve more sensitive picking (useful for choosing points on vertices).
Color picking involves disabling AA, blending, shadows, etc. and re-drawing the scene using solid colors for the objects. glReadPixels is then used to find the color at the point of the mouse and this color is used to determine if it clicked on an applicable object.
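The read-back step for colour picking might look roughly like this, assuming the legacy LWJGL GL11 bindings and that the scene has just been re-drawn with one flat colour per pickable object (mouseX, mouseY and viewportHeight are placeholders for your own input and window state):

import java.nio.ByteBuffer;
import org.lwjgl.BufferUtils;
import org.lwjgl.opengl.GL11;

// Reads the colour of the single pixel under the cursor and decodes the object id
// that was encoded into the red, green and blue channels when the object was drawn.
int pickObjectId(int mouseX, int mouseY, int viewportHeight) {
    ByteBuffer pixel = BufferUtils.createByteBuffer(4);
    // OpenGL's origin is the bottom-left corner, so flip the mouse Y coordinate.
    GL11.glReadPixels(mouseX, viewportHeight - mouseY, 1, 1,
                      GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, pixel);
    return (pixel.get(0) & 0xFF) << 16
         | (pixel.get(1) & 0xFF) << 8
         | (pixel.get(2) & 0xFF);
}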
Ray Picking:
Mouse Ray Picking Explained
Picking, Alpha Blending, Alpha Testing, Sorting
Color Picking:
OpenGL Selection Using Unique Color IDs
Picking Tutorial
The term you are looking for is mouse picking.
The method you need is gluUnProject. You'll need window x,y and the depth.
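A rough sketch of that, assuming the legacy LWJGL 2 fixed-function pipeline and its GLU helper class: read the depth under the cursor with glReadPixels, then unproject the window coordinates back into world space.

import java.nio.FloatBuffer;
import java.nio.IntBuffer;
import org.lwjgl.BufferUtils;
import org.lwjgl.opengl.GL11;
import org.lwjgl.util.glu.GLU;

// Converts a mouse click into the 3D world position of whatever was clicked on.
float[] unprojectMouse(int mouseX, int mouseY, int windowHeight) {
    FloatBuffer modelview  = BufferUtils.createFloatBuffer(16);
    FloatBuffer projection = BufferUtils.createFloatBuffer(16);
    IntBuffer   viewport   = BufferUtils.createIntBuffer(16);
    GL11.glGetFloat(GL11.GL_MODELVIEW_MATRIX, modelview);
    GL11.glGetFloat(GL11.GL_PROJECTION_MATRIX, projection);
    GL11.glGetInteger(GL11.GL_VIEWPORT, viewport);

    float winX = mouseX;
    float winY = windowHeight - mouseY;   // OpenGL's Y axis points up

    // Depth of the fragment under the cursor: 0 = near plane, 1 = far plane.
    FloatBuffer depth = BufferUtils.createFloatBuffer(1);
    GL11.glReadPixels((int) winX, (int) winY, 1, 1,
                      GL11.GL_DEPTH_COMPONENT, GL11.GL_FLOAT, depth);

    FloatBuffer worldPos = BufferUtils.createFloatBuffer(3);
    GLU.gluUnProject(winX, winY, depth.get(0), modelview, projection, viewport, worldPos);
    return new float[] { worldPos.get(0), worldPos.get(1), worldPos.get(2) };
}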
I think, in your case, it might be a lot easier to write a simple exporter for Blender.