Is there any way to remove artifacts from CameraGroupStrategy when multiple texture regions with transparency overlap? Is there another group strategy I can use with a perspective camera, or can someone suggest a shader or a whole new group strategy? Thank you.
Since they are in the exact same space in 3D, reducing your camera's depth range won't help. You will need to turn off depth testing. The downside is that your decals can no longer be automatically obscured by 3D models in your scene, and you must sort them properly even if they are opaque (translucent decals have to be sorted anyway). Make your own class, copy all of CameraGroupStrategy's code into it, and remove the line
Gdx.gl.glEnable(GL20.GL_DEPTH_TEST);
from the beforeGroups method.
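For reference, the modified method would look something like this (copied from the CameraGroupStrategy source of that era, minus the depth-test enable; verify against your libGDX version before using):

@Override
public void beforeGroups () {
    // Gdx.gl.glEnable(GL20.GL_DEPTH_TEST); // removed: depth testing stays off
    shader.begin();
    shader.setUniformMatrix("u_projectionViewMatrix", camera.combined);
    shader.setUniformi("u_texture", 0);
}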
The other tricky part will be sorting them. Decals need to be sorted from far to near when depth testing is off (which it already is for blended decals in CameraGroupStrategy), but decals that are in the same place may not sort consistently as the camera moves, which will cause flickering as they are drawn in varying orders.
You might want to completely remove the Sorter from this class along with the call to contents.sort(cameraSorter), and then take pains to submit your decals in a consistent order, putting farther-away groups of decals first, but keeping coplanar decals in a fixed relative order.
Or, you could subclass Decal to add an extra int field, called for example planePosition. Then your sorter could detect very close decals and defer to their planePosition when it thinks they are coplanar. Something like this:
sorter = new Comparator<Decal>() {
    @Override
    public int compare (Decal o1, Decal o2) {
        float dist1 = camera.position.dst(o1.getPosition());
        float dist2 = camera.position.dst(o2.getPosition());
        float diff = dist2 - dist1;
        // If both are MyDecals and nearly equidistant from the camera,
        // assume they are coplanar and fall back to their explicit order.
        if (o1 instanceof MyDecal && o2 instanceof MyDecal && Math.abs(diff) < 0.001f)
            return (int)Math.signum(((MyDecal)o2).planePosition - ((MyDecal)o1).planePosition);
        return (int)Math.signum(diff);
    }
};
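For completeness, a minimal sketch of the subclass that comparator assumes (MyDecal and planePosition are illustrative names; the setup mirrors what Decal.newDecal does, so double-check against your libGDX version):

public class MyDecal extends Decal {
    public int planePosition; // explicit draw order for coplanar decals

    public MyDecal(TextureRegion region, int planePosition) {
        setTextureRegion(region);
        setBlending(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
        setDimensions(region.getRegionWidth(), region.getRegionHeight());
        setColor(1, 1, 1, 1);
        this.planePosition = planePosition;
    }
}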
If you are also using this for opaque decals that might be coplanar, then you also need to sort them, since depth testing is off. And note that they will overdraw each other since there is no depth testing, which may have a performance impact.
I am very new to ARCore and have been looking at the HelloAR Java Android Studio project provided in the SDK.
Everything works OK and is pretty cool; however, I want to place/drop an object when I touch the screen, even when no planes have been detected. Let me explain a little better...
As I understand ARCore, it will detect horizontal planes and ONLY on those horizontal planes I can place 3D objects to be motion tracked.
Is there any way (perhaps using PointCloud information) to be able to place an object in the scene even if there are no horizontal planes detected? Sort of like these examples?
https://experiments.withgoogle.com/ar/flight-paths
https://experiments.withgoogle.com/ar/arcore-drawing
I know they are using Unity and openFrameworks, but could that be done in Java?
Also, I have looked at
How to put an object in the air?
and
how to check ray intersection with object in ARCore
but I don't think I'm understanding the concept of Anchor (I managed to drop the object into the scene, but it either disappears immediately or it is just a regular OpenGL object with no knowledge of the real world).
What I want to understand is:
- How is it possible (if at all) to create a custom/user-defined plane, that is, a plane that is NOT automatically detected by ARCore?
- How can I create an Anchor (the sample does it in the PlaneAttachment class, I think) that is NOT linked to any plane, OR that is linked to some PointCloud point?
- How do I draw the object and place it at the Anchor previously created?
I think this is too much to ask, but looking at the API documentation has not helped me at all.
Thank you!
Edit:
Here is the code that I added to HelloArActivity.java (everything is the same as the original file except for the lines after // ***** and before ...):
@Override
public void onDrawFrame(GL10 gl) {
    ...
    MotionEvent tap = mQueuedSingleTaps.poll();

    // I added this to use the screenPointToWorldRay function from the
    // second link I posted... I am probably using this wrong
    float[] worldXY = new float[6];
    ...
    if (tap != null && frame.getTrackingState() == TrackingState.TRACKING) {
        // ***** I added this to use the screenPointToWorldRay function
        worldXY = screenPointToWorldRay(tap.getX(), tap.getY(), frame);
        ...
    }
    ...
    // Visualize anchors created by touch.
    float scaleFactor = 1.0f;
    for (PlaneAttachment planeAttachment : mTouches) {
        ...
    }

    // ***** This places the object momentarily in the scene (it disappears immediately)
    frame.getPose().compose(Pose.makeTranslation(worldXY[3], worldXY[4], worldXY[5])).toMatrix(mAnchorMatrix, 0);

    // ***** This places the object in the middle of the scene, but since it is not
    // attached to anything there is no tracking; it is always in the middle of the
    // screen (pretty much expected behaviour)
    // frame.getPose().compose(Pose.makeTranslation(0, 0, -1.0f)).toMatrix(mAnchorMatrix, 0);

    // ***** I duplicated this code, which normally gets executed ONLY when
    // touching a detected plane/surface.
    mVirtualObject.updateModelMatrix(mAnchorMatrix, scaleFactor);
    mVirtualObjectShadow.updateModelMatrix(mAnchorMatrix, scaleFactor);
    mVirtualObject.draw(viewmtx, projmtx, lightIntensity);
    mVirtualObjectShadow.draw(viewmtx, projmtx, lightIntensity);
    ...
}
You would first have to perform a hit test via Frame.hitTest and iterate over the HitResult objects until you hit a Point type Trackable. You could then retrieve a pose for that hit result via HitResult.getHitPose, or attach an anchor to that point and get the pose from that via Anchor.getPose (the best approach).
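With the current ARCore Java API (which differs somewhat from the preview SDK used in the question), that flow might look roughly like this:

for (HitResult hit : frame.hitTest(tap)) {
    if (hit.getTrackable() instanceof Point) {
        Anchor anchor = hit.createAnchor(); // tracked over time (best approach)
        Pose pose = anchor.getPose();       // or hit.getHitPose() for a one-off pose
        // ... draw your object at 'pose'
        break;
    }
}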
However, if you want to do this yourself from an arbitrary point retrieved with PointCloud.getPoints, it will take a little more work. In this approach, the question effectively reduces to "How can I derive a pose / coordinate basis from a point?".
When working from a plane it is relatively easy to derive a pose: you can use the plane normal as the up (y) vector for your model, and pick the x and z vectors to configure where you want the model to "face" about that plane (with each vector perpendicular to the others).
When trying to derive a basis from a point, you have to pick all three vectors (x, y and z) relative to the origin point you have. You can derive the up vector by transforming the vector (0, 1, 0) through the camera view matrix (assuming you want the top of the model to face the top of your screen) using Camera.getViewMatrix. Then you can pick the x and z vectors as any two mutually perpendicular vectors that orient the model in your desired direction.
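As an illustrative sketch of that derivation (plain vector math; the helper names are made up, not part of the ARCore API):

// The camera's up direction in world space: for a pure rotation the
// inverse is the transpose, so this is the second row of the rotation
// part of the column-major view matrix from Camera.getViewMatrix().
static float[] cameraUpInWorld(float[] viewMatrix) {
    return new float[] { viewMatrix[1], viewMatrix[5], viewMatrix[9] };
}

static float[] cross(float[] a, float[] b) {
    return new float[] {
        a[1] * b[2] - a[2] * b[1],
        a[2] * b[0] - a[0] * b[2],
        a[0] * b[1] - a[1] * b[0]
    };
}

static float[] normalize(float[] v) {
    float len = (float) Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
    return new float[] { v[0] / len, v[1] / len, v[2] / len };
}

// Builds the x and z axes from the chosen up (y) vector and a desired
// facing direction, e.g. a vector from the point toward the camera.
static float[][] basisFromUp(float[] up, float[] facing) {
    float[] y = normalize(up);
    float[] x = normalize(cross(y, facing)); // perpendicular to up and facing
    float[] z = cross(x, y);                 // completes the right-handed basis
    return new float[][] { x, y, z };
}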
I'm making a program that needs to detect collisions between two non-axis-aligned boxes. My program only needs an indication of whether two non-axis-aligned boxes are colliding. I would like the simplest and most efficient algorithm possible.
Here I visualized the problem.
So as you can see, squares 1, 2 and 3 would return true because they collide with the green squares.
4 would return false because it isn't colliding.
I do have all the boxes of both colors in separate array lists.
Does anybody know a library or algorithm for this problem? Thanks in advance.
Check out the Area class in the java.awt.geom package.
http://docs.oracle.com/javase/6/docs/api/java/awt/geom/Area.html
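A minimal sketch of how that could look (the rectangle sizes and angles are arbitrary):

import java.awt.geom.AffineTransform;
import java.awt.geom.Area;
import java.awt.geom.Rectangle2D;

public class AreaOverlap {
    public static void main(String[] args) {
        AffineTransform rot1 = AffineTransform.getRotateInstance(Math.toRadians(30), 50, 40);
        AffineTransform rot2 = AffineTransform.getRotateInstance(Math.toRadians(-15), 80, 60);

        // Build the two rotated boxes as Areas.
        Area a = new Area(rot1.createTransformedShape(new Rectangle2D.Double(20, 20, 60, 40)));
        Area b = new Area(rot2.createTransformedShape(new Rectangle2D.Double(50, 40, 60, 40)));

        a.intersect(b); // a now holds only the overlap region
        System.out.println("colliding: " + !a.isEmpty());
    }
}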
I don't know how "easy" your game really is, or how many shapes you'd have to check (I'm thinking efficiency here), but if you have your different-colored shapes in different lists, a kind of brute-force iteration may work for you. I have no clue whether it would be efficient enough for you. I use Box2D to tell me about collisions, but that sounds like it may be overkill for your case.
The brute-force method I'm thinking of would be to utilize libGDX's Intersector class (check out the API; it has lots of methods) and iterate through your rectangles, comparing each against the others. Something like Intersector.intersectRectangles() gives you a boolean telling you whether two rectangles overlap (i.e. collide), though note it works on axis-aligned Rectangles; for rotated boxes, Intersector.overlapConvexPolygons() is the closer fit, as sketched below.
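A minimal sketch of that rotated-box variant (the box sizes and angles are made up):

import com.badlogic.gdx.math.Intersector;
import com.badlogic.gdx.math.Polygon;

public class OverlapCheck {
    // Builds a rotated box as a Polygon rotating around its centre.
    static Polygon box(float x, float y, float w, float h, float angleDeg) {
        Polygon p = new Polygon(new float[] { 0, 0, w, 0, w, h, 0, h });
        p.setOrigin(w / 2f, h / 2f);
        p.setPosition(x, y);
        p.setRotation(angleDeg);
        return p;
    }

    static boolean collides(Polygon a, Polygon b) {
        // Separating-axis test on the transformed (rotated) vertices.
        return Intersector.overlapConvexPolygons(a, b);
    }
}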
This may be too inefficient/hacky, and a physics library may be too much, so one of the other answers provided may be the sweet spot.
A commonly-used approach involves quadtrees. There's a good write-up and tutorial here, which explains how to use quadtrees to perform collision detection in 2D space.
The general idea is that your game area keeps being partitioned into four as you add objects. Each partition is called a node, and each node maintains references to the objects that exist in its partition. Objects are placed into nodes based on where they are in 2D space. If an object does not fit cleanly into one partition, it is kept in the parent node. Using this method, you don't have to perform an expensive check against every other object in your 2D space, because you can be sure that objects in different nodes at the same level (i.e., sibling nodes) will not collide. So you only have to perform your collision detection on a small subset of objects.
Note that this just tells you which objects are occupying a certain area; it's a more efficient way to home in on objects that are likely to be colliding. After that you still have to check whether those objects are actually colliding. There is another write-up here that goes over various techniques to accomplish this.
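To make the idea concrete, here is a compact, untuned quadtree sketch (using libGDX's Rectangle as the AABB type; the capacity and depth constants are arbitrary). Objects that straddle a boundary stay in the parent node, so sibling subtrees never need to be tested against each other:

import com.badlogic.gdx.math.Rectangle;
import java.util.ArrayList;
import java.util.List;

class QuadTree {
    private static final int MAX_OBJECTS = 8, MAX_DEPTH = 6;
    private final Rectangle bounds;
    private final int depth;
    private final List<Rectangle> objects = new ArrayList<>();
    private QuadTree[] children;

    QuadTree(Rectangle bounds, int depth) { this.bounds = bounds; this.depth = depth; }

    void insert(Rectangle r) {
        if (children != null) {
            QuadTree child = childContaining(r);
            if (child != null) { child.insert(r); return; }
        }
        objects.add(r); // straddles a boundary (or node is a leaf): keep here
        if (children == null && objects.size() > MAX_OBJECTS && depth < MAX_DEPTH) split();
    }

    // Collects collision candidates for r: everything stored on the path
    // from this node down toward the region r occupies.
    void retrieve(Rectangle r, List<Rectangle> out) {
        out.addAll(objects);
        if (children != null) {
            QuadTree child = childContaining(r);
            if (child != null) child.retrieve(r, out);
            else for (QuadTree c : children) if (c.bounds.overlaps(r)) c.retrieve(r, out);
        }
    }

    private void split() {
        float w = bounds.width / 2f, h = bounds.height / 2f;
        children = new QuadTree[] {
            new QuadTree(new Rectangle(bounds.x, bounds.y, w, h), depth + 1),
            new QuadTree(new Rectangle(bounds.x + w, bounds.y, w, h), depth + 1),
            new QuadTree(new Rectangle(bounds.x, bounds.y + h, w, h), depth + 1),
            new QuadTree(new Rectangle(bounds.x + w, bounds.y + h, w, h), depth + 1)
        };
        // Push down every object that now fits cleanly in a child.
        objects.removeIf(r -> {
            QuadTree child = childContaining(r);
            if (child != null) { child.insert(r); return true; }
            return false;
        });
    }

    private QuadTree childContaining(Rectangle r) {
        for (QuadTree c : children) if (c.bounds.contains(r)) return c;
        return null;
    }
}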
There are two algorithms/data-structures you need to consider for this problem:
A spatial data-structure to store your rotated quads, to efficiently determine which pairs of quads need to be tested against each other. Other answers have already addressed this. If the number of quads is small enough then you can just test all the red quads against all the green quads, which is O(m * n).
An algorithm to perform the actual test of one rotated quad against another. One of the simplest is the Separating Axis Theorem.
The basic idea behind the SAT is that if you can find at least one line where all the points of one convex object are on one side of it, and all the points of the other are on the other side, then the two are not colliding. The potential lines that you need to test are just the edges of both of the objects.
To implement it you need a point-line test that tells you which side of an edge a point is on. This is done by calculating the normal to the edge, and then calculating the dot product of the edge normal and a vector from a point on the edge to the point you are testing. The sign of the dot product tells you which side the point is on (positive means outside the edge, for an outward-pointing normal). Whether you count zero (exactly on the line) as outside or inside depends on whether you want objects that are just touching but not penetrating to count as a collision; if you do, then the dot product must be strictly greater than zero to count as outside.
For example, if the points of each object are in clockwise order, and edgeA and edgeB are the two endpoints of an edge on one object, and pointC is a point on the other object, the test is done like this (without function calls, to show the math):
boolean isOutsideEdge(PointF edgeA, PointF edgeB, PointF pointC)
{
    // Outward normal of the edge (for clockwise point order).
    float normalX = edgeA.y - edgeB.y;
    float normalY = edgeB.x - edgeA.x;
    // Vector from the edge to the point being tested.
    float vectorX = pointC.x - edgeA.x;
    float vectorY = pointC.y - edgeA.y;
    // Positive dot product means the point is on the outside.
    return (normalX * vectorX) + (normalY * vectorY) > 0.0f;
}
Then the algorithm is:
For each edge on quad A, if all the corner points on quad B are on the outward facing side of the edge, then A and B are not colliding, stop processing and return false.
For each edge on quad B, if all the corner points on quad A are on the outward facing side of the edge, then A and B are not colliding, stop processing and return false.
If all those tests have been performed and none have returned false, then A and B are colliding, so return true.
The SAT can be generalized to arbitrary convex polygons.
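Putting those steps together, here is a sketch of the outer loop, assuming each quad is given as a PointF[4] in clockwise order and reusing isOutsideEdge from above:

static boolean quadsCollide(PointF[] quadA, PointF[] quadB) {
    // Colliding only if neither quad's edges provide a separating line.
    return !hasSeparatingEdge(quadA, quadB) && !hasSeparatingEdge(quadB, quadA);
}

static boolean hasSeparatingEdge(PointF[] edgeQuad, PointF[] pointQuad) {
    for (int i = 0; i < edgeQuad.length; i++) {
        PointF a = edgeQuad[i];
        PointF b = edgeQuad[(i + 1) % edgeQuad.length];
        boolean allOutside = true;
        for (PointF p : pointQuad) {
            if (!isOutsideEdge(a, b, p)) { allOutside = false; break; }
        }
        if (allOutside) return true; // every point is outside this edge: separated
    }
    return false;
}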
So I decided to go with Box2D in the end. This was the best solution because, with the different mask bits (collision filters), the objects didn't physically collide, but it could easily be checked whether they should be colliding.
I had to make my own ContactListener that overrides the default one. In it I could do anything I wanted whenever any two objects collided.
Thanks everyone for the help.
If I draw something with coordinates like -80 and -90, will it affect performance the same way as if it were actually drawn inside the screen?
Is it actually worth checking whether the final image will appear on screen (and not drawing it if it won't)?
If I draw something with coordinates like -80 and -90, will it affect performance the same way as if it were actually drawn inside the screen?
Somewhat, but not nearly as much as if it is inside the screen.
Is it actually worth checking whether the final image will appear on screen (and not drawing it if it won't)?
It's practically never worth implementing your own culling/clipping in a library where drawing out of bounds isn't an error or access violation. The library already has to make that check to avoid writing to memory out of bounds, and it is generally safe to bet that the library's way of checking this is smart and fast.
So if you add your own basic check on top, regular on-screen drawing now performs two such checks (yours on top of whatever goes on under the hood), and for off-screen cases your check would likely be slower than, or at least no better than, the library's.
I have to place emphasis on basic culling/clipping here. By basic, I mean checking each shape you draw on a per-shape basis. There you'll more likely just damage performance.
Acceleration Structures and Clipping/Culling in Bulk
Yet there are cases where you might have a data structure that can cull thousands of triangles at once with a single bounding-box check against the frustum, for example in 3D with structures like bounding volume hierarchies. Games use these data structures to massively reduce the number of drawing requests per frame with very few checks, and there you can gain a potentially massive performance benefit. A more basic version of this is to simply check whether the object/mesh containing the triangles has a bounding box inside the screen, replacing thousands of per-triangle checks with a single bounding-box check.
In 2D with clipping, you might be able to use something like a quadtree or fixed grid to selectively draw only what's on the screen (and also accelerate collision detection or click detection, for example). There you might actually get a performance boost if you can eliminate many superfluous drawing calls with a single check. But again, that's using a data structure that eliminates a boatload of unnecessary drawing calls with a single check. These are spatial partitioning structures whose sole point is to avoid checking things on a per-shape basis.
For a more basic 2D example: if you have, say, a 2D "widget" which, in order to draw it, involves drawing dozens of different shapes, you might be able to squeeze out a performance gain by avoiding dozens of draw requests with a single check to see whether the rectangle encompassing the entire widget is on the screen, as sketched below. Again, there you're doing one check to eliminate many drawing calls. You won't get a performance gain on a level playing field where you're doing that check on a per-shape basis, but if you can turn many checks into a single check, then you have a chance.
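A trivial illustration of that bulk check (the widget object and its methods are hypothetical):

// Hypothetical widget with a precomputed bounding rectangle that is
// the union of all its child shapes.
java.awt.Rectangle screen = new java.awt.Rectangle(0, 0, 800, 600);
java.awt.Rectangle widgetBounds = widget.getBounds();

// One cheap rectangle test guards dozens of draw calls.
if (widgetBounds.intersects(screen)) {
    widget.drawAllShapes(g); // skipped entirely, in one check, when off-screen
}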
According to the Graphics implementation of the most common draw/fill operations (e.g. drawRect; see the source of Graphics on grepcode.com), they start by checking whether the width and height are greater than zero before doing any more work, so adding your own check for x, y < 0 means doing the same number of operations in the worst case.
Keep in mind that a rectangle starting at -80, -90 as you said, but with a width and height of, say, 200, will still be partially displayed on screen.
Yes, it will still affect performance, as it still exists within the program; it's just not visible on the screen.
I am attempting to draw two textures containing transparency into 3D space. When they do not overlap, they work fine:
However, when one texture overlaps the other, the transparency means that you can see through the one behind:
I use GL_SRC_ALPHA and GL_ONE_MINUS_SRC_ALPHA when initialising blending.
You need to either depth sort or use alpha testing:
glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GREATER, 0.0f);
which will only draw pixels that have an alpha value greater than 0. However, this doesn't work for blending partially transparent pixels. Andon's solution is the one that I use, although I work in 2D and need transparency for smoke effects.
One possibility is to use the discard keyword in the fragment shader, since the fixed-function alpha test is no longer with us. This has the disadvantage of producing aliased object edges.
Another possibility is to depth-sort the objects and draw back to front. The obvious disadvantage is having to perform the transformations and the sorting in the first place. This can sometimes be avoided if the order of the objects can be determined statically (when the camera doesn't change much). Another disadvantage is overdraw: already-shaded pixels get overwritten by something else, throwing away performance.
Finally, you can use alpha-to-coverage, where the antialiasing hardware is employed to take care of the transparency. This doesn't require sorting and makes the edges of objects smooth. The disadvantage is that it is enabled per rendering context and may not be available everywhere.
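For illustration, a hedged sketch of the first and last options using GL ES 2.0 bindings (here via libGDX's GL20; the shader source is a typical textured-quad fragment shader, not taken from the question):

// Option 1: discard fully transparent fragments (replaces the old alpha test).
String fragmentShader =
      "#ifdef GL_ES\n"
    + "precision mediump float;\n"
    + "#endif\n"
    + "varying vec2 v_texCoords;\n"
    + "uniform sampler2D u_texture;\n"
    + "void main() {\n"
    + "    vec4 color = texture2D(u_texture, v_texCoords);\n"
    + "    if (color.a < 0.5) discard; // throw away (mostly) transparent pixels\n"
    + "    gl_FragColor = color;\n"
    + "}\n";

// Option 3: let multisampling hardware handle partial coverage instead of sorting.
Gdx.gl.glEnable(GL20.GL_SAMPLE_ALPHA_TO_COVERAGE);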
Let's assume your eye is at a surface point P1 on an object A, there is a target object B, and there is a point-light source behind object B.
Question: am I right to look toward the light source and say "I am in shadow" if I cannot see the light because of object B? I would then flag that point of object A as "one of the shadow points of B on A".
If this is true, can we build a black-colored "shadow geometry" object on the surface of A and then update it constantly as the light, B, A, etc. move in real time? Let's say a sphere (A) has 1000 vertices and the other sphere (B) has 1000 vertices too; does this mean a million comparisons (is shadowing O(N^2) in time)? I am not sure about the complexity, because moving P1 (the eye) also changes the seen point of B (between P1 and the light source point). What about second-order and higher shadows (such as light reflecting between two objects many times)?
I am using Java3D now, but it doesn't have shadow capabilities, so I am thinking of moving to other Java-compatible libraries.
Thanks.
Edit: I need to disable the "camera" while moving the camera to build that shadow. How can I do this? Does this decrease performance badly?
New idea: Java3D has built-in collision detection. I will create invisible lines from the light to each target polygon vertex, then check for a collision with another object. If a collision occurs, I add that vertex coordinate to the shadow list. But this would work only for point lights :(
Anyone who supplies a real shadow library for Java3D would be a great help.
A very small sample of Geomlib shadow/raytracing in Java3D would be the best.
Ray-tracing example maybe?
I know this is a little hard, but it must have been tried by at least a hundred people.
Thanks.
Shadows are probably the most complex topic in 3D graphics programming, and there are many approaches; the best option should be chosen according to the task requirements. The algorithm you are talking about is the simplest way to implement shadows from a spot light source onto a plane. It should not be done on the CPU, as you already use the GPU for 3D rendering.
Basically, the approach is to render the same object twice: once from the camera's point of view, and once from the light source's. You will need to prepare model-view matrices to convert between these two views. Once you render the object from the light's position, you get a depth map in which each value records the point closest to the light source. Then, for each pixel of the normal rendering, you convert its 3D coordinates into the light's view and check them against the corresponding depth value. This essentially gives you a way to tell which pixels are covered by shadow.
The performance impact comes from rendering the same object twice. If your task doesn't assume high scalability of the shadow-casting solution, this might be the way to go.
A number of relevant questions:
How Do I Create Cheap Shadows In OpenGL?
Is there an easy way to get shadows in OpenGL?
What is the simplest method for rendering shadows on a scene in OpenGL?
Your approach can be summarised like this:
foreach (point p to be shaded) {
    foreach (light) {
        if (light is visible from p)
            // p is lit by that light
        else
            // p is in shadow
    }
}
The funny thing is, that's essentially how real-time shadows are done on the GPU today.
However, it's not trivial to make this work efficiently. Rendering the scene is a streamlined, triangle-by-triangle process. It would be very cumbersome if, for every single point (pixel, fragment) of every single triangle, you had to consider all other triangles in order to check for ray intersection.
So how to do that efficiently? Answer: Reverse the process.
There are usually far fewer lights than pixels in the scene. Let's take advantage of this fact and do some preprocessing:
// preprocess
foreach (light) {
    // find all pixels p on the scene reachable from the light
}

// then render the whole scene...
foreach (point p to be shaded) {
    foreach (light) {
        // simply look up what was calculated before...
        if (p is visible by the light)
            // p is lit
        else
            // p is in shadow
    }
}
That seems a lot faster... But two problems remain:
how to find all pixels visible by the light?
how to make them accessible quickly for lookup during rendering?
Here's the tricky part:
In order to find all points visible by a light, place a camera there and render the whole scene! Depth test will reject the invisible points.
To make this result accessible later, save it as a texture and use that texture for lookup during the actual rendering stage.
This technique is called Shadow Mapping, and the texture with pixels visible from a light is called a Shadow Map. For a more detailed explanation, see for example the Wikipedia article.
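As an illustrative sketch of the two-pass structure in libGDX terms (the depth shader, the light-space matrix plumbing, and names like depthBatch are all assumptions, not a complete implementation):

// Pass 1: render the scene from the light's point of view into a depth map.
FrameBuffer depthBuffer = new FrameBuffer(Pixmap.Format.RGBA8888, 1024, 1024, true);
depthBuffer.begin();
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
depthBatch.begin(lightCamera);   // a camera placed at the light source
depthBatch.render(instances);    // renders depth only, via a depth shader
depthBatch.end();
depthBuffer.end();

// Pass 2: normal render; the fragment shader transforms each fragment
// into light space and compares its depth against the depth map to
// decide whether it is lit or shadowed.
sceneBatch.begin(sceneCamera);
sceneBatch.render(instances, environment);
sceneBatch.end();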
Basically yes, your approach will produce shadows. But doing it point by point is not feasible performance-wise (for real time) unless it's done on the GPU. I'm not familiar with what the APIs offer today, but I'm sure any recent engine will offer some shadows out of the box.
Your 'New idea' is how shadows were implemented back in the days when rendering was still done on the CPU. If the number of polygons isn't too big (or you can efficiently reject entire batches by having grouping volumes, etc.), it can be done with fairly little CPU power.
3D shadow rendering in vanilla Java is never going to be efficient. You'd best use graphics libraries written to utilize the full range of the graphics card's capabilities, such as OpenGL or DirectX. Since you are using a Canvas (from the screenshot you provided), you can even paint that Canvas from native code using JNI. So you could use all the technology from graphics libraries, do just a little fiddling, and paint your Canvas directly from native code. There would be very little work involved to make it work, compared to writing your own 3D engine.
Wiki link about AWT native access: http://en.wikipedia.org/wiki/Java_AWT_Native_Interface
Documentation: http://docs.oracle.com/javase/7/docs/technotes/guides/awt/AWT_Native_Interface.html