Using Java 2D I've patched several Bezier curves (CubicCurve2D) together to create a "blob". The problem I now face is how to:
Efficiently fill the blob with a given colour.
Efficiently determine whether a given point lies inside the blob.
I noticed that CubicCurve2D implements Shape, which provides numerous contains methods for determining "insideness", and that Graphics2D is able to fill a Shape via fill(Shape) (which I believe uses Shape's getPathIterator methods to do this).
Given this I was hoping I could create a composite Shape, whereby my getPathIterator(AffineTransform) method would simply link the underlying PathIterators together. However, this is producing a NoSuchElementException once my shape contains more than one CubicCurve2D. Even if I do manage to achieve this I'm not convinced it will work as expected because a CubicCurve2D is always filled on the convex side, and my "blob" is composed of concave and convex curves. The "contains" problem is even harder as a point can legitimately lie within the blob but not within any of the individual curves.
Am I approaching this problem in the correct way (trying to implement Shape?) or is there an idiomatic way to do this that I'm unaware of? I would have thought that the problem of compositing geometric shapes would be fairly common.
Does anyone have any suggestions regarding how to solve this problem?
Thanks in advance.
I'm not sure I understand your question, but composite shapes can be created with the class java.awt.geom.Area.
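For example, something along these lines; the control points here are invented, and Area implicitly closes each open curve along its chord, so two opposing halves union into one filled blob:

import java.awt.geom.Area;
import java.awt.geom.CubicCurve2D;

// e.g. inside paintComponent(Graphics g), with Graphics2D g2d = (Graphics2D) g;
CubicCurve2D top    = new CubicCurve2D.Double(10, 50, 20, 10, 80, 10, 90, 50);
CubicCurve2D bottom = new CubicCurve2D.Double(10, 50, 20, 90, 80, 90, 90, 50);

Area blob = new Area(top);                // Area closes the open curve for you
blob.add(new Area(bottom));               // union of the two halves

boolean inside = blob.contains(50, 40);   // insideness test on the composite
// g2d.fill(blob);                        // fills the whole blob in one call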
Looking to Shape for a solution is the right way to go about this. If you have a collection of curves that you are trying to assemble into a shape, I would suggest that you use a GeneralPath. Simply add your curves, or straight line segments, as required. Look to the interface to see the various append methods. Also note that you can 'complete' the shape by joining the last point to the starting point.
Once the path is closed, there are a number of different versions of contains() that can be used; please take the time to read each of their descriptions, as there are trade-offs in terms of speed and accuracy, depending on your application.
Also it is easy to get a shape from the path, and fill it, transform it, etc.
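A rough sketch of that approach, with invented control points (g2d stands for the Graphics2D you are painting with):

import java.awt.geom.CubicCurve2D;
import java.awt.geom.GeneralPath;

GeneralPath blob = new GeneralPath();
blob.append(new CubicCurve2D.Double(10, 50, 20, 10, 80, 10, 90, 50), false); // first curve starts the outline
blob.curveTo(80, 90, 20, 90, 10, 50);     // continue with another cubic segment
blob.closePath();                         // join the last point back to the start

boolean inside = blob.contains(50, 40);   // tested against the whole outline
// g2d.fill(blob);                        // fills the composite outline, not each curve

Concave and convex segments are handled the same way here, because filling and contains() operate on the closed outline as a whole rather than on the individual curves.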
Related
Following on from my previous post, I have a separate issue that I want to check.
I want to redo my calculations for having objects orbit a sphere on their own unique orbits, at various heights (radii) and angles (orbital planes). My previous post explains the method I am using; however, I am getting quite a bit of unexpected behaviour, which I think is due to the mapping method I use. Objects start "turning" on their own and going off in changing directions without being commanded to.
SETUP:
I have a flat 2D grid of 1000x1000 where I keep track of objects
I then map these to the sphere and convert to 3D coords.
However, this is probably causing issues, since a flat 2D grid cannot be wrapped onto a sphere without huge distortion, so I would need to convert it to a Mercator projection first and then wrap that onto the sphere.
Before it gets too complicated, would it be far easier to just deal with Matrix4s or quaternions and represent everything by rotations instead? I still need to keep track of all objects and their positions on the sphere (for simplicity, let's say on the surface), but I need to be able to modify the objects' orbits, for example their height or direction (orbital plane).
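For concreteness, the kind of bookkeeping I have in mind would be roughly this (plain double[] vectors and made-up names, using Rodrigues' rotation formula rather than any particular library):

// One orbit = (radius, unit plane normal n, angle). Position is a reference
// direction in the plane rotated about n by the angle, scaled by the radius.
static double[] orbitPosition(double radius, double[] n, double angle) {
    double[] ref = referenceInPlane(n);   // any unit vector lying in the orbital plane
    double c = Math.cos(angle), s = Math.sin(angle);
    double[] cross = {
        n[1] * ref[2] - n[2] * ref[1],
        n[2] * ref[0] - n[0] * ref[2],
        n[0] * ref[1] - n[1] * ref[0]
    };
    double[] p = new double[3];
    for (int i = 0; i < 3; i++) {
        p[i] = radius * (ref[i] * c + cross[i] * s);   // n . ref = 0, so the third Rodrigues term drops out
    }
    return p;
}

static double[] referenceInPlane(double[] n) {
    // pick an axis not parallel to n, project it onto the plane, normalise
    double[] a = Math.abs(n[0]) < 0.9 ? new double[]{1, 0, 0} : new double[]{0, 1, 0};
    double d = a[0] * n[0] + a[1] * n[1] + a[2] * n[2];
    double[] r = {a[0] - d * n[0], a[1] - d * n[1], a[2] - d * n[2]};
    double len = Math.sqrt(r[0] * r[0] + r[1] * r[1] + r[2] * r[2]);
    return new double[]{r[0] / len, r[1] / len, r[2] / len};
}

Stepping an orbit is then just advancing the angle, changing the height is changing the radius, and changing the orbital plane is changing n.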
Can someone suggest a cleaner way to represent these values locally? I can see this getting very messy otherwise.
Many thanks,
J
This question already has answers here: Visualize vector graphics in Java, which library? (2 answers). Closed 8 years ago.
I am currently trying to make a game using Java, and I wanted to use vector graphics to avoid the blocky feel of many other current games. I've looked into using some of the various Shape implementations in Java, like Path2D and Area. The problem is that neither one has all the functionality I need. Here is what I am looking for:
I need to be able to create it using vector graphics based methods (lineTo(), moveTo(), etc.)
I need to be able to draw it onto a JPanel
I need to be able to test whether two instances intersect, or at the very least, whether one contains a point.
Finally, I need to be able to merge them together or subtract one from another.
I know this is kind of a long shot, but I was hoping someone might know of a library or something that has this functionality.
As far as libraries go, I don't have an answer.
I might have missed a point but, as far as I know, Graphics2D Shapes and Areas have all the functions you listed. What do they lack that you need?
Of course, JavaFX has all you asked for, but if you want to write a Swing app:
I did not use them for a game, but for an editor, and here is what I would suggest:
create your own graphic element class. Use Shape and Area to implement it, and make your elements composable (see the sketch after this list).
for collision, have a getArea() method on your elements. The area can be the union of all the areas that make up your element.
in real life, it might be a bit slow, however... Consider doing some pre-rendering to bitmap images (maybe when starting the game, once you know the resolution?).
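A rough sketch of such an element class (all names are mine, just to show the Shape/Area combination):

import java.awt.geom.Area;
import java.awt.geom.Path2D;

class GameElement {
    private final Path2D.Double outline = new Path2D.Double();

    void addTriangle(double x, double y, double size) {
        outline.moveTo(x, y);                     // vector-style construction
        outline.lineTo(x + size, y);
        outline.lineTo(x + size / 2, y - size);
        outline.closePath();
    }

    Area getArea() {
        return new Area(outline);                 // for boolean ops and collision
    }

    boolean collidesWith(GameElement other) {
        Area a = getArea();
        a.intersect(other.getArea());             // boolean AND of the two outlines
        return !a.isEmpty();
    }

    void mergeWith(GameElement other) {
        Area merged = getArea();
        merged.add(other.getArea());              // union; use subtract(...) to cut away
        outline.reset();
        outline.append(merged, false);
    }
}

Drawing onto a JPanel is then just g2.fill(element.getArea()) (or fill the Path2D directly) inside paintComponent().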
Let's assume your eye is at a surface point P1 on an object A, there is a target object B, and there is a point-light source behind object B.
Question: am I right if I look towards the light source and say "I am in shadow" whenever I cannot see the light because of object B? I then flag that point of object A as "one of the shadow points of B on A".
If this is true, can we build a "shadow geometry" (black-colored) object on the surface of A and then update it constantly, in real time, as the light, B, A, etc. move? Let's say a sphere (A) has 1000 vertices and the other sphere (B) has 1000 vertices too; does this mean 1 million comparisons (is shadowing O(N^2) in time complexity)? I am not sure about the complexity, because changing P1 (the eye) also changes which point of B is seen (between P1 and the light source). What about second-order shadows and higher (such as light reflecting between two objects many times)?
I am using Java 3D now, but it doesn't have shadow capabilities, so I am thinking of moving to other Java-compatible libraries.
Thanks.
Edit: I need to disable the "camera" while moving the camera to build that shadow. How can I do this? Does this decrease performance badly?
New idea: Java 3D has built-in collision detection. I will create invisible lines from the light to each target polygon vertex, then check for a collision with another object. If a collision occurs, I add that vertex coordinate to the shadow list, but this would only work for point lights :(.
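In plain Java (independent of Java 3D's collision API) the test I mean would look roughly like this, with B approximated by a bounding sphere and all names made up:

// A vertex of A is shadowed by B if the segment from the point light to the
// vertex intersects B's bounding sphere.
static boolean inShadow(double[] light, double[] vertex,
                        double[] sphereCenter, double sphereRadius) {
    double[] d = sub(vertex, light);              // segment direction (not normalised)
    double[] m = sub(light, sphereCenter);
    double a = dot(d, d);
    double b = 2 * dot(m, d);
    double c = dot(m, m) - sphereRadius * sphereRadius;
    double disc = b * b - 4 * a * c;
    if (disc < 0) return false;                   // the segment's line misses the sphere
    double sq = Math.sqrt(disc);
    double t1 = (-b - sq) / (2 * a);
    double t2 = (-b + sq) / (2 * a);
    // shadowed only if a hit lies strictly between the light (t=0) and the vertex (t=1)
    return (t1 > 1e-6 && t1 < 1 - 1e-6) || (t2 > 1e-6 && t2 < 1 - 1e-6);
}

static double[] sub(double[] a, double[] b) {
    return new double[]{a[0] - b[0], a[1] - b[1], a[2] - b[2]};
}

static double dot(double[] a, double[] b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}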
Anyone who can supply a real shadow library for Java 3D would be very helpful.
A very small sample of Geomlib shadow/ray tracing in Java 3D would be best.
Ray-tracing example maybe?
I know this is a little hard, but it has probably been tried by at least a hundred people.
Thanks.
Shadows are probably the most complex topic in 3D graphics programming, and there are many approaches, but the best option should be chosen according to the task requirements. The algorithm you are talking about is the simplest way to implement shadows from a spot light source onto a plane. It should not be done on the CPU, as you already use the GPU for 3D rendering.
Basically, the approach is to render the same object twice: once from the camera's point of view, and once from the light source's. You will need to prepare model-view matrices to convert between these two views. Once you render the object from the light's point of view, you get a depth map that stores, for each point, the distance of the closest surface to the light source. Then, for each pixel of the normal rendering, you convert its 3D coordinates into the light's view and check them against the corresponding depth value. This essentially gives you a way to tell which pixels are covered by shadow.
The performance impact comes from rendering the same object twice. If your task doesn't require a highly scalable shadow-casting solution, this might be the way to go.
A number of relevant questions:
How Do I Create Cheap Shadows In OpenGL?
Is there an easy way to get shadows in OpenGL?
What is the simplest method for rendering shadows on a scene in OpenGL?
Your approach can be summarised like this:
foreach (point p to be shaded) {
    foreach (light) {
        if (light is visible from p)
            // p is lit by that light
        else
            // p is in shadow
    }
}
The funny fact is that's how real-time shadows are done today on the GPU.
However, it's not trivial to make this work efficiently. Rendering the scene is a streamlined process, triangle by triangle. It would be very cumbersome if, for every single point (pixel, fragment) in every single triangle, you had to consider all other triangles in order to check for ray intersection.
So how to do that efficiently? Answer: Reverse the process.
There are usually far fewer lights than pixels in the scene. Let's take advantage of this fact and do some preprocessing:
// preprocess
foreach (light) {
    // find all pixels p on the scene reachable from the light
}

// then render the whole scene...
foreach (point p to be shaded) {
    foreach (light) {
        // simply look up into what was calculated before...
        if (p is visible by the light)
            // p is lit
        else
            // p is in shadow
    }
}
That seems a lot faster... But two problems remain:
how to find all pixels visible by the light?
how to make them accessible quickly for lookup during rendering?
There's the tricky part:
In order to find all points visible from a light, place a camera there and render the whole scene! The depth test will reject the invisible points.
To make this result accessible later, save it as a texture and use that texture for lookup during the actual rendering stage.
This technique is called Shadow Mapping, and the texture with pixels visible from a light is called a Shadow Map. For a more detailed explanation, see for example the Wikipedia article.
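To make the lookup concrete, here is a toy CPU-side sketch of the two passes; real implementations do this on the GPU with a depth texture, and the projection/distance helpers here are hypothetical hooks, not any library's API:

// Not a renderer, just the bookkeeping behind a shadow map.
abstract class ShadowMapSketch {
    static final int SIZE = 512;
    final float[][] shadowMap = new float[SIZE][SIZE];

    // Hypothetical hooks: project a world-space point into the light's view
    // (returning texel indices) and measure its distance to the light.
    abstract int[] projectIntoLightView(double[] worldPoint);
    abstract float distanceToLight(double[] worldPoint);

    // Pass 1: "render" from the light, keeping the smallest light-distance per texel.
    void buildShadowMap(Iterable<double[]> scenePoints) {
        for (float[] row : shadowMap) java.util.Arrays.fill(row, Float.MAX_VALUE);
        for (double[] p : scenePoints) {
            int[] uv = projectIntoLightView(p);
            shadowMap[uv[1]][uv[0]] = Math.min(shadowMap[uv[1]][uv[0]], distanceToLight(p));
        }
    }

    // Pass 2: during the normal render, a point is in shadow if something else
    // was closer to the light along the same direction.
    boolean isShadowed(double[] p) {
        int[] uv = projectIntoLightView(p);
        float bias = 0.005f;   // small offset against self-shadowing ("shadow acne")
        return distanceToLight(p) > shadowMap[uv[1]][uv[0]] + bias;
    }
}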
Basically yes, your approach will produce shadows. But doing it point by point is not feasible performance-wise (for real time), unless it's done on the GPU. I'm not familiar with what the APIs offer today, but I'm sure any recent engine will offer some form of shadows out of the box.
Your 'New idea' is how shadows were implemented back in the days when rendering was still done on the CPU. If the number of polygons isn't too big (or you can efficiently reject entire bunches by grouping them with bounding volumes, etc.), it can be done with fairly little CPU power.
3D shadow rendering in vanilla Java is never going to be efficient. You are best off using graphics libraries written to utilize the full capabilities of the graphics card, such as OpenGL or DirectX. As you are using a Canvas (from the screenshot you provided), you can even paint that Canvas from native code using JNI. So you could use all the technology from those graphics libraries, do just a little fiddling, and paint your Canvas directly from the native code. There would be very little work involved to make it work, compared to writing your own 3D engine.
Wiki link about AWT native access: http://en.wikipedia.org/wiki/Java_AWT_Native_Interface
Documentation: http://docs.oracle.com/javase/7/docs/technotes/guides/awt/AWT_Native_Interface.html
I am trying to test thousands of points against a million polygons via web services. At first I implemented the point-in-polygon algorithm in Java, but it took a long time. Then I split the table in MySQL and tried using multiple threads to solve it, but it was still inefficient. Is there a faster algorithm or implementation to solve this?
Some more detail about the polygons: they are 2D, static, and complex (some with holes).
Any suggestion will be appreciated.
Testing a point against a million polygons is going to take a lot of time, no matter how efficient your point in polygon function is.
You need to narrow down the search list. Start by making a bounding box for each polygon and only selecting the polygons when the point is within the bounding box.
If the polygons are unchanging you could convert each polygon to a set of triangles. Testing to see if a point is in a triangle should be much faster than testing to see if it's in an arbitrary polygon. Even though the number of triangles will be much larger than the number of polygons, it might be faster overall.
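A minimal sketch of the bounding-box pre-filter, assuming java.awt.geom.Path2D polygons (swap in whatever polygon type you actually use; polygons, x and y stand for your data):

import java.awt.geom.Path2D;
import java.awt.geom.Rectangle2D;
import java.util.ArrayList;
import java.util.List;

// Precompute once per polygon:
List<Rectangle2D> boxes = new ArrayList<>();
for (Path2D polygon : polygons) {
    boxes.add(polygon.getBounds2D());
}

// Per query point: the cheap rectangle test rejects most polygons before the
// expensive point-in-polygon test ever runs.
boolean hit = false;
for (int i = 0; i < polygons.size(); i++) {
    if (boxes.get(i).contains(x, y) && polygons.get(i).contains(x, y)) {
        hit = true;
        break;
    }
}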
If the collection of polygons is static it may be helpful to first register them onto a spatial data structure - an R-tree might be a good choice, assuming that the polygons do not overlap each other too much.
To test a point against the polygon collection the enclosing leaf in the tree would first be found (an O(log(n)) style operation) and then it would only be necessary to perform the full point-in-polygon test for the polygons that are associated with the enclosing leaf.
This approach should greatly speed up each point test, but requires an additional setup phase to build the R-tree.
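If you don't want to build an R-tree yourself, the JTS library's STRtree is one option (a static R-tree, which fits the unchanging-collection assumption). A sketch, assuming your polygons are already JTS geometries and x, y is the query point:

import org.locationtech.jts.geom.*;
import org.locationtech.jts.index.strtree.STRtree;

// Build once:
STRtree index = new STRtree();
for (Polygon polygon : polygons) {
    index.insert(polygon.getEnvelopeInternal(), polygon);   // indexed by bounding box
}
index.build();   // the tree is read-only after this

// Per query point: candidates from the index, then the exact containment test.
GeometryFactory gf = new GeometryFactory();
Point p = gf.createPoint(new Coordinate(x, y));
for (Object candidate : index.query(p.getEnvelopeInternal())) {
    if (((Polygon) candidate).contains(p)) {
        // the point is inside this polygon
    }
}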
Hope this helps.
If you deal with millions of polygons, you need some kind of space partitioning, or it's gonna be slow, no matter how optimized your hit-test function is or how many threads work on solving your query.
What kind of space partitioning? It depends:
2D? 3D?
Is your polygon set static? If not, does it change frequently?
What kind of queries are you running on this set?
What kind of polygons are they? Triangles? Convex? Concave? Complex? With holes?
We need more information to help you.
EDIT
Here is a simple space partitioning scheme.
Suppose there is a Cartesian grid over your 2D space with a given step.
When you add a polygon:
Compute its bounding box
Find all the grid cells that intersect with the bounding box
For each cell, add a row to a special table.
The table looks like this: cell_x, cell_y, polygon_id. Add the proper indexes (at least on cell_x and cell_y).
Of course, you want to choose your grid step so that most polygons lie in fewer than 10 cells, or else your cell table will quickly become huge.
It's now easy to find the polygons at a given point:
Compute which cell your point falls in
Get all polygons associated to this cell
For each polygon, use your hit-test function
This solution is far from optimal, but it is easy to implement.
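An in-memory Java sketch of that cell table (the MySQL version stores the same cell_x, cell_y, polygon_id rows; CELL is the grid step you would tune):

import java.awt.geom.Path2D;
import java.awt.geom.Rectangle2D;
import java.util.*;

class PolygonGrid {
    static final double CELL = 100.0;   // grid step: tune so most polygons cover fewer than ~10 cells
    private final Map<Long, List<Path2D>> cells = new HashMap<>();

    void add(Path2D polygon) {
        Rectangle2D b = polygon.getBounds2D();
        for (int cx = (int) Math.floor(b.getMinX() / CELL); cx <= (int) Math.floor(b.getMaxX() / CELL); cx++) {
            for (int cy = (int) Math.floor(b.getMinY() / CELL); cy <= (int) Math.floor(b.getMaxY() / CELL); cy++) {
                cells.computeIfAbsent(key(cx, cy), k -> new ArrayList<>()).add(polygon);
            }
        }
    }

    // Candidates only; run the exact hit-test function on each of them afterwards.
    List<Path2D> candidatesAt(double x, double y) {
        return cells.getOrDefault(key((int) Math.floor(x / CELL), (int) Math.floor(y / CELL)),
                                  Collections.emptyList());
    }

    private static long key(int cx, int cy) {
        return (((long) cx) << 32) ^ (cy & 0xffffffffL);   // pack the two cell indices into one key
    }
}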
I think this is a case where divide and conquer would do: you could try making sub-polygons or simplifying some of the points, or maybe try a heuristic approach. Those are my five cents.
Hey, I'm currently trying to extract information from a 3D array, where each entry represents a coordinate, in order to draw something from it. The problem is that the array is ridiculously large (and there are several of them), meaning I can't actually draw all of it.
What I'm trying to accomplish, then, is just to draw a representation of the outside coordinates, a shell of the array if you like. The array is not full: it can have large empty spaces with only a few pixels set, or large clusters of pixel data grouped together. I do not know what kind of shape to expect (could be a simple cube, or a complex concave mesh), and am struggling to come up with an algorithm to effectively extract the border. The array effectively stores a set of points in 3D space.
I thought of creating six 2D meshes (one for each side of the 3D array), getting the shallowest point found for each position, and then drawing them separately. As I said, however, this 3D shape could be concave, which creates problems with this approach. Imagine a cone with a circle on top (said circle bigger than the cone's base). While the top and side meshes would get the correct depth info out of the shape, the bottom mesh would connect the base to the circle through vertical lines, making me effectively lose the conical shape.
Then I thought of analysing the array slice by slice and creating 2 meshes from the slice data. I believe this should work for any type of shape; however, I'm struggling to find an algorithm which accurately gives me the border info for each slice. Once again, if you just try to create height maps from the slices, you will run into problems if they have any concavities. I also thought of some sort of edge-tracking algorithm, but the array does not provide continuous data, and there is almost certainly not a continuous edge along each slice.
I tried looking into volume rendering, as used in medical imaging and such, as it deals with similar problems to the one I have, but couldn't really find anything that I could use.
If anyone has any experience with this sort of problem, or any valuable input, could you please point me in the right direction?
P.S. I would prefer to get a closed representation of the shell, hence my earlier 2D mesh approach. However, an approach that simply gives me the shell points, without any connections between them, would still be extremely helpful.
Thank you,
Ze
I would start by reviewing your data structure. As you observed, the array does not maintain any obvious spatial relationships between points. An octree is a pretty good representation for data like you described. Depending upon the complexity of your point set, you may be able to find the crust using just the octree, assuming you have some connectivity between nearby points.
Alternatively, you may then turn to more rigorous algorithms like raycasting or marching cubes.
I guess it's a bit late by now to be truly useful to you, but for reference I'd say this is a perfect scenario for volumetric modeling (as you guessed yourself). As long as you know the bounding box of your point cloud, you can map these coordinates to a voxel space and increase the density (value) of each voxel for each data point. Once you have your volume fully defined, you can then use the marching cubes algorithm to produce a 3D surface mesh for a given threshold value (iso value). The resulting surface doesn't need to be continuous, but it will wrap all voxels with values > isovalue inside it. The 2D equivalent is a heat map... You can refine the surface quality by adjusting the iso threshold (higher means tighter) and the voxel resolution.
Since you're using Java, you might like to take a look at my toxiclibs volumeutils library, which also comes with several examples (for Processing) showing the general approach...
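For illustration, the voxelization step alone could look like this in plain Java (no library; the resolution and bounding box parameters are placeholders), after which marching cubes is run over the density grid at a chosen iso value:

import java.util.List;

// Map each point into a res x res x res grid and accumulate density per voxel.
static float[][][] voxelize(List<float[]> points, float[] min, float[] max, int res) {
    float[][][] volume = new float[res][res][res];
    for (float[] p : points) {
        int x = clamp((int) ((p[0] - min[0]) / (max[0] - min[0]) * res), res);
        int y = clamp((int) ((p[1] - min[1]) / (max[1] - min[1]) * res), res);
        int z = clamp((int) ((p[2] - min[2]) / (max[2] - min[2]) * res), res);
        volume[x][y][z] += 1f;
    }
    return volume;
}

static int clamp(int i, int res) {
    return Math.max(0, Math.min(res - 1, i));
}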
"Imagine a cone with a circle on top (said circle bigger than the cone's base). While the top and side meshes would get the correct depth info out of the shape, the bottom mesh would connect the base to the circle through vertical lines, making me effectively lose the conical shape."
Even an example as simple as this would be impossible to reconstruct manually, let alone algorithmically. The possibility of your data representing a cylinder with a cone shaped hole is as likely as the vertices representing a cone with a disk attached to the top.
"I do not know what kind of shape to expect (could be a simple cube..."
Again, without further information on how the data was generated, 8 vertices arranged in the form of a cube might as well represent 2 crossed squares. If you knew that the data was generated by, for example, a rotating 3D scanner of some sort, then that would at least be a start.