Rectangle Avoidance in Boids - Java

I am making an adaptation of the classic Boids simulation from the 80s in Java. It works well enough, but I am trying to add a new rule to the behavior that would force the agents to avoid rectangles (walls) and I am not sure how to go about this.
I have seen this thread:
https://gamedev.stackexchange.com/questions/45381/wall-avoidance-steering
But I am confused by the syntax used in the final code presented (like partsList[j]->normal), by how to obtain the distance between the agent and the rectangle, and by how to actually drive the agents away. The formula makes sense, though. Could someone please explain it to me? Thank you very much!
P.S. I have been following this pseudocode and I also used this Java source code as a reference.
Edit: Okay, I see why I was confused with the syntax, but I am still in the dark when it comes to writing the wall avoidance rule.

Ah, I remember the 80s, and the boids simulation…
Generally in boids-like steering behaviors the idea is to apply a behavioral “steering force” against the velocity (momentum) of the agent. So implementing any given steering behavior comes down to finding some geometric construction that generates a vector pointing in the direction you want to turn. Ideally these will be tangential steering forces (perpendicular to the current velocity) so that steering is independent of speed control.
In the case of avoiding a wall—and a rectangle can be thought of as four walls—the general idea is to take a vector pointing away from (normal to) the wall. Using projection (dot product) you can separate out the components of that force that are parallel to and perpendicular to the velocity vector. The component of the wall normal that is perpendicular to the agent’s velocity is a steering force that will turn the agent away from the wall.
The other aspect is knowing when to use this wall avoidance behavior. A useful approach is to choose a time horizon, say 2 seconds, and decide if the agent would hit the wall within that time. Using current position, velocity, and that time value, you can do a simple linear prediction of where the agent will be in 2 seconds. If it crosses the wall during that interval then it ought to be using its wall avoidance behavior.
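Here is a minimal sketch of those two steps for one axis-aligned rectangle, in plain Java. It is not the code from that linked thread; the method signature, the maxSteer parameter, and the time horizon are all made up for illustration, and a real boid would call something like this once per rectangle and add the result to its other steering forces.

    // Sketch only: returns a 2D steering force {fx, fy} for one axis-aligned
    // rectangle (minX..maxX, minY..maxY). All names are illustrative.
    static double[] avoidRectangle(double px, double py, double vx, double vy,
                                   double minX, double minY, double maxX, double maxY,
                                   double horizonSeconds, double maxSteer) {
        // 1. Linear prediction of where the boid will be after `horizonSeconds`.
        double futureX = px + vx * horizonSeconds;
        double futureY = py + vy * horizonSeconds;

        // 2. If the predicted position stays outside the rectangle, don't steer.
        if (futureX < minX || futureX > maxX || futureY < minY || futureY > maxY) {
            return new double[] {0, 0};
        }

        // 3. Outward normal of the rectangle side nearest to the boid.
        double dLeft = Math.abs(px - minX), dRight = Math.abs(maxX - px);
        double dBottom = Math.abs(py - minY), dTop = Math.abs(maxY - py);
        double nearest = Math.min(Math.min(dLeft, dRight), Math.min(dBottom, dTop));
        double nx = 0, ny = 0;
        if      (nearest == dLeft)   nx = -1;
        else if (nearest == dRight)  nx =  1;
        else if (nearest == dBottom) ny = -1;
        else                         ny =  1;

        // 4. Project the normal onto the heading (dot product) and subtract that
        //    parallel part, keeping only the component perpendicular to the
        //    velocity: the tangential steering force.
        double speed = Math.sqrt(vx * vx + vy * vy);
        double hx = vx / speed, hy = vy / speed;
        double along = nx * hx + ny * hy;
        double sx = nx - along * hx, sy = ny - along * hy;

        double len = Math.sqrt(sx * sx + sy * sy);
        if (len < 1e-9) {       // heading straight at the wall: pick either tangent
            sx = -hy; sy = hx; len = 1;
        }
        return new double[] {sx / len * maxSteer, sy / len * maxSteer};
    }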
For more information, look up “containment” in the GDC '99 steering-behaviors paper, and/or look at these:
http://natureofcode.com/book/chapter-6-autonomous-agents/
https://gamedevelopment.tutsplus.com/series/understanding-steering-behaviors--gamedev-12732

Related

cost / mapping function for determining center of object based on detected features

I wrote an object tracker that tries to detect and follow a moving object in a recorded video. In order to maximize the detection rate, my algorithm uses a bunch of detection & tracking algorithms (cascade, foreground & particle tracker). Each tracking algorithm returns some points of interest that might be part of the object that I'm trying to track. Let's assume (for the simplicity of this example) that my object is a rectangle and that the three tracking algorithms returned the points 1, 2 and 3:
Based on the relation / distance of these three points it is possible to calculate the center of gravity (blue X in above image) of the tracked object. So for each frame I might be able to come up with some good estimate of the center of gravity. However, the object might move from one frame to the next:
In this example I merely rotated the original object. My algorithm will give me three new points of interest: 1',2' and 3'. I could again calculate the center of gravity based on these three new points, but I would throw away important information that I've acquired from the previous frame: based on points 1, 2 and 3 I already do know something about the relationship of these points and thus by combining the information from 1, 2 and 3 and 1',2' and 3' I should be able to come up with a better estimate of the center of gravity.
Furthermore, the next frame might yield a fourth data point:
This is what I would like to do (but I don't know how):
Based on the individual points (and their relationship to each other) that are returned from the different tracking algorithms, I want to build up a localization map of the tracked object. Intuitively I feel like I need to come up with A) an identification function that will identify individual points across frames and B) some cost function that will determine how similar tracked points (and the relationship / distance between them) are from frame to frame, but I can't get my head around how to implement this. Alternatively, maybe some kind of map build-up based on the points would work. But again, I don't know how to approach this.
Any advice (and example code) is highly appreciated!
EDIT1
A simple particle filter would probably work too, but again I don't know how to define the cost function. A particle filter for tracking a certain color is easy to program: for each pixel you calculate the difference between the target color and the pixel color. But how would I do the same for estimating the relationship between tracked points?
EDIT2: Intuitively, I feel like Kalman filters could also help with the prediction step. See slides 24-32 of this PDF. Or am I mistaken?
What I think you're trying to do is essentially build up a state space of features, which can be applied to a filtering process, such as an Extended Kalman Filter. This is a useful framework when you have multiple observations in every frame, and you're trying to estimate or measure something indicated by these observations.
To determine the similarity of the tracked points, you can perform simple template matching from frame to frame for small regions around the points. One way of doing this is to extract an NxN (say, 7x7) region around point a in frame n and point a' in frame n+1, followed by normalised cross correlation between the extracted regions. This will give you a reasonable measure of how similar the patches are. If the patches are not similar, then you've probably lost track of that point.
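A rough sketch of that patch comparison in plain Java, assuming grayscale frames stored as double[row][col] and points lying at least N/2 pixels away from the image border (the function name and data layout are just for illustration):

    // Normalised cross-correlation of the N x N patch around (ax, ay) in frameA
    // with the N x N patch around (bx, by) in frameB. N should be odd, e.g. 7.
    static double ncc(double[][] frameA, int ax, int ay,
                      double[][] frameB, int bx, int by, int n) {
        int r = n / 2;
        double meanA = 0, meanB = 0;
        for (int dy = -r; dy <= r; dy++)
            for (int dx = -r; dx <= r; dx++) {
                meanA += frameA[ay + dy][ax + dx];
                meanB += frameB[by + dy][bx + dx];
            }
        meanA /= n * n;
        meanB /= n * n;

        double num = 0, varA = 0, varB = 0;
        for (int dy = -r; dy <= r; dy++)
            for (int dx = -r; dx <= r; dx++) {
                double a = frameA[ay + dy][ax + dx] - meanA;
                double b = frameB[by + dy][bx + dx] - meanB;
                num  += a * b;
                varA += a * a;
                varB += b * b;
            }
        // Result lies in roughly [-1, 1]; values near 1 mean the patches match well,
        // low values suggest you have probably lost track of that point.
        return num / Math.sqrt(varA * varB + 1e-12);
    }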
There is an enormous literature on this and related problems, starting in the 1980s. Try searching for "optical flow" algorithms. The input to such algorithms is two successive frames of the same scene. The output is a vector field, one vector per pixel in the second image, which shows the direction and speed of movement of the feature at that pixel. This presentation is a pretty nice summary.
A nice thing about optical flow is that many algorithms for it parallelize nicely and map onto your favorite video card GPU, so they can run in real time. Think ESPN overlays.
In my view, in order to identify who is who in each frame, you will have to use a higher dimension. For example, if you want to know which point is which between two frames (assuming your extracted points are the same), you will have to build vectors or simplexes and then deduce an organization among your points (like angle values).
The main problem is that the number of combinations grows with the number of points. If your camera is fixed, then you could use the background as a reference in order to deduce object rotations and translations; I mean, build vectors between background interest points and object points in order to clearly identify them.
Hope that helps you go forward.
I would recommend looking into the divided difference filter (DDF), which is similar to the extended Kalman filter (EKF), but does not require an approximate model of the dynamics of your system (which you may not have). Basically, the DDF approximates the derivatives used in the EKF using a difference equation. There are plenty of papers online about this, but I do not know whether you have access to them, so I have not linked them here. If you are working at a university or a company that has access to online journals (like IEEE Xplore), then just Google "divided difference filter" and check out some of the papers.

explanation of a wave

I just started experimenting with Robocode and read about waves: http://robowiki.net/wiki/Wave
What I don't understand is why circles are used here.
I mean, when I shoot a bullet, I only shoot it in one single direction, not in every direction as a circle seems to imply.
Can anyone explain that concept to me in other words?
I'm just stuck right now.
Thanks,
Julian
The above answers hit the main points of why waves are a useful abstraction: as an optimization for collecting firing angles that would hit the target, relative to firing directly at the enemy.
Another use of waves is in bullet dodging movements. When you see the enemy fire a bullet (by monitoring its energy), you know the bullet's origin and speed, but not its exact location, as you can't see bullets. In this case, the wave represents what you know about the bullet: all its possible locations. If you get hit, you can similarly deduce the relative firing angle the enemy used. Later, you can use that data to evaluate the danger of different points on each wave and decide the safest place to intersect the wave (aka "Wave Surfing").
It looks like the wave approach is meant as some optimization of a naive implementation.
The basic concept would then be to determine the point in time when the projectile passes the target. This can simply be done by comparing the distance the 'wave' travelled from its origin ("wave_velocity * (time_now - time_fired)") to the distance of the target to the origin of this wave.
Once the two distances become equal (or the wave passes the target), the bearing from the wave's origin to the target's current location can be calculated and compared to the bearing of the projectile. If these two bearings are close enough to each other the target is considered 'hit'; otherwise the target was missed and the projectile can be disregarded for further calculations. (Assuming the target cannot move faster than the projectile.)
The optimization in this is that for every time-step only a couple of distances have to be calculated and compared to determine if the actual 'hit-check' needs to be performed. This way the projectiles need not be traced exactly in two (or three) dimensions but only in a single one (distance) which may save a significant amount of computation.
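A plain-Java sketch of that check; the parameter names, the HIT_TOLERANCE constant, and the bearing convention are illustrative rather than taken from the Robocode API:

    // "Close enough" bearing difference for the target to count as hit.
    static final double HIT_TOLERANCE = Math.toRadians(3);

    static boolean waveHitsTarget(double originX, double originY, long fireTime,
                                  double waveSpeed, double firedBearing,
                                  long timeNow, double targetX, double targetY) {
        double travelled = waveSpeed * (timeNow - fireTime);
        double dx = targetX - originX, dy = targetY - originY;
        double targetDistance = Math.hypot(dx, dy);

        if (travelled < targetDistance) {
            return false;               // the wave has not reached the target yet
        }
        // The wave has passed the target: compare the bearing from the wave's
        // origin to the target with the bearing the projectile was fired at.
        double bearingToTarget = Math.atan2(dx, dy);   // Robocode-style: 0 = north
        double diff = Math.abs(normalizeAngle(bearingToTarget - firedBearing));
        return diff < HIT_TOLERANCE;
    }

    static double normalizeAngle(double a) {           // wrap into (-PI, PI]
        while (a >  Math.PI) a -= 2 * Math.PI;
        while (a <= -Math.PI) a += 2 * Math.PI;
        return a;
    }

Each turn you would run this for every live wave; once a wave has passed the target (whether it "hit" or not), it can be discarded.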

jMonkey optimization similar to Java3D's

Edit: To get real-time drawing, I started using LWJGL (which jMonkeyEngine is built on) and JOCL, with OpenGL/OpenCL interoperability; now I can calculate and draw 100k particles in real time. Maybe a Mantle version of the jMonkey engine could cure this draw-call overhead problem.
For several days, I have been learning the jMonkeyEngine (version 3.0) in Eclipse (64-bit Java) and trying to optimize a scene using the GeometryBatchFactory.optimize(rootNode); command.
Without optimization (with the ability to change the spheres' positions):
Okay, only 1 fps, caused by both PCI-Express bandwidth and JVM overhead.
With optimization (without the ability to change the spheres' positions):
Now it is 29 fps, even with an increased triangle count.
Java3D had a setCapability() method which let a scene object be read/written even in an optimized form. jMonkeyEngine 3.0 must be capable of something similar, but I couldn't find any trace of it (I searched the tutorials and examples without success).
Question: How can I set read/write position/rotation/scale capabilities on the optimized nodes of a scene in jMonkey 3.0? If you cannot answer the first question, can you tell me why the triangle count increases when I use the optimization command? Do I have to write my own method to access the graphics card and change the variables myself (JOGL, maybe)?
Scene information: 16k particles (spheres of 16x16 resolution) + 1 point light (and its 4096-resolution shadow).
I'm sure several thousand floats can be sent through PCI-Express in a millisecond with ease.
Additional info: I'm using Aparapi kernels to update the particle positions, which takes 10 milliseconds (16k * 16k interactions to calculate forces). (This does not change anything in optimized mode. :( )
Can Aparapi access that optimized data?
For the batchNode.batch(); optimization, here is 1 fps again, with a reduced object count:
The object count is now only several hundred, but the fps is still at 1!
Sending just the sphere positions to the GPU and letting it calculate the vertex positions could be better than calculating the vertices on the CPU and sending that huge amount of data to the GPU.
No one here to help? I have already tried batchNode, but it did not help enough.
I don't want to change 3D APIs, because the jMonkey people have already reinvented the wheel and I'm happy with the current situation. I'm just trying to squeeze out a little more performance (cancelling shadows gives a 100% speed-up, but quality is important too!).
This Java program will become an asteroid-impact scene simulator (with a choice of asteroid size, mass, speed, and angle) using a marching-cubes algorithm with LOD (there will be millions of particles).
A marching-cubes algorithm would decrease the triangle count greatly. If you can't answer the question, any marching-cubes (or any O(n) convex hull) algorithm for Java will be accepted! Data: x, y, z arrays as source and a triangle-strip array as target (iso-surface mesh points).
Thanks.
Here are some samples from the stream (at a much lower resolution):
1)Collapsing of a cube-shaped rock-group by gravitation:
2)Exclusion force starts to show itself:
3)Exclusion force + gravitation makes the group form a more smooth shape:
4)Group forms a sphere(as expected):
5)Then, a big stellar body approaches:
6)About to touch:
7)The moment of impact:
With the help of the Barnes-Hut algorithm and a truncated potential, particle numbers will be 10x (maybe 100x) higher.
Rather than the marching-cubes algorithm, a ghost cloth which wraps the n-body could give a low-resolution hull (easier than BH, but needing more computation).
The ghost cloth will be affected by the n-body (gravity + exclusion), but the n-body will not be affected by the cloth wrapping it. The n-body won't be rendered, but the cloth mesh will be rendered with a lower triangle count.
If MC or the above works, this will let the program render a wrapping cloth for ~200x more particles.
So sorry....
You can batch all Geometries in a scene (or a subnode) that remains static.
Batching means that all Geometries with the same Material are combined into one mesh. This optimization only has an effect if you use few (roughly up to 32) Materials in total. The trade-off is that batching takes extra time when the game is initialized.
The change in triangle count is therefore because they have all been assembled into one mesh. The only suggestion, if this is necessary, is to try to get the mesh and alter points on it, but at that point I don't think it makes sense.
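For reference, a rough jME3 sketch of the batching described above, placed inside a SimpleApplication's simpleInitApp(); the sphere count and translations are arbitrary. As far as I recall, a BatchNode keeps its children addressable (unlike GeometryBatchFactory.optimize(), which flattens everything), but the general trade-off is the same.

    // Inside simpleInitApp(); imports: com.jme3.scene.BatchNode,
    // com.jme3.scene.Geometry, com.jme3.scene.shape.Sphere, com.jme3.material.Material.
    BatchNode batchNode = new BatchNode("particles");
    Material mat = new Material(assetManager, "Common/MatDefs/Misc/Unshaded.j3md");

    for (int i = 0; i < 1000; i++) {
        Geometry g = new Geometry("sphere" + i, new Sphere(16, 16, 0.5f));
        g.setMaterial(mat);              // sharing one Material lets them be batched
        g.setLocalTranslation(i % 10, (i / 10) % 10, i / 100);
        batchNode.attachChild(g);
    }
    batchNode.batch();                   // merge into one mesh / far fewer draw calls
    rootNode.attachChild(batchNode);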
Perhaps try a different optimization method.
Good luck! I haven't used jMonkey in a while, but I'm glad to see others do, and to see its continued growth!
EDIT
By the way, a way to minimize the math might be to use half a sphere of cubes; an impact on the Earth likely wouldn't affect the other side (unless the sphere isn't the Earth but already a small sample of the Earth taken as a sphere)...
Perhaps try a 2D shape as the impact surface. Though I know this won't be your best choice, it might give you an idea of how much of an effect the number of shapes has. If it does matter, then one avenue might be to consider how to remove some of the particles; if it doesn't, you need not worry. I am almost sure it will.
Finally:
Perhaps don't render in real time? Take a minute to draw the frames to a buffer, then play them back; by the time you're playing, you will have another 40 or so frames ready, and maybe approximately 30 seconds' worth is all you will need.
There is a pretty solid set of documentation within the JMonkeyEngine wiki which talks quite a bit about how to utilize the transformations you are referring to, which can be found here: Advanced Spatial Concepts.
In addition, there is quite a bit of information regarding the meshes and their rendering which you can view here: Polygon Meshes.

Where can I go to learn and understand cloth physics?

I found this (you need Java to play with it) and have since been fascinated by cloth physics. I don't understand the logic behind the code at all, though... Is there any essential reading, or are there resources for beginners?
Cloth physics is really just spring physics, where each "point" of the cloth is connected to its immediate neighbors (usually in a square grid) by a spring.
Pulling on a point then stresses the springs surrounding that point, which stretch temporarily. As they retract, they accelerate the neighboring points, which then "pull" on their surrounding springs.
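A minimal sketch of that structure in Java; the class and field names are just for illustration. Each frame you would apply gravity and a Verlet step to every non-pinned point, then call relax() on every spring a handful of times.

    class Point {
        double x, y, oldX, oldY;   // current and previous position (for Verlet)
        boolean pinned;            // pinned points (e.g. the top row) never move
    }

    class Spring {
        Point a, b;
        double restLength;

        // Pull the two end points back toward the spring's rest length.
        void relax() {
            double dx = b.x - a.x, dy = b.y - a.y;
            double dist = Math.sqrt(dx * dx + dy * dy);
            if (dist < 1e-9) return;
            double diff = (dist - restLength) / dist;   // >0 stretched, <0 compressed
            if (!a.pinned) { a.x += dx * 0.5 * diff; a.y += dy * 0.5 * diff; }
            if (!b.pinned) { b.x -= dx * 0.5 * diff; b.y -= dy * 0.5 * diff; }
        }
    }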
Here's another demo (demonstrating their spring library).
Look to this paper for some details.
It depends on how faithfully you want or need to represent the physics. All models represent a choice of features to include and omit.
Doing it properly means knowing a lot of physics fundamentals: continuum mechanics for large displacements and strains and a good material model for fabric. I'd recommend Malvern or Fung for the former and a literature search for the latter.
paper on mass spring system
There's something called a mass-spring system; I suggest you Google it and work it out. It's basically based on beams and points. You have rectangles whose corners are points and whose connecting lines are beams. The points can be moved and the beams stretch, but a beam can only stretch a certain amount; if it stretches too much it will break, allowing the points to disconnect.
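A tiny sketch of that breaking rule; the names and the 1.5 threshold are made up, and Point is just an x/y mass point like in the spring sketch a little further up.

    class Beam {
        Point a, b;             // the two corner points the beam connects
        double restLength;
        boolean broken;

        void update() {
            if (broken) return;
            double dx = b.x - a.x, dy = b.y - a.y;
            double length = Math.sqrt(dx * dx + dy * dy);
            if (length > restLength * 1.5) {   // stretched by more than, say, 50% ...
                broken = true;                 // ... the beam breaks, points disconnect
            }
            // otherwise the usual spring relaxation would pull the points back
        }
    }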

Where do I start to write/use a 3D physics simulation engine?

I need to write a very simple 3D physics simulator in Java: cubes and spheres bumping into each other, not much more. I've never done anything like that before; where should I start? Is there any documentation on how it is done? Are there any libraries I could reuse?
Physics for Game Programmers By Grant Palmer (not Java)
Phys2D (Java code)
Assuming you want to get started on how to do it yourself, the best way to start is with pen and paper. Start by defining the focal points of your app (the entities such as spheres and cubes, the rules such as gravity and collision, the hierarchy of objects, and so on).
If you know how to do this, and want a primer on the technology side, Swing is a good option to make UIs in Java.
Also take a look here: http://www.myphysicslab.com/
NeHe's lesson 39 is a good starting point, it's in C++ but the theory is pretty easy to understand.
A nice java physics library is jmephysics (http://www.jmonkeyengine.com/jmeforum/index.php?topic=6459); it is quite easy to use and sits on top of ODE (http://www.ode.org/) and jmonkeyengine (http://www.jmonkeyengine.com) which gives you a scenegraph (http://en.wikipedia.org/wiki/Scene_graph), again something that you'll need for anything beyond a very simple 3d application.
I haven't used it for some time though, and see that they haven't released since late 2007 so not sure how active the community is now.
How about first defining a class for a physical object? It has position, velocity, and mass, and maybe subclasses with other features like shape, elasticity, etc.
Then create a universe (class) in which to place these physical objects. Sounds like fun :)
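For instance, a bare-bones starting point might look like this (all names are just a suggestion):

    class Vector3 {
        double x, y, z;
    }

    class PhysicalObject {
        Vector3 position = new Vector3();
        Vector3 velocity = new Vector3();
        double mass = 1.0;

        // Plain Euler integration under gravity; subclasses (Sphere, Cube, ...)
        // would add shape data, elasticity and collision tests.
        void update(double dt) {
            velocity.y -= 9.81 * dt;
            position.x += velocity.x * dt;
            position.y += velocity.y * dt;
            position.z += velocity.z * dt;
        }
    }

The universe class would then just hold a list of PhysicalObjects and call update(dt) on each of them every frame.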
If all you need to simulate is spheres/circles and cubes, then all you need is a bit of vector math.
E.g. to simulate a simple pool game, each ball (sphere) would have a position, a 3D linear velocity, and a 3D linear acceleration vector. Your simulation would involve many little frames which continually update each and every ball. If two or more balls collide, you combine their velocity vectors and calculate the new velocities for all the balls involved. If a ball hits a wall, for instance, all that is required is to flip the sign of the velocity component perpendicular to the wall to have it bounce back...
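For example, one frame of a 2D pool-table version of that could look like the sketch below; Ball, the table bounds, and the collision comment are only illustrative, not a full elastic-collision solver.

    import java.util.List;

    class Ball {
        double x, y, vx, vy, radius;
    }

    class PoolSimulation {
        // One simulation frame: move every ball, then bounce it off the table walls.
        void step(List<Ball> balls, double dt, double tableWidth, double tableHeight) {
            for (Ball b : balls) {
                b.x += b.vx * dt;
                b.y += b.vy * dt;
                // Hitting a wall: flip the velocity component perpendicular to it.
                if (b.x < b.radius || b.x > tableWidth - b.radius) b.vx = -b.vx;
                if (b.y < b.radius || b.y > tableHeight - b.radius) b.vy = -b.vy;
            }
            // Ball-ball collisions would go here: when two balls overlap, exchange
            // the velocity components along the line connecting their centres.
        }
    }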
If you want to do this from scratch, meaning coding your own physics engine, you'll have to know the ins and outs of the math behind it. If you have a fairly good math background you'll have a head start; otherwise, a steep learning curve is ahead.
You can start on this community forum to gather information on how things are done:
gamedev.net
Of course, you could use an open-source engine like Ogre if you don't want to code your own.
Check out Bullet Physics. bulletphysics.com is the forum, or check out the project on SourceForge.
