explanation of a wave - java

I just started experimenting with Robocode and read about waves: http://robowiki.net/wiki/Wave
What I don't understand is why circles are used here.
I mean, when I shoot a bullet, I shoot it in only one direction, not in every direction as a circle seems to imply.
Can anyone explain that concept to me in other words?
I'm just stuck right now.
thanks,
Julian

The above answers hit the main points of why waves are a useful abstraction: they are an efficient way to collect, after the fact, which firing angles would have hit the target, expressed relative to the angle of firing directly at the enemy.
Another use of waves is in bullet dodging movements. When you see the enemy fire a bullet (by monitoring its energy), you know the bullet's origin and speed, but not its exact location, as you can't see bullets. In this case, the wave represents what you know about the bullet: all its possible locations. If you get hit, you can similarly deduce the relative firing angle the enemy used. Later, you can use that data to evaluate the danger of different points on each wave and decide the safest place to intersect the wave (aka "Wave Surfing").

It looks like the wave approach is meant as an optimization of a naive implementation.
The basic concept would then be to determine the point in time when the projectile passes the target. This can simply be done by comparing the distance the 'wave' has travelled from its origin ("wave_velocity * (time_now - time_fired)") to the distance from the target to the wave's origin.
Once the two distances become equal (or the wave passes the target), the bearing from the wave's origin to the target's current location can be calculated and compared to the bearing of the projectile. If these two bearings are close enough to each other, the target is considered 'hit'; otherwise the target was missed and the projectile can be disregarded for further calculations. (Assuming the target cannot move faster than the projectile.)
The optimization in this is that for every time step only a couple of distances have to be calculated and compared to determine whether the actual 'hit check' needs to be performed. This way the projectiles need not be traced exactly in two (or three) dimensions but only in a single one (distance), which may save a significant amount of computation.
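As a rough illustration of that single-distance check, here is a minimal Java sketch of such a wave. The class and field names are my own and not taken from the Robowiki code; it only shows the bookkeeping described above.

```java
// Minimal sketch of a wave as described above; names are illustrative only.
public class Wave {
    final double originX, originY;   // where the bullet was fired from
    final double bulletVelocity;     // speed of the bullet, e.g. 20 - 3 * firePower
    final double firingAngle;        // absolute angle the bullet was actually fired at
    final long   timeFired;          // game tick at which it was fired

    public Wave(double originX, double originY, double bulletVelocity,
                double firingAngle, long timeFired) {
        this.originX = originX;
        this.originY = originY;
        this.bulletVelocity = bulletVelocity;
        this.firingAngle = firingAngle;
        this.timeFired = timeFired;
    }

    /** Has the expanding circle reached the target yet? */
    public boolean hasPassed(double targetX, double targetY, long timeNow) {
        double traveled = bulletVelocity * (timeNow - timeFired);
        double distance = Math.hypot(targetX - originX, targetY - originY);
        return traveled >= distance;
    }

    /** Once the wave has passed, compare bearings to decide whether the shot would have hit. */
    public boolean wouldHaveHit(double targetX, double targetY, double toleranceRadians) {
        // Robocode-style absolute bearing: 0 = north, measured clockwise.
        double bearingToTarget = Math.atan2(targetX - originX, targetY - originY);
        double diff = Math.abs(normalizeRelativeAngle(bearingToTarget - firingAngle));
        return diff <= toleranceRadians;
    }

    private static double normalizeRelativeAngle(double angle) {
        while (angle >  Math.PI) angle -= 2 * Math.PI;
        while (angle < -Math.PI) angle += 2 * Math.PI;
        return angle;
    }
}
```

On every tick you would call hasPassed(...) for each live wave; once it returns true, you call wouldHaveHit(...) once and then drop the wave.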

Related

Rectangle Avoidance in Boids

I am making an adaptation of the classic Boids simulation from the 80s in Java. It works well enough, but I am trying to add a new rule to the behavior that would force the agents to avoid rectangles (walls) and I am not sure how to go about this.
I have seen this thread: https://gamedev.stackexchange.com/questions/45381/wall-avoidance-steering
But I am confused by the syntax used (like partsList[j] -> normal) in the final code presented and how to obtain the distance between the agent and the rectangle, as well as how to actually drive the agents away. The formula makes sense though. Could someone please explain it to me? Thank you very much!
P.S. I have been following this pseudocode and I also used this Java source code as a reference.
Edit: Okay, I see why I was confused with the syntax, but I am still in the dark when it comes to writing the wall avoidance rule.
Ah, I remember the 80s, and the boids simulation…
Generally in boids-like steering behaviors the idea is to apply a behavioral “steering force” against the velocity (momentum) of the agent. So implementing any given steering behavior comes down to finding some geometric construction that generates a vector pointing in the direction you want to turn. Ideally these will be tangential steering forces (perpendicular to the current velocity) so that steering is independent of speed control.
In the case of avoiding a wall—and a rectangle can be thought of as four walls—the general idea is to take a vector pointing away from (normal to) the wall. Using projection (dot product) you can separate out the components of that force that are parallel to and perpendicular to the velocity vector. The component of the wall normal that is perpendicular to the agent’s velocity is a steering force that will turn the agent away from the wall.
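As a minimal sketch of that projection step in Java (a tiny hand-rolled vector class; none of these names come from any particular boids library):

```java
// Split the wall normal into components parallel and perpendicular to the velocity,
// and keep only the perpendicular part as the steering force. Names are illustrative.
public final class Vec2 {
    public final double x, y;
    public Vec2(double x, double y) { this.x = x; this.y = y; }
    public Vec2 sub(Vec2 o)        { return new Vec2(x - o.x, y - o.y); }
    public Vec2 scale(double s)    { return new Vec2(x * s, y * s); }
    public double dot(Vec2 o)      { return x * o.x + y * o.y; }
    public double length()         { return Math.sqrt(x * x + y * y); }
    public Vec2 normalized()       { double l = length(); return l == 0 ? this : scale(1 / l); }

    /** Component of the wall normal perpendicular to the agent's velocity. */
    public static Vec2 lateralSteering(Vec2 wallNormal, Vec2 velocity) {
        Vec2 forward = velocity.normalized();
        Vec2 parallel = forward.scale(wallNormal.dot(forward)); // projection onto the velocity
        return wallNormal.sub(parallel);                        // what's left is the turning force
    }
}
```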
The other aspect is knowing when to use this wall avoidance behavior. A useful approach is to choose a time horizon, say 2 seconds, and decide if the agent would hit the wall within that time. Using current position, velocity, and that time value, you can do a simple linear prediction of where the agent will be in 2 seconds. If it crosses the wall during that interval then it ought to be using its wall avoidance behavior.
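And a sketch of the time-horizon test, assuming an axis-aligned rectangle and the Vec2 helper from the previous sketch; the 2-second horizon and the 10 samples are arbitrary example values:

```java
// Predict where the agent will be over the next `horizonSeconds` and check whether the
// straight-line path from its current position crosses the rectangle (the wall).
public final class WallAvoidanceCheck {
    public static boolean willHitWall(Vec2 position, Vec2 velocity, double horizonSeconds,
                                      double rectMinX, double rectMinY,
                                      double rectMaxX, double rectMaxY) {
        // Sample the predicted path at a few points; crude but easy to reason about.
        int samples = 10;
        for (int i = 1; i <= samples; i++) {
            double t = horizonSeconds * i / samples;
            double px = position.x + velocity.x * t;
            double py = position.y + velocity.y * t;
            if (px >= rectMinX && px <= rectMaxX && py >= rectMinY && py <= rectMaxY) {
                return true; // predicted position falls inside the rectangle
            }
        }
        return false;
    }
}
```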
For more information, look up “containment” in the GDC '99 paper on steering behaviors, and/or look at these:
http://natureofcode.com/book/chapter-6-autonomous-agents/
https://gamedevelopment.tutsplus.com/series/understanding-steering-behaviors--gamedev-12732

Step detection in real-time 1D data

For a small project we're trying to implement an autopilot for a slot car. A gyro sensor is attached to the car and delivers the Z-value (i.e. the amount of centrifugal force acting on the car/sensor) 20 times per second. One crucial part of this is detecting whether the car is in a curve or on a straight section, and exactly when it entered and left that section. Only then can we make a reliable prediction of what will happen next.
As for now, we're working with a sliding window to smooth the data and then have hardcoded limits (-400 for a left curve and +400 for a right curve) to detect what kind of sector (left, right, straight) we're in.
Obviously this takes too long: because of the smoothing and the hardcoded limits, it takes several readings before the program detects a direction change.
Here's an example of two rounds on a simple track, starting at the checkered area:
A perfect algorithm would detect the sectors S R S R S L S R S R S R S for one round, with a delay of only a couple of data points.
We thought about using the first derivative of the gyro values, but in the sample graph right after the first left curve, the following right curve (between 22:36:40 and 22:36:42) shows signs of swerving. Here the first derivative would be close to 0 and indicate a straight part...
Also, we would again need to set a hardcoded threshold, and with the noise in the data, a small bump in the track could produce enough noise for the derivative to exceed that threshold.
Now we're not sure about what would be the easiest/fastest/most reliable way to handle this sort of detection. Would using a derivative be a good idea? Is there a better way?
Any input would be greatly appreciated :)
The existing software is written in Java.
In such problems, you have to trade robustness for immediacy. If you don't know what happens in the future, you can only make assumptions. And these assumptions may hold or may not.
From the looks of your data, there shouldn't be any smoothing necessary. If you define a reasonable threshold, the curves should be recognized quite reliably. If, however, this is not the case, here are some things you could try:
You already mentioned smoothing. The crucial point is how you smooth. An asymmetric smoothing kernel is probably desirable (a half triangle filter can be updated in constant time). You can directly weigh robustness and immediacy by modifying the kernel width.
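A sketch of such a causal half-triangle (ramp) filter in Java, with the running sums updated in constant time per sample; the class name and window size are my own choices:

```java
// Causal "half triangle" smoother: the newest sample gets weight n, the oldest weight 1.
// Both running sums are updated in O(1) per new sample.
public class HalfTriangleFilter {
    private final double[] window;   // ring buffer of the last n samples (zeros until filled)
    private final int n;
    private int head = 0;
    private double sum = 0;          // S = x1 + ... + xn
    private double weightedSum = 0;  // W = 1*x1 + 2*x2 + ... + n*xn, where xn is the newest

    public HalfTriangleFilter(int n) {
        this.n = n;
        this.window = new double[n];
    }

    /** Feed one raw gyro value, get the smoothed value back. Output is biased toward 0 until the buffer fills. */
    public double update(double x) {
        double oldest = window[head];
        window[head] = x;
        head = (head + 1) % n;

        // Dropping the weight-1 oldest sample and shifting all other weights down by one gives
        // W' = W - S + n * x, and then S' = S - oldest + x.
        weightedSum = weightedSum - sum + n * x;
        sum = sum - oldest + x;

        double weightTotal = n * (n + 1) / 2.0;  // 1 + 2 + ... + n
        return weightedSum / weightTotal;
    }
}
```

Because the newest sample carries the largest weight, this reacts faster than a symmetric moving average of the same width; widening the window trades immediacy for robustness, as described above.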
A simple alternative to filtering is counting. If your data is above the curve threshold, don't call it a curve just yet. Count how many data points are above the threshold in a row. If there are more than n data points above the threshold, then you're most likely in a curve.
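A minimal sketch of that counting idea; the threshold of 400 matches the value mentioned in the question, while the run length is an arbitrary example:

```java
// Classify each new (smoothed) gyro value; only switch sector after `minRun`
// consecutive samples agree, which filters out single-sample noise spikes.
public class SectorDetector {
    public enum Sector { LEFT, STRAIGHT, RIGHT }

    private final double threshold;  // e.g. 400
    private final int minRun;        // e.g. 3 samples in a row
    private Sector current = Sector.STRAIGHT;
    private Sector candidate = Sector.STRAIGHT;
    private int runLength = 0;

    public SectorDetector(double threshold, int minRun) {
        this.threshold = threshold;
        this.minRun = minRun;
    }

    public Sector update(double gyroZ) {
        Sector observed = gyroZ > threshold ? Sector.RIGHT
                        : gyroZ < -threshold ? Sector.LEFT
                        : Sector.STRAIGHT;
        if (observed == candidate) {
            runLength++;
        } else {
            candidate = observed;
            runLength = 1;
        }
        if (runLength >= minRun) {
            current = candidate;
        }
        return current;
    }
}
```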
Using derivatives is potentially problematic. The main reason against derivatives is that a curve is not defined by any derivative at all (at least no derivative of the force). The second problem is that you can only estimate the derivatives numerically, which is quite unstable with lots of noise. So you would have to smooth your data (or find a numerical scheme for your noise model), which again requires some latency.

Modifying AStar algorithm to connect gates in a logic scheme

I've been working on a logic scheme simulator for a while now. The whole thing is pretty much working but there's an issue that I can't seem to solve.
Connections in a logic scheme should be vertical and horizontal. They should avoid logic gates and have as few turns as possible (avoiding the staircase effect). Connections can also intersect but they can never overlap.
I used AStar algorithm to find the shortest and the nicest path between two logic gates. The heuristics for pathfinding is Manhattan distance while the cost between the two nodes is a dimension of a single square on the canvas.
The whole issue arises from the conflict between two conditions, "min turns" and "no overlap". I solved the "min turns" issue by punishing the algorithm with double the normal cost when it tries to make a turn. That causes the algorithm to postpone all turns to the latest possible moment, which leads to the situation in my next picture.
My condition of no overlapping is forbidding the second input from connecting to the second free input of the AND gate (note: simulator is made for Android and the number of inputs is variable, that's why inputs are close to each other). A situation like this is bound to happen sooner or later but I would like to make it as late as possible.
I have tried to:
introduce an "int turnNumber" to count how many turns have been made so far and punish paths that make too many turns (the algorithm takes too long to complete, sometimes very, very long);
calculate the Manhattan distance from the start to the end, divide that number by two, and then remove the "double cost" punishment from nodes whose heuristic is near that middle (for some situations the algorithm fell into an infinite loop).
Are there any ideas on how to redistribute turns in the middle so as many as possible connections can be made between logic gates while still satisfying the "min turn" condition?
In case you'd like to see the code: https://gist.github.com/linaran/c8c493bb54cfca764aeb
Note: The canvas that I'm working with isn't bounded.
EDIT: method for calculating cost and heuristics are -- "calculateCost" and "manhattan"
1. You wrote that you already tried swapping the start/end positions, but my gut tells me that if you compute the path from In to Out, then the turn is near the Output, which is OK in most cases because most gates have a single output.
2. Anyway, I would change your turn cost policy a bit. Let P0, P1 be the path endpoints and Pm = (P0 + P1)/2 their midpoint. You want the turn to be as close to the midpoint as possible, so make the turn cost tc depend on the distance from Pm:
tc(x,y) = const0 + const1 * |(x,y) - Pm|
That should do the trick (but I didn't test it, so handle with prejudice). It could create some weird patterns, so try both Euclidean and Manhattan distances and choose the one with better results (a Java sketch of this is given after the notes at the end of this answer).
3. Another approach is to fill the map from both the start and end points at once and stop when the two fills meet. You need to distinguish between costs originating from the start point and from the end point, so either use negative values for the start and positive values for the end, or allocate separate value ranges for the two (each range should be larger than the map size xs*ys in cells/pixels), or add a flag value to the cost stored in each map cell.
4. You can mix 1. and 2. together: compute Pm, find the nearest free point to Pm and call it P2, then solve the two paths P0->P2 and P1->P2. The turns will then be near P2, which is near Pm, which is what you want.
[notes]
The third approach is the most robust one and should lead to the desired results.
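Here is a minimal Java sketch of the distance-dependent turn cost from point 2, matching the language of the question. The constants are placeholders that would need tuning, and wiring it into your own "calculateCost" method is left to you:

```java
// Turn cost that grows with distance from the midpoint of the two endpoints,
// so A* prefers to place turns near the middle of the connection.
public class TurnCost {
    private final double midX, midY;
    private final double baseCost;   // const0: cost of any turn
    private final double distWeight; // const1: extra cost per unit distance from the midpoint

    public TurnCost(double x0, double y0, double x1, double y1,
                    double baseCost, double distWeight) {
        this.midX = (x0 + x1) / 2.0;
        this.midY = (y0 + y1) / 2.0;
        this.baseCost = baseCost;
        this.distWeight = distWeight;
    }

    /** Cost added when the path turns at cell (x, y). Try both Euclidean and Manhattan distance. */
    public double turnCostAt(double x, double y, boolean manhattan) {
        double d = manhattan
                 ? Math.abs(x - midX) + Math.abs(y - midY)
                 : Math.hypot(x - midX, y - midY);
        return baseCost + distWeight * d;
    }
}
```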

cost / mapping function for determining center of object based on detected features

I wrote an object tracker that tries to detect and follow a moving object in a recorded video. In order to maximize the detection rate, my algorithm uses a bunch of detection & tracking algorithms (cascade, foreground & particle tracker). Each tracking algorithm returns some points of interest that might be part of the object that I'm trying to track. Let's assume (for the sake of simplicity) that my object is a rectangle and that the three tracking algorithms returned the points 1, 2 and 3:
Based on the relation / distance of these three points it is possible to calculate the center of gravity (blue X in above image) of the tracked object. So for each frame I might be able to come up with some good estimate of the center of gravity. However, the object might move from one frame to the next:
In this example I merely rotated the original object. My algorithm will give me three new points of interest: 1',2' and 3'. I could again calculate the center of gravity based on these three new points, but I would throw away important information that I've acquired from the previous frame: based on points 1, 2 and 3 I already do know something about the relationship of these points and thus by combining the information from 1, 2 and 3 and 1',2' and 3' I should be able to come up with a better estimate of the center of gravity.
Furthermore, the next frame might yield a fourth data point:
This is what I would like to do (but I don't know how):
based on the individual points (and their relationship to each other) that are returned from the different tracking algorithms, I want to build up a localization map of the tracked object. Intuitively I feel like I need to come up with A) an identification function that will identify individual points across frames and B) some cost function that will determine how similar tracked points (and the relationships / distances between them) are from frame to frame, but I can't get my head around how to implement this. Alternatively, maybe some kind of map built up from the points would work. But again, I don't know how to approach this.
Any advice (and example code) is highly appreciated!
EDIT1
A simple particle filter would probably work too, but again I don't know how to define the cost function. A particle filter for tracking a certain color is easy to program: for each pixel you calculate the difference between the target color and the pixel color. But how would I do the same for estimating the relationship between tracked points?
EDIT2 Intuitively I feel like Kalman filters could also help with the prediction step. See slides 24 - 32 of this pdf. Or am I mistaken?
What I think you're trying to do is essentially build up a state space of features, which can be applied to a filtering process, such as an Extended Kalman Filter. This is a useful framework when you have multiple observations in every frame, and you're trying to estimate or measure something indicated by these observations.
To determine the similarity of the tracked points, you can perform simple template matching from frame to frame for small regions around the points. One way of doing this is to extract an NxN (say, 7x7) region around point a in frame n and point a' in frame n+1, followed by normalised cross correlation between the extracted regions. This will give you a reasonable measure of how similar the patches are. If the patches are not similar, then you've probably lost track of that point.
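A minimal sketch of that patch comparison in Java; patch extraction and image I/O are left out, and the row-major double[] layout is just an assumption for the example:

```java
// Normalised cross-correlation between two equally sized grayscale patches stored row-major.
// Returns a value in [-1, 1]; values near 1 mean the patches look very similar.
public final class PatchSimilarity {
    public static double normalizedCrossCorrelation(double[] patchA, double[] patchB) {
        if (patchA.length != patchB.length) {
            throw new IllegalArgumentException("patches must have the same size");
        }
        int n = patchA.length;
        double meanA = 0, meanB = 0;
        for (int i = 0; i < n; i++) { meanA += patchA[i]; meanB += patchB[i]; }
        meanA /= n;
        meanB /= n;

        double num = 0, varA = 0, varB = 0;
        for (int i = 0; i < n; i++) {
            double da = patchA[i] - meanA;
            double db = patchB[i] - meanB;
            num  += da * db;
            varA += da * da;
            varB += db * db;
        }
        double denom = Math.sqrt(varA * varB);
        return denom == 0 ? 0 : num / denom;  // flat patches are treated as "no information"
    }
}
```

A low or negative value suggests the point has probably been lost, as described above.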
There is an enormous literature on this and related problems, starting in the 1980s. Try searching for "optical flow" algorithms. The input to such an algorithm is two successive frames of the same scene. The output is a vector field, one vector per pixel in the second image, which gives the direction and speed of movement of the feature at that pixel. This presentation is a pretty nice summary.
A nice thing about optical flow is that many algorithms for it parallelize nicely and map onto your favorite video card GPU, so they can run in real time. Think ESPN overlays.
In my opinion, in order to identify which point is which in each frame, you will have to use higher-dimensional information. For example, if you want to know which point is which between two frames (assuming the extracted points are the same), you will have to build vectors or a simplex and then deduce an organisation between your points (such as angle values).
The main problem is that the number of combinations grows with the number of points. If your camera is fixed, you could use the background as a reference in order to deduce object rotations and translations; that is, build vectors between background interest points and object points in order to clearly identify them.
Hope that helps you move forward.
I would recommend looking into the divided difference filter (DDF), which is similar to the extended Kalman filter (EKF), but does not require an approximate model of the dynamics of your system (which you may not have). Basically the DDF approximates the derivatives used in the EKF using a difference equation. There are plenty of papers online about this, but I do not know whether you have access to them, so I have not linked them here. If you are working from a university or a company that has access to online journals (like IEEE Xplore), then just Google "divided difference filter" and check out some of the papers.

Find location using only distance and bearing?

Triangulation works by checking your angle to three KNOWN targets.
"I know that that's the Lighthouse of Alexandria, it's located here (X,Y) on a map, and it's to my right at 90 degrees." Repeat 2 more times for different targets and angles.
Trilateration works by checking your distance from three KNOWN targets.
"I know that that's the Lighthouse of Alexandria, it's located here (X,Y) on a map, and I'm 100 meters away from it." Repeat 2 more times for different targets and ranges.
But both of those methods rely on knowing WHAT you're looking at.
Say you're in a forest and you can't differentiate between trees, but you know where key trees are. These trees have been hand picked as "landmarks."
You have a robot moving through that forest slowly.
Do you know of any ways to determine location based solely off of angle and range, exploiting geometry between landmarks? Note, you will see other trees as well, so you won't know which trees are key trees. Ignore the fact that a target may be occluded. Our pre-algorithm takes care of that.
1) If this exists, what's it called? I can't find anything.
2) What do you think the odds are of having two identical location 'hits?' I imagine it's fairly rare.
3) If there are two identical location 'hits,' how can I determine my exact location after I move the robot next. (I assume the chances of having 2 occurrences of EXACT angles in a row, after I reposition the robot, would be statistically impossible, barring a forest growing in rows like corn). Would I just calculate the position again and hope for the best? Or would I somehow incorporate my previous position estimate into my next guess?
If this exists, I'd like to read about it, and if not, develop it as a side project. I just don't have time to reinvent the wheel right now, nor the time to implement this from scratch. So if it doesn't exist, I'll have to figure out another way to localize the robot, since that's not the aim of this research; if it does exist, let's hope it's semi-easy.
Great question.
The name of the problem you're investigating is localization; it and mapping are two of the most important and challenging problems in robotics at the moment. Put simply, localization is the problem of "given some sensor observations, how do I know where I am?"
Landmark identification is one of the hidden 'tricks' that underpin so much of the practice of robotics. If it isn't possible to uniquely identify a landmark, you can end up with a high proportion of misinformation, particularly given that real sensors are stochastic (i.e. there will be some uncertainty associated with the result). Your choice of an appropriate localisation method will almost certainly depend on how well you can uniquely identify a landmark, or associate patterns of landmarks with a map.
The simplest method of self-localization in many cases is Monte Carlo localization. One common way to implement this is with a particle filter. The advantage is that particle filters cope well when you don't have great models of motion or sensor capability and you need something robust that can deal with unexpected effects (like moving obstacles or landmark obscuration). A particle represents one possible state of the vehicle. Initially, particles are uniformly distributed; as the vehicle moves, more sensor observations are incorporated, and particle states are updated to move away from unlikely states - in your example, particles would move away from areas where the ranges/bearings don't match what should be visible from the current position estimate. Given sufficient time and observations, particles tend to clump together in areas where there is a high probability of the vehicle being located. Look up the work of Sebastian Thrun, particularly the book "Probabilistic Robotics".
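A very stripped-down Java sketch of that predict/weight/resample cycle with range/bearing observations is given below. The landmark association is a naive nearest-landmark match (reasonable only when landmarks are well separated or the particle cloud has already roughly converged), and every class and parameter name here is my own; it is meant to show the structure, not to be a production localiser.

```java
import java.util.Random;

// Minimal Monte Carlo localisation sketch: particles hold a pose hypothesis, are moved with
// noisy odometry, weighted against range/bearing observations of indistinguishable landmarks,
// and then resampled.
public class MonteCarloLocalizer {
    static class Particle { double x, y, heading, weight; }

    private final Particle[] particles;
    private final double[][] landmarks;  // known landmark positions as {x, y}
    private final Random rng = new Random();

    public MonteCarloLocalizer(int n, double[][] landmarks, double areaSize) {
        this.landmarks = landmarks;
        this.particles = new Particle[n];
        for (int i = 0; i < n; i++) {                 // start uniformly distributed
            Particle p = new Particle();
            p.x = rng.nextDouble() * areaSize;
            p.y = rng.nextDouble() * areaSize;
            p.heading = rng.nextDouble() * 2 * Math.PI;
            p.weight = 1.0 / n;
            particles[i] = p;
        }
    }

    /** Move every particle by the odometry estimate plus noise. */
    public void predict(double forward, double turn, double noise) {
        for (Particle p : particles) {
            p.heading += turn + rng.nextGaussian() * noise;
            p.x += Math.cos(p.heading) * (forward + rng.nextGaussian() * noise);
            p.y += Math.sin(p.heading) * (forward + rng.nextGaussian() * noise);
        }
    }

    /** Weight particles by how well each observed (range, bearing) pair fits some known landmark. */
    public void update(double[][] rangeBearingObs, double sigma) {
        double total = 0;
        for (Particle p : particles) {
            double logW = 0;
            for (double[] obs : rangeBearingObs) {
                // Where would this observation place a landmark if this particle's pose were correct?
                double lx = p.x + obs[0] * Math.cos(p.heading + obs[1]);
                double ly = p.y + obs[0] * Math.sin(p.heading + obs[1]);
                double best = Double.MAX_VALUE;       // naive association: nearest known landmark
                for (double[] lm : landmarks) {
                    best = Math.min(best, Math.hypot(lx - lm[0], ly - lm[1]));
                }
                logW -= (best * best) / (2 * sigma * sigma);  // Gaussian-style penalty
            }
            p.weight = Math.exp(logW);
            total += p.weight;
        }
        if (total == 0) {                             // all weights underflowed; keep them uniform
            for (Particle p : particles) p.weight = 1.0 / particles.length;
        } else {
            for (Particle p : particles) p.weight /= total;
        }
        resample();
    }

    private void resample() {                         // simple weighted resampling with replacement
        Particle[] next = new Particle[particles.length];
        for (int i = 0; i < next.length; i++) {
            double r = rng.nextDouble();
            double acc = 0;
            Particle chosen = particles[particles.length - 1];  // fallback for rounding errors
            for (Particle p : particles) {
                acc += p.weight;
                if (acc >= r) { chosen = p; break; }
            }
            Particle c = new Particle();
            c.x = chosen.x; c.y = chosen.y; c.heading = chosen.heading;
            c.weight = 1.0 / next.length;
            next[i] = c;
        }
        System.arraycopy(next, 0, particles, 0, particles.length);
    }
}
```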
What you're looking for is Monte Carlo localization (also known as a particle filter). Here's a good resource on the subject.
Or nearly anything from the probabilistic robotics crowd, Dellaert, Thrun, Burgard or Fox. If you're feeling ambitious, you could try to go for a full SLAM solution - a bunch of libraries are posted here.
Or if you're really really ambitious, you could implement from first principles using Factor Graphs.
I assume you want to start by turning on the robot inside the forest. I further assume that the robot can calculate the position of every tree using angle and distance.
Then you can identify the landmarks by iterating through the trees and calculating the distances from each tree to all of its neighbours. In MATLAB you can use pdist to get a list of all (unique) pairwise distances.
Then you can iterate through the trees to identify landmarks. For every tree, compare the distances to all of its neighbours against the known distances between landmarks. Whenever you find a candidate landmark, check its possible landmark neighbours for the correct distance signature. Since you say that you should always be able to see five landmarks at any given time, you will be trying to match 20 distances, so I'd say the chance of false positives is not too high. If the candidate landmark and its candidate fellow landmarks do not match the complete relative distance pattern, you go on and check the next tree.
Once you have found all the landmarks, you simply triangulate.
Note that depending on how accurately you can measure angles and distances, you may need to be able to see more landmark trees at any given time. My guess is that you need to space the landmarks densely enough that you can see at least three at a time, provided you have high measurement accuracy.
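Here is a rough Java sketch of that distance-signature comparison (pdist is MATLAB; below the pairwise distances are computed directly, and the tolerance is an example value you would derive from your measurement accuracy):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// For one observed tree, collect its distances to all other observed trees and check
// whether the known distances from one landmark to its fellow landmarks can all be
// matched within a measurement tolerance.
public class LandmarkMatcher {

    /** Sorted distances from point `index` to every other point in `points` ({x, y} pairs). */
    static double[] distanceSignature(double[][] points, int index) {
        List<Double> dists = new ArrayList<>();
        for (int j = 0; j < points.length; j++) {
            if (j == index) continue;
            dists.add(Math.hypot(points[index][0] - points[j][0],
                                 points[index][1] - points[j][1]));
        }
        double[] out = new double[dists.size()];
        for (int i = 0; i < out.length; i++) out[i] = dists.get(i);
        Arrays.sort(out);
        return out;
    }

    /** Does every known landmark distance have a distinct matching observed distance within tolerance? */
    static boolean matches(double[] observed, double[] landmarkDistances, double tolerance) {
        boolean[] used = new boolean[observed.length];
        for (double expected : landmarkDistances) {
            boolean found = false;
            for (int i = 0; i < observed.length; i++) {
                if (!used[i] && Math.abs(observed[i] - expected) <= tolerance) {
                    used[i] = true;
                    found = true;
                    break;
                }
            }
            if (!found) return false;   // this tree cannot be that landmark
        }
        return true;
    }
}
```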
I guess you only need the distances to two landmarks and the order in which you see them (i.e. from left to right you see points A and B).
(1) "Robotic mapping" and "perceptual aliasing".
(2) Two identical hits are inevitable. Since the robot can only distinguish between a finite number X of distinguishable tree configurations, even if the configurations are completely random, there is almost certainly at least one location that looks "the same" as some other location even if you encounter far fewer than X/2 different trees. Those are called "birthday paradox collisions". You may be lucky that the particular location you are at is in fact actually unique, but I wouldn't bet my robot on it.
So you:
(a) have a map of a large area with some, but not all, trees on it.
(b) have a robot somewhere in the actual forest that, without looking at the map, has looked at the nearby trees and generated an internal map of all the trees in a tiny area and its relative position to them.
(c) know that, to the robot, every tree looks the same as every other tree.
You want to find: Where is the robot on the large map?
If only each actual tree had a unique name written on it that the robot could read, and then (some of) those trees and their names were on the map, this would be trivial.
One approach is to attach a (not necessarily unique) "signature" to each tree that describes its position relative to nearby trees.
Then, as you travel along, the robot drives up to a tree and finds a "signature" for that tree, and you find all the trees on the map that "match" that signature.
If only one unique tree on the map matches, then the tree the robot is looking at might be that tree on the map (yay, you know where the robot is) -- put down a weighty but tentative dot on the map at the robot's position relative to the matching tree -- the tree the robot is next to is certainly not any of the other trees on the map.
If several of the trees on the map match -- they all have the same non-unique signature -- then you could put some less-weighty tentative dots on the map at the robot's position relative to each one of them.
Alas, even if you find one or more matches, it is still possible that the tree the robot is looking at is not on the map at all, and the signature of that tree is coincidentally the same as that of one or more trees on the map, and so the robot could be anywhere on the map.
If none of the trees on the map matches, then the tree the robot is looking at is definitely not on the map. (Perhaps later on, once the robot knows exactly where it is, it should start adding these trees to the map?)
As you drive down the path, you push the dots along according to your estimated direction and speed of travel.
Then as you inspect other trees, possibly after driving down the path a little further, you eventually have lots of dots on the map, and hopefully one heavy, highly overlapping cluster at the actual position, while every other dot is hopefully an easily ignored, isolated coincidence.
The simplest signature is a list of distances from a particular tree to nearby trees.
A particular tree on the map is "matched" to a particular tree in the forest when, for each and every nearby tree on the map, there is a corresponding nearby tree in the forest at "the same" distance, as far as you can tell with your known distance and angular errors.
(By "nearby", I mean "close enough that the robot should be able to definitely confirm that the tree is actually there", although it's probably simpler to approximate this with something like "My robot can see all trees out to a range of R, so I'm only going to bother even trying to match trees that are within a circle of R*1/3 from my robot, and my list of distances only include trees that are within a circle of R*2/3 from the particular tree I'm trying to match").
If you know your north-south orientation even very roughly, you can create signatures that are "more unique", i.e., have fewer spurious matches on the map and (hopefully) in the real forest.
A "match" for the tree the robot is next to occurs when, for each nearby tree on the map, there is a corresponding tree in the forest at "the same" distance and direction, as far as you can tell with your known distance and angular errors.
Say you see that tree "Fred" on the map has another tree 10 meters away in the N-to-W quadrant, but the robot is next to a tree that definitely doesn't have any tree at that distance in the N-to-W quadrant, though it does have a tree 10 meters away to the south.
In that case, then (using a more complex signature) you can definitely tell the robot is not next to Fred, even though the simple signature would give a (false) match.
Another approach:
The "digital paper" solves a similar problem ... Can you plant a few trees in a pattern that is specifically designed to be easily recognized?
