I'm implementing a Chebyshev walking mechanism, like this.
I've got a problem: the edges of the mechanism don't move the way they're meant to.
For now I have a GUI with some controls built with Java 8 Swing. It draws the mechanism, but, as I said, the movement is the problem.
Here is my GitHub repository and the class containing the problem method, DFS_movement().
I want this mechanism to move like the real one, with the edge lengths staying constant and so on.
Maybe you need the formulas, i.e. the equations for the position (x, y) of the end that moves (almost) along a straight line, as a function of the rotation angle a that describes the circular motion of the "first" bar? Here the origin of the coordinate system is at the pivot of the first bar, and the rotation angle a is the angle between the first bar and the horizontal x-axis. If that is the case, the equations are:
x = 2*A - 2*A*sqrt( (5 + cos(a))/(5 - 4*cos(a)) )*sin(a)
y = 2*A*sqrt( (5 + cos(a))/(5 - 4*cos(a)) )*(2 - cos(a))
A is the length of the first bar, the one that rotates around its fixed end, which is attached to the origin of the coordinate system. The distance between the origin and the other fixed point of the linkage is 2A.
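If it helps, here is a minimal Java sketch (not taken from the linked repository; the method name is made up) that simply evaluates those two equations for a given first-bar length A and crank angle a in radians:

// Illustrative only: evaluates the formulas above and returns {x, y}.
static double[] chebyshevTracePoint(double A, double a) {
    double k = Math.sqrt((5 + Math.cos(a)) / (5 - 4 * Math.cos(a)));
    double x = 2 * A - 2 * A * k * Math.sin(a);
    double y = 2 * A * k * (2 - Math.cos(a));
    return new double[] { x, y };
}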
My question can be simplified to the following: if a 3D triangle is projected and rendered onto a 2D viewing plane, how can the z value of each rendered pixel be calculated so that it can be stored in a buffer?
I currently have a working Java program that can render 3D triangles to the 2D view as a solid color. The camera can be moved, rotated, etc. with no problem and works exactly as you'd expect. However, if I render two triangles over each other, the one closer to the camera doesn't always obscure the farther one as it should. A z-buffer seems like the best remedy: store the z value of each pixel I render to the screen, and when another pixel is about to be rendered at the same coordinate, compare its z value with the stored one to decide which to keep. The issue I'm now facing is as follows:
How do I determine the z value of each pixel I render? I've thought about it, and there seem to be a few possibilities. One option is to find the equation of the plane (ax + by + cz + d = 0) on which the face lies, then do some sort of interpolation for each pixel of the triangle being rendered (e.g. halfway along x on the 2D rendered triangle maps to halfway along x on the 3D triangle, same for y, then solve for z using the plane's equation), though I'm not certain this would work. The other option I thought of is to iterate through the points of the 3D triangle at some fixed step and render each point individually, using the z of that point (which I'd probably also have to find from the plane's equation).
Again, I'm currently mainly considering interpolation, so the pseudo-code would look like this (given the plane's equation ax + by + cz + d = 0):
xrange = (pixel.x - 2dtriangle.minX)/(2dtriangle.maxX - 2dtriangle.minX)
yrange = (pixel.y - 2dtriangle.minY)/(2dtriangle.maxY - 2dtriangle.minY)
x3d = (3dtriangle.maxX - 3dtriangle.minX) * xrange + 3dtriangle.minX
y3d = (3dtriangle.maxY - 3dtriangle.minY) * yrange + 3dtriangle.minY
z = (-d - a*x3d - b*y3d)/c
Where pixel.x is the x value of the pixel being rendered, 2dtriangle.minX and 2dtriangle.maxX are the minimum and maximum x values of the triangle being rendered (i.e. of its bounding box) after it has been projected onto the 2D view, and its min/max Y variables are the same, but for y. 3dtriangle.minX and 3dtriangle.maxX are the minimum and maximum x values of the 3D triangle before projection onto the 2D view, a, b, c and d are the coefficients of the plane on which the 3D triangle lies, and z is the corresponding z value of the pixel being rendered.
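(A hedged sketch of how a, b, c and d could be obtained from the 3D triangle's vertices via a cross product, in case it matters; the array-based vectors and helper name are only illustrative, after which z is solved exactly like the last line of the pseudo-code:)

// Illustrative sketch: derive ax + by + cz + d = 0 from three 3D vertices
// (each a double[] {x, y, z}) and solve for z at a given (x, y).
static double planeZ(double[] v0, double[] v1, double[] v2, double x, double y) {
    // Two edge vectors of the triangle.
    double e1x = v1[0] - v0[0], e1y = v1[1] - v0[1], e1z = v1[2] - v0[2];
    double e2x = v2[0] - v0[0], e2y = v2[1] - v0[1], e2z = v2[2] - v0[2];
    // The normal e1 x e2 gives the plane coefficients (a, b, c).
    double a = e1y * e2z - e1z * e2y;
    double b = e1z * e2x - e1x * e2z;
    double c = e1x * e2y - e1y * e2x;
    double d = -(a * v0[0] + b * v0[1] + c * v0[2]);
    // Degenerate if c == 0 (plane parallel to the z axis); the caller must handle that.
    return (-d - a * x - b * y) / c;
}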
Will that method work? If there's any ambiguity please let me know in the comments before closing the question! Thank you.
The best solution would be to calculate the depth for each vertex of the triangle. Then we can get the depth of each pixel the same way we get the colors when rendering a triangle with Gouraud shading. Doing that while rendering makes the depth check easy.
If we have a situation like this, we start to draw lines from the top to the bottom: we calculate the slopes from the first vertex to the other two, and we add the corresponding amount of depth every time we move down to the next line... And so on.
You didn't provide your rendering method, so I can't say anything specific to it, but you should take a look at some tutorials on Gouraud shading. With some simple modifications you should be able to use the same approach for depth values.
Well, hopefully this helps!
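To make the idea concrete, here is a hedged sketch that uses barycentric weights instead of explicit scanline slopes; the principle is the same (per-pixel depth is interpolated from the three vertex depths). It assumes screen-space vertices given as double[] {x, y, z}, buffers indexed as [y][x], a z-buffer initialized to Double.POSITIVE_INFINITY, and the convention that smaller z means closer to the camera:

// Illustrative sketch: interpolate per-pixel depth from the vertex depths and
// keep the pixel only if it is closer than what the z-buffer already holds.
static void rasterize(double[] p0, double[] p1, double[] p2,
                      double[][] zBuffer, int[][] frameBuffer, int color) {
    int minX = (int) Math.floor(Math.min(p0[0], Math.min(p1[0], p2[0])));
    int maxX = (int) Math.ceil (Math.max(p0[0], Math.max(p1[0], p2[0])));
    int minY = (int) Math.floor(Math.min(p0[1], Math.min(p1[1], p2[1])));
    int maxY = (int) Math.ceil (Math.max(p0[1], Math.max(p1[1], p2[1])));
    double area = edge(p0, p1, p2);          // signed doubled area of the triangle
    if (area == 0) return;                   // degenerate triangle, nothing to draw
    for (int y = Math.max(minY, 0); y <= Math.min(maxY, zBuffer.length - 1); y++) {
        for (int x = Math.max(minX, 0); x <= Math.min(maxX, zBuffer[0].length - 1); x++) {
            double[] p = { x + 0.5, y + 0.5 };
            double w0 = edge(p1, p2, p) / area;
            double w1 = edge(p2, p0, p) / area;
            double w2 = edge(p0, p1, p) / area;
            if (w0 < 0 || w1 < 0 || w2 < 0) continue;         // pixel outside the triangle
            double z = w0 * p0[2] + w1 * p1[2] + w2 * p2[2];  // interpolated depth
            if (z < zBuffer[y][x]) {                          // closer than what is stored
                zBuffer[y][x] = z;
                frameBuffer[y][x] = color;
            }
        }
    }
}

// Twice the signed area of triangle (a, b, c); also used as a half-plane test.
static double edge(double[] a, double[] b, double[] c) {
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]);
}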
I have been working with LWJGL and also j3d for the geometry part. I am still working on the collision. What I have so far for the collision works decently, but there are 2 problems. To sum up my current way of colliding: it tests whether the previous coordinate and the current coordinate pass through a triangle (which is what things are rendered as), and then it finds the point on the triangle that was just intersected that is closest to your current coordinate and moves you there. It also makes your y coordinate go up by .001.
This works decently, but going up by .001 is bad, because if you walk into a triangle that stands upright at a 90° angle you can move left to right along it, but you can't back out of it; it's almost as if you are stuck in it.
Here is a drawing of how it works, on imgur:
http://i.imgur.com/1gMhRut.png
From here I want to add, say, .001 to the length between the current coordinate and the closest point (I already know both points) and get the new current point from that.
By the way, prev is where the person was before they moved to the cur point; the code then tests whether those 2 points intersect a triangle, and if they do, it gets the point on the triangle closest to prev, which is labeled closest in the picture. I can already calculate all of those points.
If I understand you correctly, you want to add .001 to move away from the triangle. If that is the case, then you need a vector of length 0.001 perpendicular to the triangle. For a triangle this is usually called the "normal". If you already have a normal for the triangle, multiply it by .001 and add the result. If you don't have a normal yet, you can calculate it using the cross product (you can Google the details of what a cross product is), something like this, from the vertices of the triangle:
Vector3 perpendicular = crossProduct(vertex3.pos - vertex1.pos, vertex2.pos - vertex1.pos); // perpendicular to the triangle
Vector3 normal = perpendicular / length(perpendicular);  // normalize to unit length
Vector3 offset = normal * 0.001f;                        // push-away vector of length 0.001
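Since the snippet above is pseudo-code (Java has no operator overloading), a plain-Java version of the same idea might look like this; the float-array representation and the method name are just for illustration:

// Illustrative sketch: unit normal of a triangle, scaled to the requested offset
// length (e.g. 0.001f). Degenerate triangles (zero-length normal) are not handled.
static float[] offsetAlongNormal(float[] v1, float[] v2, float[] v3, float amount) {
    float[] e1 = { v3[0] - v1[0], v3[1] - v1[1], v3[2] - v1[2] };
    float[] e2 = { v2[0] - v1[0], v2[1] - v1[1], v2[2] - v1[2] };
    // Cross product e1 x e2 is perpendicular to the triangle.
    float[] n = {
        e1[1] * e2[2] - e1[2] * e2[1],
        e1[2] * e2[0] - e1[0] * e2[2],
        e1[0] * e2[1] - e1[1] * e2[0]
    };
    float len = (float) Math.sqrt(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
    return new float[] { n[0] / len * amount, n[1] / len * amount, n[2] / len * amount };
}

Adding the returned offset to the closest point should push the position slightly off the triangle's surface instead of straight up.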
I am writing code where I have a world filled with various obstacles (of rectangular shape). My robot, which is a circle, starts at a random place inside the world. I assume it has a range sensor on its head, and I want to get the distance to the nearest obstacle or boundary wall that is in its straight line of view.
I am using a random orientation between 0 and 360 degrees to orient the robot, and I use the sin and cos of that orientation to move the robot along it. But how can I get the distance to an obstacle or the boundary wall along this orientation? It should tell me the distance of the first object the robot encounters in its line of sight, for any angle from 0 to 360.
Could you give me a hint about the logic for tackling this issue?
Thanks
Assuming you know the angle, the robot's position and the position of all the obstacles, you could have a function like this:
if the angle is less than 90 or greater than 270, you increment the x coordinate by 1; otherwise you decrement it by 1
make a for loop from the current x coordinate to the edge of the world (I don't know how you have the world implemented), scanning for an obstacle at position (x, x*tan(angle)) (with x measured relative to the robot's position), incrementing or decrementing in accordance with the step above
for the first obstacle you run across, return sqrt(x^2 + (x*tan(angle))^2); that's just the Pythagorean theorem
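A hedged sketch of this march, written as a variant that steps along the heading with cos/sin of the angle (which the question already uses for movement), so it also works at 90 and 270 degrees; isBlocked() is a placeholder for however your obstacles are stored, not a real API:

// Illustrative only: march from the robot along its heading in small steps and
// return the distance at which the first obstacle or the boundary wall is hit.
static double rangeAlongHeading(double robotX, double robotY, double angleDeg,
                                double worldWidth, double worldHeight, double step) {
    double dx = Math.cos(Math.toRadians(angleDeg));
    double dy = Math.sin(Math.toRadians(angleDeg));
    for (double t = 0; ; t += step) {
        double x = robotX + t * dx;
        double y = robotY + t * dy;
        if (x < 0 || y < 0 || x > worldWidth || y > worldHeight) return t; // boundary wall
        if (isBlocked(x, y)) return t;                                     // first obstacle
    }
}

// Placeholder: replace with a lookup against your rectangular obstacles.
static boolean isBlocked(double x, double y) {
    return false;
}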
Here's what I think you could do.
In real game development a lot of optimization tricks are used, often trading exact answers for approximations to get better performance.
Also note that there are a lot of libraries out there for game development that could probably get you what you want much more simply.
But anyway, here's what I'd do:
identify the objects you'd pass through if you go straight forward;
identify the nearest one in the list of objects you just made.
1:
A)
make a formula for your position/angle in the form y = mx + b
[y = tan(angle)*x + (positionY - tan(angle)*positionX)]
B)
for each object, divide the object into multiple line segments (2 points each).
check whether each segment crosses the line given by the formula in point A
(if one endpoint's y is smaller and the other's is greater than the line's y at the same x, the segment crosses the line).
do the same thing for your world boundaries.
2: This part is trickier (to program).
Next, you have to find the positions where your robot's orientation line intersects all the lines you previously identified.
For each line, you must again turn it into the form y = mx + b.
Let's say we have:
y=3x+5 and
y=5x+1
3x+5 = 5x+1
3x-5x = 1-5
-2x = -4
x = 2
Then you replace x with 2 in either formula and you'll get the intersection point:
y = 3(2)+5 = 11
y = 5(2)+1 = 11
So these two lines intersect on point (2, 11)
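In code, that little bit of algebra could look like this (a hedged sketch; it returns null when the slopes are equal, i.e. for parallel lines):

// Illustrative sketch: intersection of y = m1*x + b1 and y = m2*x + b2.
static double[] intersectLines(double m1, double b1, double m2, double b2) {
    if (m1 == m2) return null;           // parallel: no single intersection point
    double x = (b2 - b1) / (m1 - m2);    // m1*x + b1 = m2*x + b2  =>  x = (b2 - b1)/(m1 - m2)
    double y = m1 * x + b1;
    return new double[] { x, y };
}

With the example above, intersectLines(3, 5, 5, 1) returns (2, 11).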
Next you have to check whether that point is in the domain of your robot's path formula.
Since your robot is looking in a single direction, and the formula we made in point 1.A extends infinitely in both directions, you must make sure the intersection you found is not behind your robot (unless it moves backwards...).
I guess you can keep it simple: look at the sign of cos(angle), then look at the position of the intersection point; if, for example, it is to the left of your robot and cos(angle) is negative, that's fine.
Finally, once you have found ALL the intersection points, you can find the nearest one using the Pythagorean theorem: sqrt((x1-x2)^2 + (y1-y2)^2).
Also, note that this won't work for angles of 90 and 270 degrees, since tan(90) doesn't exist.
In that case the robot's path is a vertical line, so just check whether the two endpoints of a segment lie on either side of the robot's x coordinate and the intersection is in the right direction; if so, you pass through it.
Again, there's a lot of room for optimization.
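Putting the pieces above together for a single wall segment, a hedged sketch could look like the following. Vertical viewing angles of 90/270 degrees are left out, as discussed; a negative return value means "no hit", and the caller would run this over every obstacle segment plus the world boundary and keep the smallest non-negative distance:

// Illustrative sketch: intersect the robot's sight line with one segment
// (x1, y1)-(x2, y2), reject hits behind the robot, and return the distance.
static double distanceToSegment(double rx, double ry, double angleDeg,
                                double x1, double y1, double x2, double y2) {
    double m = Math.tan(Math.toRadians(angleDeg));
    double b = ry - m * rx;                          // robot sight line: y = m*x + b
    if (x1 == x2) {                                  // vertical segment: x is known directly
        double ix = x1, iy = m * ix + b;
        if (iy < Math.min(y1, y2) || iy > Math.max(y1, y2)) return -1;
        return distanceIfInFront(rx, ry, angleDeg, ix, iy);
    }
    double ms = (y2 - y1) / (x2 - x1);               // segment line: y = ms*x + bs
    double bs = y1 - ms * x1;
    if (m == ms) return -1;                          // parallel lines never intersect
    double ix = (bs - b) / (m - ms);
    double iy = m * ix + b;
    if (ix < Math.min(x1, x2) || ix > Math.max(x1, x2)) return -1; // outside the segment
    return distanceIfInFront(rx, ry, angleDeg, ix, iy);
}

// Reject intersections behind the robot by checking the hit lies in the same
// direction as (cos(angle), sin(angle)), then return the distance.
static double distanceIfInFront(double rx, double ry, double angleDeg,
                                double ix, double iy) {
    double dx = ix - rx, dy = iy - ry;
    if (dx * Math.cos(Math.toRadians(angleDeg)) + dy * Math.sin(Math.toRadians(angleDeg)) < 0)
        return -1;                                   // behind the robot
    return Math.sqrt(dx * dx + dy * dy);             // Pythagorean theorem
}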
I'm trying to get the coordinates of the arcs (denoted in blue) of any 2 (and eventually $n$) circles that bound the 'central' portion (denoted in red), so that I can draw them visually (in Android, with path.drawArc()).
I have found this, but unfortunately I'm not at all mathematically minded!
I have coded something to find the intersection points... If that helps?
I see no path.drawArc(), only Canvas.drawArc, Path.addArc and Path.arcTo. All of these need a RectF oval argument which I would guess describes the size and position of the circle. So if you want to draw the right boundary of the lens-shaped area, this belongs to the boundary of the left circle, so you'd set oval to the bounding box of the left circle. Then you need two angles, startAngle and sweepAngle. You can get them from Math.atan2(y, x), where (x, y) is the vector which points from the center of the circle to one of the points of intersection. So knowing the points of intersection will be very useful. You'd use one to compute the start angle and the other the end angle of the first arc. The sweep angle is simply the difference between end and start. For the second arc, reverse the roles of start and end and you'll obtain an essentially closed path, although you might have to issue an explicit close call to make that closing explicit.
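As a hedged, untested sketch of that (the method and variable names are mine, not from your code), computing the start and sweep angles of one circle's arc from its centre, its radius and the two known intersection points could look like:

import android.graphics.Path;
import android.graphics.RectF;

// Illustrative sketch: append the arc of one circle between the two intersection
// points to a Path. Screen coordinates have y pointing down, which matches the
// clockwise degrees that the arc methods expect.
static void addLensArc(Path path, float cx, float cy, float r,
                       float p1x, float p1y, float p2x, float p2y) {
    RectF oval = new RectF(cx - r, cy - r, cx + r, cy + r); // bounding box of the circle
    float start = (float) Math.toDegrees(Math.atan2(p1y - cy, p1x - cx));
    float end   = (float) Math.toDegrees(Math.atan2(p2y - cy, p2x - cx));
    float sweep = end - start;
    // Wrap the sweep into a positive range; you may need to flip it depending on
    // which side of the circle the lens-shaped region lies.
    if (sweep < 0) sweep += 360;
    path.arcTo(oval, start, sweep);  // appends the arc, connecting from the previous point
}

You would call this once per circle (swapping p1 and p2 for the second one), then call path.close().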
An alternative approach would be using Path.op(circle1, circle2, Path.Op.INTERSECT). circle1 and circle2 could be formed by Path.addCircle. This approach would delegate the whole intersection math to the Framework, but it does require a recent API (namely API level 19), so it will not work on older devices.
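A hedged sketch of that alternative (untested; the circle parameters are placeholders):

import android.graphics.Path;

// Illustrative sketch: let the framework compute the overlap region (API level 19+).
static Path lensOf(float cx1, float cy1, float r1, float cx2, float cy2, float r2) {
    Path c1 = new Path();
    c1.addCircle(cx1, cy1, r1, Path.Direction.CW);
    Path c2 = new Path();
    c2.addCircle(cx2, cy2, r2, Path.Direction.CW);
    Path lens = new Path();
    lens.op(c1, c2, Path.Op.INTERSECT);  // lens is now the outline of the overlap
    return lens;
}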
All of the above is untested, from reading the reference and knowing how similar frameworks behave.
I'm working on an Android game and would like to implement a 2D grid to visualize the effects of gravity on the playing field. I'd like to distort the grid based on various objects on my playing field. The effect I'm looking for is similar to the following from the Processing library:
Except that my grid will be simpler: 2D, and viewed strictly from the top, as if looking down at the playfield.
Can someone point me to an algorithm for drawing such a grid?
The one idea I came up with was to draw the lines as if they were "particles": start at one end of the screen and draw each line in multiple segments, treating each segment as a particle and calculating the effect of gravity at each segment's location.
The application is intended to run on Android.
Thanks
I would draw each line as a separate segment, as you mentioned. If the grid is sparse, it might be fastest.
If you are viewing the grid from above, you need to calculate x and y coordinate displacements. The easiest way would be to actually do the displacement along the z axis and then fake the perspective with x_result = x/z and y_result = y/z. You set z = 1 and make sure to vary it only relatively slightly (+-0.1, for instance).
Your z should be proportional to the sum of 1/(distance to the sphere)^2 over all the objects. This simulates how gravity works: it tapers off with the square of the distance. Great news: the square of the distance is just delta_x^2 + delta_y^2, so you save yourself that square root calculation, which is faster.
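A hedged sketch of that per-vertex displacement (all names and the strength constant are illustrative, not from any particular library):

// Illustrative sketch: displace a grid vertex "down" in z in proportion to the
// summed 1/d^2 of nearby masses, then fake the perspective by dividing by z.
static float[] distortGridPoint(float x, float y, float[][] masses, float strength) {
    float z = 1f;                               // undisturbed grid sits at z = 1
    for (float[] m : masses) {                  // m = { massX, massY }
        float dx = x - m[0];
        float dy = y - m[1];
        float d2 = dx * dx + dy * dy + 1e-3f;   // squared distance; epsilon avoids /0
        z += strength / d2;                     // gravity-like 1/d^2 contribution
    }
    // Keep strength small so z stays roughly within +-0.1 of 1, as suggested above;
    // you may also want to measure x and y relative to the screen centre first.
    return new float[] { x / z, y / z };        // faked perspective projection
}

Each grid line would then be drawn as a polyline through its displaced points, which also matches the "segments as particles" idea from the question.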