I know how to make a horizontal or vertical scroller game (like Mario). In this type of game the character always stays at the same distance from the viewer; it only moves left and right in a horizontal scroller, and up and down in a vertical scroller.
But there are 2D games where the character can move freely around the scene, for example graphic adventures.
So how can I build that ground so the character can move freely on it with a sense of depth?
You can see an example in this video: http://youtu.be/DbZZVbF8fZY?t=4m17s
Thank you.
This is how I would do that:
First, imagine that you are looking at your scene from the top, straight down at the ground, and set up your coordinate system that way, so every object in your scene has X and Y coordinates. Do all your object movement, your checks (when the character bumps into a wall or something) and your calculations in that 2D world.
Now, to draw your world: if you want the simpler option, without an isometric-perspective 3D look, just draw your background image first, then sort all your objects from far to near and draw them in that order. Divide your Y coordinates to squeeze the movement area a bit, and add some constant to Y to move that area down. If your characters can jump or fly (move along the Y axis), just shift the Y coordinate by some amount.
But if you want it to look more 3D you'll have to apply some kind of perspective transformation: multiply your X coordinate by Y and some constant (start with a value of 1 for the constant and tune it until it looks right). You could do a similar thing with the Y coordinate too, but I don't think it's needed for adventure games like this.
This is probably hard to understand without an image, but it's actually a very simple transformation; the sketch below illustrates it.
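Here is a minimal sketch of that idea in Java; the helper name worldToScreen and the three tuning constants are mine, not from the answer above:

```java
// A sketch of projecting top-down world coordinates onto the screen.
public final class GroundProjection {
    static final float Y_SQUEEZE = 0.5f;     // squeeze the walkable area vertically
    static final float Y_OFFSET = 300f;      // push the walkable area down the screen
    static final float PERSPECTIVE = 0.001f; // tune until the depth feels right

    // worldX/worldY come from the top-down 2D logic; jumpHeight is for
    // characters that leave the ground (0 when walking).
    static float[] worldToScreen(float worldX, float worldY, float jumpHeight) {
        float screenY = worldY * Y_SQUEEZE + Y_OFFSET - jumpHeight;
        // Scale X by Y: objects "closer" to the viewer spread farther apart.
        float screenX = worldX * (1f + PERSPECTIVE * worldY);
        return new float[] { screenX, screenY };
    }
}
```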
How to invert the Y axis? When I touch the bottom or top of the screen, the Y value is the opposite of what I want
You can't invert an axis per se. You see, in computer graphics the 2D coordinate system is a bit different from the canonical one taught at school in maths: the y-axis points in the opposite direction, so values are positive from the origin toward the bottom and negative toward the top, and the origin sits at the top-left corner of the screen.
If you can't get used to it, you can always negate the value you get: assuming ycoord holds the obtained value, ycoord = -ycoord gives you the value you're used to. And if you want the origin at the bottom-left corner, subtract your y-coordinate from the vertical resolution: ycoord = screenHeight - ycoord.
But keep in mind that you're going against the standard definition of coordinate systems in computer graphics.
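A minimal sketch of that conversion, assuming the vertical resolution is available as screenHeight:

```java
// ycoord: value from the touch event (y grows downward, origin top-left).
// screenHeight: vertical resolution in pixels (assumed available).
int toBottomLeftOrigin(int ycoord, int screenHeight) {
    return screenHeight - ycoord;   // origin now bottom-left, y grows upward
}
```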
I would say this is a duplicate of this question:
Move a shape to place where my finger is touched
Check my answer there, so I won't repeat myself.
Or, in short: use the camera.unproject() method to get world coordinates from screen coordinates.
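A minimal sketch, assuming a LibGDX OrthographicCamera field named camera and a hypothetical moveShapeTo method:

```java
// Inside an InputProcessor: convert a touch (screen pixels, y-down)
// into world coordinates.
@Override
public boolean touchDown(int screenX, int screenY, int pointer, int button) {
    Vector3 touch = new Vector3(screenX, screenY, 0f);
    camera.unproject(touch);        // converts the vector in place
    moveShapeTo(touch.x, touch.y);  // hypothetical game method
    return true;
}
```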
I've made a few 2D games using JPanels and the Graphics draw methods. For the game I'm currently working on, I'd like to be able to track the player as they walk, from an aerial view (sort of like the earlier Pokemon games, where the camera tracks you as you move), but without having to hard-code it (making it so that when I walk, the x and y values of every other feature, including the background, move in the opposite direction). Is there another way to do this, or will I have to hard-code it?
Use Graphics.translate(x, y), where x and y are derived from the position of your camera or player. This will translate the origin of the graphics context to the point (x, y).
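A minimal sketch of how that might look in a JPanel's paintComponent override; playerX/playerY, the half-screen centering offset, and drawWorld are assumptions:

```java
@Override
protected void paintComponent(Graphics g) {
    super.paintComponent(g);
    // Shift the origin so the player lands in the middle of the panel:
    // everything drawn afterwards moves opposite to the player "for free".
    g.translate(getWidth() / 2 - playerX, getHeight() / 2 - playerY);
    drawWorld(g);   // hypothetical method that draws tiles, sprites, etc.
}
```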
While working on projectiles, I thought it would be a good idea to rotate the sprite as well, to make it look nicer.
I am currently using a one-dimensional array, and the sprite's width and height can and will vary, which makes it a bit more difficult to figure out how to do this correctly.
I will be honest and say it straight out: I have absolutely no idea how to do this. I have done a few searches, and there were some things out there, but the best I found was this:
DreamInCode ~ Rotating a 1-dimensional Array of Pixels
This method works fine, but only for square sprites. I would also like it to work for non-square (rectangular) sprites. How could I set it up so that rectangular sprites can be rotated?
Currently I'm attempting to make a laser, and it would look much better if it didn't only travel along a vertical or horizontal axis.
You need to recalculate the coordinate points of your image (take a look here). You have to multiply every point (x, y) of your sprite by the rotation matrix to get the new point (x', y') in the rotated space.
You can assume that the bottom-left (or top-left, depending on your coordinate system's orientation) of your sprite is at (x, y) = (0, 0).
You should recalculate the color too: if you have a pure red pixel surrounded by blue pixels at (x, y) = (10, 5), after rotation it can move to, for example, (x, y) = (8.33, 7.1), which is not a real pixel position, because pixels don't have floating-point coordinates. So the pixel at the real position (x, y) = (8, 7) will no longer be pure red, but red with a small percentage of blue... but one thing at a time.
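A minimal sketch of this for a rectangular 1D pixel array; walking the destination pixels backwards through the rotation (inverse mapping) avoids holes, and the ARGB layout plus nearest-neighbour sampling are my assumptions:

```java
// Rotates a rectangular sprite stored as a 1D ARGB pixel array.
public final class SpriteRotator {

    /**
     * Rotates src (srcW x srcH pixels) by angleRad around its centre.
     * Returns a new square array large enough for any rotation
     * (side = ceil of the diagonal); uncovered pixels stay transparent.
     */
    public static int[] rotate(int[] src, int srcW, int srcH, double angleRad) {
        int side = (int) Math.ceil(Math.hypot(srcW, srcH));
        int[] dst = new int[side * side];      // all 0 = fully transparent
        double cos = Math.cos(-angleRad);      // inverse rotation
        double sin = Math.sin(-angleRad);
        double cxSrc = srcW / 2.0, cySrc = srcH / 2.0;
        double cxDst = side / 2.0, cyDst = side / 2.0;

        for (int y = 0; y < side; y++) {
            for (int x = 0; x < side; x++) {
                // Rotate the destination pixel back into source space
                // and sample the nearest source pixel.
                double dx = x - cxDst, dy = y - cyDst;
                int sx = (int) Math.round(cos * dx - sin * dy + cxSrc);
                int sy = (int) Math.round(sin * dx + cos * dy + cySrc);
                if (sx >= 0 && sx < srcW && sy >= 0 && sy < srcH) {
                    dst[y * side + x] = src[sy * srcW + sx];
                }
            }
        }
        return dst;
    }
}
```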
It's easier than you think: you only have to copy the original rectangular sprites, centered, into bigger square ones with a transparent background. PNG files support transparency, so I think you can use them.
This is a seemingly simple game mechanic that I've been trying to figure out how to do.
To try to explain, I will describe an idea (the problem):
Basically, say there's a vertical line that is centered on the screen. We have a sprite object that changes its horizontal velocity to dodge missiles, but in doing that the object just drifts away. How can I add a strong horizontal gravity force toward that center line so that my sprite "falls" back into it every time it boosts its velocity outwards?
I could post my source code, but it wouldn't be very helpful for solving the question in this particular situation.
I've been searching around for days trying to figure this out, so any help, especially with code examples, would be greatly appreciated!
I've programmed this kind of thing in the past. Gravity (in physics) is an acceleration, so:
1) if the sprite is to the right of the line, subtract from its horizontal velocity every 1/n seconds, and
2) if the sprite is to the left of the line, add to its horizontal velocity every 1/n seconds.
Experiment with adding/subtracting a constant, or a value that increases the farther the sprite is from the center line.
Either way you do it, that's going to create a pendulum effect, so you'll also have to add a damping factor if you don't want that. One simple approach: when the sprite is headed away from the center line, make the value you add/subtract larger than when it is heading back toward the line. That way the "gravity" that pulls the sprite to a stop is stronger than the acceleration that brings it back to the center line. A sketch of this follows.
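A minimal sketch of that asymmetric pull in plain Java; the Sprite class, its x/xSpeed fields, and the two gravity constants are hypothetical:

```java
static final float GRAVITY_AWAY = 0.6f;  // stronger pull while fleeing the line
static final float GRAVITY_BACK = 0.3f;  // gentler pull while returning

// Called every frame; centerX is the x position of the vertical line.
void applyCenterLineGravity(Sprite sprite, float centerX) {
    float toLine = Math.signum(centerX - sprite.x);   // +1 or -1, toward the line
    boolean headingAway = Math.signum(sprite.xSpeed) == -toLine;
    float pull = headingAway ? GRAVITY_AWAY : GRAVITY_BACK;
    sprite.xSpeed += pull * toLine;   // accelerate toward the center line
    sprite.x += sprite.xSpeed;
}
```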
As you are using libgdx you should also use a camera, so you don't have to calculate everything in pixels. For example, say your screen is 16 world units wide and 9 world units high (16:9 aspect ratio). Then the center of gravity is in the middle of those 16 units, at x = 8. Now you can say: if (player.center.x < 8f) { player.xSpeed += GRAVITY_HORIZONTAL; } and if (player.center.x > 8f) { player.xSpeed -= GRAVITY_HORIZONTAL; }. In this case the gravity is a constant value, but as @BrettFromLA said you can also let the value grow as the distance to the center grows.
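A minimal sketch of the camera setup that makes those world units work, assuming LibGDX's OrthographicCamera and a SpriteBatch field named batch:

```java
// 16 x 9 world units regardless of the device's pixel resolution.
OrthographicCamera camera = new OrthographicCamera();
camera.setToOrtho(false, 16f, 9f);           // y-up, viewport in world units
camera.update();
batch.setProjectionMatrix(camera.combined);  // draw everything in world units
```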
I'm experimenting with LibGDX and 3D with a perspective projection. Right now I'm trying to determine the outermost bounds of my viewport in world space at z = 0.0, in order to draw a coordinate grid no larger than necessary. However, I seem to have outpaced my education, in that I haven't taken a formal linear algebra class and am still a little fuzzy on matrix math.
Is there a way to determine where I should start and stop drawing lines without resorting to picking and drawing a transparent plane to intersect with?
LibGDX's unproject function takes screen coordinates in a Vector3 and returns a Vector3 in world space, somewhere between the near and far clipping planes depending on the z you provide. However, given that I have a translated and rotated Camera (an encapsulation of the view-projection matrix and a slew of convenience methods), it occurs to me that I can't pick an arbitrary z for the window coordinate vector and just set it to 0.0 after unprojection, as that point probably won't be the farthest viewable point in the viewport. So how do I know what z value to use in the window coordinate that will give me the x and y I need in world space at z = 0.0?
EDIT (UPDATE):
So apparently the problem I'm looking at is ray-plane intersection, which means casting rays. So now I suppose my question is this: is casting four rays per render loop (or, I suppose, whenever the camera has moved) worth the payoff of being able to dynamically draw a world-space coordinate grid no larger than the viewport? If not, is there a cheaper algorithm I can use to estimate where I should start and stop drawing lines?
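In case it helps, here's a minimal sketch of those four intersections using LibGDX's pick rays and Intersector; the camera field is assumed, and in practice you'd check the boolean return value in case a ray misses the plane:

```java
// Intersect a pick ray through each screen corner with the z = 0 plane.
// Four ray-plane tests are cheap: a handful of multiplies each.
Plane ground = new Plane(new Vector3(0f, 0f, 1f), 0f);   // the z = 0 plane
Vector3[] corners = { new Vector3(), new Vector3(), new Vector3(), new Vector3() };
int w = Gdx.graphics.getWidth(), h = Gdx.graphics.getHeight();
int[][] screen = { {0, 0}, {w, 0}, {0, h}, {w, h} };
for (int i = 0; i < 4; i++) {
    Ray ray = camera.getPickRay(screen[i][0], screen[i][1]);
    // Returns false if the ray never hits the plane (e.g. camera parallel to it).
    Intersector.intersectRayPlane(ray, ground, corners[i]);
}
// The min/max of corners[i].x and corners[i].y bound the grid to draw.
```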