How would I create horizontally centered "gravity"? - libGDX - java

This is a seemingly simple game mechanic that I've been trying to figure out how to do.
To try to explain, I will describe an idea (problem):
Basically we say there's a vertical line that is centered in the
screen.
We have a sprite that changes its horizontal velocity to dodge
missiles; however, in doing so the sprite just drifts
away.
How can I add a strong gravity force to the horizontal "center line"
of my screen so that my sprite will "fall" back into it every time it
boosts its velocity outwards?
I could post my source code, but it wouldn't be much help in answering the question in this particular situation.
I've searched around for days trying to figure this out, so any help, especially with code examples, would be greatly appreciated!

I've programmed this type of thing in the past. Gravity (in physics) is an acceleration, so
1) if the sprite is to the right of the line you subtract from its horizontal velocity every 1/n seconds, and
2) if the sprite is to the left of the line you add to its horizontal velocity every 1/n seconds.
Experiment with adding/subtracting a constant, or with adding/subtracting a number that increases the farther the sprite is from the center line.
Either way you do it, that's going to create a pendulum effect. You'll also have to add a damping factor if you don't want that. One simple approach: if the sprite is heading away from the center line, the value you add or subtract is larger than when the sprite is heading back toward the center line. That way the "gravity" that pulls the sprite to a stop is stronger than the acceleration that brings it back to the center line.
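A minimal sketch of that idea in plain Java (the field names and the constants are illustrative, not taken from the question's code):

// Damped "gravity" toward a vertical center line. Constants are illustrative.
public class CenterGravity {
    static final float GRAVITY = 600f;  // acceleration toward the line, in px/s^2
    static final float DAMPING = 2.5f;  // extra pull while moving away from the line

    float centerX;    // x position of the center line
    float x;          // sprite position
    float velocityX;  // sprite horizontal velocity

    public void update(float delta) {
        float offset = x - centerX;                   // > 0 when right of the line
        float accel = -Math.signum(offset) * GRAVITY; // always points at the line
        // Pull harder while the sprite heads away from the line, so it
        // decelerates quickly instead of swinging like a pendulum forever.
        if (Math.signum(velocityX) == Math.signum(offset)) {
            accel *= DAMPING;
        }
        velocityX += accel * delta;
        x += velocityX * delta;
    }
}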

As you are using libGDX, you should also use a camera, so you don't have to calculate everything in pixels. For example, say your screen is 16 world units wide and 9 world units high (16:9 aspect ratio). Then the center of gravity lies in the middle of those 16 units, at x = 8. Now you can say: if (player.center.x < 8f) { player.xSpeed += GRAVITY_HORIZONTAL; } and if (player.center.x > 8f) { player.xSpeed -= GRAVITY_HORIZONTAL; }. In this case the gravity is a constant value, but as @BrettFromLA said, you can also let the value grow as the distance to the center grows.
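A rough sketch of that setup with libGDX's OrthographicCamera; the Player type with its center and xSpeed fields is an assumption that mirrors the pseudocode above, and the pull is scaled by delta to keep it frame-rate independent:

import com.badlogic.gdx.graphics.OrthographicCamera;

public class GravityExample {
    static final float GRAVITY_HORIZONTAL = 0.5f; // world units/s^2, tune to taste
    static final float CENTER_X = 8f;             // middle of a 16-unit-wide world

    OrthographicCamera camera = new OrthographicCamera();

    public void create() {
        camera.setToOrtho(false, 16f, 9f); // 16x9 world units instead of pixels
    }

    void applyCenterGravity(Player player, float delta) {
        if (player.center.x < CENTER_X) {
            player.xSpeed += GRAVITY_HORIZONTAL * delta;
        } else if (player.center.x > CENTER_X) {
            player.xSpeed -= GRAVITY_HORIZONTAL * delta;
        }
    }
}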

Related

Libgdx Y axis, how to invert it?

How do I invert the Y axis? When I touch the bottom or the top of the screen, the Y value is the opposite of what I want.
You can't invert an axis per se. In computer graphics, the 2D coordinate system is a bit different from the canonical one taught in school maths: the y-axis points the other way, so values below the origin are positive and values above it are negative, and the origin sits at the top left corner of the screen. If you can't get used to that, you can always negate the value you get: assuming ycoord holds the obtained value, ycoord = -ycoord gives you the value you're used to. And if you want the origin at the bottom left corner, compute ycoord = verticalResolution - ycoord instead.
But keep in mind that you're going against the standard definition of the coordinate system in computer graphics.
I would say this is a duplicate of this question:
Move a shape to place where my finger is touched
Check my answer there, so I won't repeat myself.
Or in short - use camera.unproject() method to get world coordinates from screen coordinates.
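For reference, a minimal version of that pattern inside an InputProcessor; camera is assumed to be the OrthographicCamera you render with:

import com.badlogic.gdx.math.Vector3;

private final Vector3 touchPoint = new Vector3(); // reused to avoid per-touch allocation

@Override
public boolean touchDown(int screenX, int screenY, int pointer, int button) {
    touchPoint.set(screenX, screenY, 0);
    camera.unproject(touchPoint); // converts y-down screen coords to world coords
    // touchPoint.x and touchPoint.y are now in world space, with y pointing up
    return true;
}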

Get coordinates of displayed rectangle

I'm trying to create a game for an Android device and I have a small question about the rendering of the scene. Specifically, I want to draw a square of a precise size, but I'm not quite sure how to get the coordinates of the screen borders in OpenGL coordinates. My application is set to landscape mode, so the computation should be easier.
I have drawn a square with a side of 2 and I have the impression that the square takes up the whole height of the screen. Since I know the resolution of my screen, which is 1920x1080, I can compute the width of my scene. Then, by drawing several squares, I found the coordinates of one corner.
This way of computing the coordinates is a bit awkward, and I'm not sure the computation will always give a good answer. Is there a nicer, more reliable way to compute those coordinates?
Thank you in advance!
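For what it's worth, the arithmetic described above can be done directly from the aspect ratio instead of by drawing test squares. A sketch, assuming the square's height of 2 units fills the screen vertically, as in default OpenGL clip space:

// If the visible height is 2 units (y from -1 to 1), the visible width in
// landscape follows from the aspect ratio; no trial squares needed.
float screenWidth  = 1920f;
float screenHeight = 1080f;
float worldHeight  = 2f;
float worldWidth   = worldHeight * (screenWidth / screenHeight); // about 3.56 units
// With the origin in the center, the screen borders sit at:
float left = -worldWidth / 2f, right = worldWidth / 2f;          // about +/-1.78
float bottom = -1f, top = 1f;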

How to do 2D ground with depth sense in libgdx?

I know how to make a horizontal or vertical scroller game (like Mario). In that type of game the character always stays at the same distance from the user: it only moves left and right in a horizontal scroller, and up and down in a vertical one.
But there are 2D games where the character can move freely around the scene, such as graphic adventures.
So how can I build that kind of ground so the character can move freely on it with a sense of depth?
An example can see in this video: http://youtu.be/DbZZVbF8fZY?t=4m17s
Thank you.
This is how I would do that:
First imagine that you are looking down at your scene from above. Set your coordinate system up like that, so every object in your scene has X and Y coordinates. Do all your object movement, collision checks (when the character bumps into a wall or something), and other calculations in that 2D world.
Now, to draw your world: if you want the simpler look, without an isometric 3D perspective, just draw your background image first, then order all your objects from far to near and draw them in that order. Divide your Y coordinates to squeeze the movement area a bit, and add some constant to Y to move that area down the screen. If your characters can jump or fly (move along the screen's Y axis), just offset the drawn Y coordinate by that amount. A sketch of this is below.
But if you want it to look more 3D, you'll have to apply some kind of perspective transformation - multiply your X coordinate by Y and some constant (start with a constant of 1 and tune it until it looks right). You can do a similar thing with the Y coordinate too, but I don't think that's needed for adventure games like this.
This is probably hard to understand without an image, but it's actually a very simple transformation.
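A sketch of the simpler, non-perspective version in libGDX-style Java; the GameObject type and the constants are illustrative:

import java.util.List;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;

public class DepthRenderer {
    static final float SQUEEZE = 0.5f;      // divide world Y to flatten the floor area
    static final float FLOOR_OFFSET = 40f;  // shift the floor area down the screen

    public static class GameObject {        // illustrative scene object
        public float x, y, jumpHeight;      // x/y are top-down world coordinates
        public Texture texture;
    }

    public void drawScene(SpriteBatch batch, Texture background, List<GameObject> objects) {
        // Painter's algorithm: far objects (large world Y) first, near ones last.
        objects.sort((a, b) -> Float.compare(b.y, a.y));
        batch.draw(background, 0, 0);
        for (GameObject o : objects) {
            float screenY = o.y * SQUEEZE + FLOOR_OFFSET + o.jumpHeight;
            batch.draw(o.texture, o.x, screenY);
        }
    }
}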

box2dlights set scale from box2d

I'm making a game in libGDX and I decided to use box2dlights to render the lights. I had not used cameras much up to this point, because I already had most of the code done in pure LWJGL. There are two main operations that I need to do with the coordinates of everything.
The first is to translate the screen to the position of the map (the map is bigger than the screen, and the position of the player defines what portion of the map is visible). So for example, if the player is at (50, 30), I translate everything by (-50, -30), so that the player is in the middle.
The second thing is to multiply everything by a constant, that is the conversion from box2d meters to pixels on screen.
However, since I do not have access to box2dlights' rendering, I need to pass these two pieces of information to the ray handler, and the only way to do that is via a Camera. So I created an OrthographicCamera and translate it by deltaS every tick before drawing, instead of manually subtracting deltaS from every coordinate. That part works perfectly. On the other hand, the zoom does not work the way I want, because it zooms in and out around the middle of the screen. For example, if I set zoom = 2, everything is scaled down by a factor of two, but centered on the screen: the coordinate (0,0) is not at (0,0), as I would expect, but at screen.width/4.
Is there any way to set up the camera so that it simply multiplies every coordinate by a number, as you would assume a zoom function should do? Or is there a way to do it directly in box2dlights?
I don't know if my problem is very clear or common, but I can't find anything anywhere.
I finally figured it out! The problem was that I needed to set the zoom before I used
camera.setToOrtho(true, SCREEN_WIDTH, SCREEN_HEIGHT);
Because that method uses the current zoom to set its properties. Hope this helps!
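In code, the fix described above looks roughly like this (rayHandler, SCREEN_WIDTH, and SCREEN_HEIGHT are assumed from the question's setup):

// Order matters: setToOrtho(...) reads the camera's current zoom when it
// positions the camera, so set the zoom first.
OrthographicCamera camera = new OrthographicCamera();
camera.zoom = 2f;                                     // or your meters-to-pixels scale
camera.setToOrtho(true, SCREEN_WIDTH, SCREEN_HEIGHT); // y-down, as in the question
camera.update();
rayHandler.setCombinedMatrix(camera.combined);        // hand the transform to box2dlights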

Rotating an image back to its original state

I have a program that needs to take in a photo, taken by an iPhone (or any decent camera), of a 7x10 grid with a thick black border around the edges. The image I receive can be rotated to the right or to the left (there's no need to worry about skew). I already have an image of the grid in its original state, but I need to take the picture I receive and rotate it back to its "perfect/original" state.
Idea 1: Performance Hog/Bad Results
Threshold the picture that I receive and the perfect grid image I already have. Compare each pixel at 0 rotation, get a total score, and save it. Do this while rotating the image in 1-degree increments from 1 to 359. The lowest score gives the rotation needed to get the picture back to its original state.
Idea 2: Still Unsure How To Go About Doing This
Threshold the picture that I receive and the perfect grid image I already have. Draw one line through the center of the picture vertically and one horizontally. Find the rotation based on the count of black pixels that the vertical and horizontal lines pass through. This would require some sort of trigonometry that I'm not too great at understanding.
Does anyone have any other ideas for getting this working?
Any help for pointing me in the right direction would be greatly appreciated!
Thanks!
Instead of drawing one horizontal and one vertical line, draw two horizontal lines (say, one at each third of the picture's height). Only look at the left halves of these lines and count how many black pixels lie on the path of each (a1 and a2). You also have to keep track of the distance between the two lines, in pixels, d.
Using this notation, your desired angle is:
alpha = atan2(a2 - a1, d)
and a counterclockwise rotation by alpha will bring the white portion of the picture into proper alignment.
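A sketch of that calculation in Java; countBlackPixelsLeftHalf is a hypothetical helper that counts black pixels along the left half of one row of the thresholded image:

// Two horizontal scan lines at one third and two thirds of the image height.
int y1 = height / 3;
int y2 = 2 * height / 3;
int a1 = countBlackPixelsLeftHalf(thresholded, y1); // hypothetical helper
int a2 = countBlackPixelsLeftHalf(thresholded, y2);
int d  = y2 - y1;                                   // distance between the lines

// The tilt follows from the difference in counts over the line spacing.
double alpha = Math.atan2(a2 - a1, d);              // in radians
double degrees = Math.toDegrees(alpha);             // rotate counterclockwise by this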
