I am developing a simple game in which I have encountered a small but important issue.
I have implemented absolute rotation in my logic.
When I start rotating an object that does not yet have any rotation, it works fine and I can rotate it in any direction without any problem, as shown in the following link.
Initial Rotation Video
Now the problem arises when the object already has some rotation: when I try to rotate it in one direction, instead of rotating in the desired direction, the rotation always starts over from the initial rotation, as shown in the following link.
Rotation issue when shape has some rotation
I think the video shows everything; still, if you have any questions please ask me.
I think the problem is that there should be a relative rotation toward the mouse pointer from whichever circle is selected.
Now, about my logic:
Mouse Press
In the mouse press event I just check:
whether a shape is selected on the canvas; if yes,
whether one of the four circles contains the mouse point; if yes,
then initiateRotation.
Mouse Drag
Using Vector Maths
I update the motion according to the mouse points,
calculate the rotation angle with the following method:
Math.atan2(rotationVector.getY(), rotationVector.getX());
and apply the rotation to the shape.
The rotation vector I get from this class:
Vector Rotation
I call the above class's startMotion in the mouse press event and updateMotion in the mouse drag event.
What am I missing or doing wrong?
We need to see some code to be able to help you out. It looks like you reset the rotation whenever you initiateRotation, and then the object quickly rotates in place according to your mouse position when you drag.
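A minimal sketch of the relative approach, using hypothetical names (shape, getRotation/setRotation, angleTo) rather than your actual API: remember both the shape's rotation and the pointer angle when the drag starts, then apply only the difference while dragging.

// Sketch only; shape, angleTo(...) and the field names are assumptions, not your real API.
private double startShapeAngle;   // shape rotation when the drag began
private double startPointerAngle; // pointer angle (from the shape center) when the drag began

void initiateRotation(double mouseX, double mouseY) {
    startShapeAngle = shape.getRotation();
    startPointerAngle = angleTo(mouseX, mouseY);  // e.g. Math.atan2(dy, dx) from the shape center
}

void updateRotation(double mouseX, double mouseY) {
    double delta = angleTo(mouseX, mouseY) - startPointerAngle;
    shape.setRotation(startShapeAngle + delta);   // relative: continues from the old rotation
}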
What's the issue?
I am taking pictures with a resolution of, let's say, 4096x3072. Now, I have a polygon inside this picture with coordinates {(x1,y1),(x2,y2),(x3,y3),(x4,y4)}. I want the focus of my camera to be at the center of this polygon.
Anything on the internet?
I have looked over the internet, but most of the solutions are for finger touch events. They get coordinates like motion.eventX() and motion.eventY(), but these coordinates map to the screen resolution, not the picture resolution.
I am not 100% sure, but I think this will work for you: you have n points that define the polygon. Add all the x values and divide them by the number of points: X = (x1 + x2 + ... + xn) / n. Do the same with the y values: Y = (y1 + ... + yn) / n. The center of the polygon should be (X, Y). I just came up with it and it works for the example I tested it on. If it doesn't work on some case and I misthought it, tell me.
EDIT: read the question wrong, be right back with an edit with the actual answer :D
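For reference, a minimal sketch of the vertex-averaging idea from the answer above (the array layout and method name are my own assumptions):

// Averages the vertices: X = (x1 + ... + xn) / n, Y = (y1 + ... + yn) / n
static double[] polygonCenter(double[] xs, double[] ys) {
    double sumX = 0, sumY = 0;
    for (int i = 0; i < xs.length; i++) {
        sumX += xs[i];
        sumY += ys[i];
    }
    return new double[] { sumX / xs.length, sumY / ys.length };
}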
I'm making a game in libGDX and I decided to use box2dlights to render the lights. I had not used cameras much up to this point, because I already had most of the code done in pure LWJGL. There are two main operations that I need to do with the coordinates of everything.
The first is to translate the screen to the position of the map (the map is bigger than the screen, and the position of the player defines what portion of the map is visible). So for example, if the player is at (50, 30), I translate everything by (-50, -30), so that the player is in the middle.
The second is to multiply everything by a constant: the conversion factor from Box2D meters to pixels on screen.
However, since I do not have access to box2dlights' rendering, I need to pass these two pieces of information to the ray handler, and the only way to do that is via a Camera. So I created an OrthographicCamera and translate it by deltaS every tick before drawing, instead of manually subtracting deltaS from every coordinate. That part works perfectly. On the other hand, the zoom does not seem to work, because it zooms in and out around the middle of the screen. For example, if I set zoom = 2, the screen is scaled down by a factor of two, but the scaling is centered on the screen. The coordinate (0,0) is not (0,0), as I would expect, but instead ends up at screen.width/4.
Is there any way to set the camera so that it multiplies every coordinate by a number, as you would assume the zoom function should do, or is there any way to do it directly in box2dlights?
I don't know if my problem is very clear or common, but I can't find anything anywhere.
I finally figured it out! The problem was that I needed to set the zoom before I used
camera.setToOrtho(true, SCREEN_WIDTH, SCREEN_HEIGHT);
Because that method uses the current zoom to set its properties. Hope this helps!
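For anyone hitting the same ordering issue, here is a minimal sketch (constant and variable names are my own assumptions) of setting the zoom before setToOrtho and then handing the camera to box2dlights:

OrthographicCamera camera = new OrthographicCamera();
camera.zoom = 2f;                                       // set the zoom first; setToOrtho reads it
camera.setToOrtho(true, SCREEN_WIDTH, SCREEN_HEIGHT);   // assumed screen-size constants

// per frame: move the camera to follow the player, then let box2dlights use it
camera.translate(deltaX, deltaY);                       // deltaX/deltaY: assumed per-tick movement
camera.update();
rayHandler.setCombinedMatrix(camera);                   // rayHandler is the box2dlights RayHandler
rayHandler.updateAndRender();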
I'm working on a 2D top-down tile map game. I've been looking into generating map coordinates (x/y) that the player can use to navigate the ("infinite", Perlin/other-generated terrain) map, and into fixing an issue with the noise generation.
I've seen solutions and have been told to translate world coords to screen coords and vice versa - but I'm unable to figure out how you can properly define fixed map coordinates because rendering something on the screen is done with coordinates relative to the frame/pane... so 0,0 is always the top-left of the window, no matter where the map is.
I could easily calculate coordinates for the map (there are a tile columns and b tile rows between your character's location and the map center), but I can't understand how to define a fixed-on-map starting location. The player may start the game in different locations once I enable a save feature, so it can't be based off that.
When doing something like this, all coordinates should be in "world coordinates". So your player's location is where the player is on the infinite map, not where they are on the screen.
To render everything, you need a method that takes world coordinates and transforms them into screen coordinates and vice versa.
To start rendering, take the screen coordinate (0,0) and transform it to a world coordinate to get the tile to render in the top-left corner. Same with clicking something: take the mouse coordinates, convert them, and then look up on the world map what's there.
When moving the player, update its world coordinates, and so on.
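A minimal sketch of the two conversions (the tile size and the camera fields are my own assumptions; the camera's world position defines what screen (0,0) maps to):

// Minimal world <-> screen conversion sketch; names and tile size are assumptions.
public final class Coords {
    static final int TILE_SIZE = 32;  // pixels per tile (assumed)

    // world tile coordinates -> pixel position on screen, given the camera's world position
    static int worldToScreenX(double worldX, double cameraX) {
        return (int) Math.round((worldX - cameraX) * TILE_SIZE);
    }
    static int worldToScreenY(double worldY, double cameraY) {
        return (int) Math.round((worldY - cameraY) * TILE_SIZE);
    }

    // pixel position on screen -> world tile coordinates
    static double screenToWorldX(int screenX, double cameraX) {
        return cameraX + (double) screenX / TILE_SIZE;
    }
    static double screenToWorldY(int screenY, double cameraY) {
        return cameraY + (double) screenY / TILE_SIZE;
    }
}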
I have an FPS "camera", and just recently managed to set up mouse movement to rotate the viewing angle. The one problem with the camera is that the mouse can leave the window, and then the angles will not rotate anymore. I know I can use a Robot method like mouseMove(); however, I've heard that it makes the camera rotation feel very jerky. Is there any other way to keep the mouse in the window, say, like Minecraft? I'm using Minecraft as an example because my program uses LWJGL too, and I was wondering how Notch does it. Any suggestions?
Call Mouse.setGrabbed(true) at start-up,
and then, for every game loop iteration (frame):
catch the mouse movement with Y_Angle += Mouse.getDX() * 0.1f,
then rotate your view matrix around the Y axis by Y_Angle degrees/radians.
For rotation around the X and Z axes use Mouse.getDY(), and work out the right matrix rotation for those on your own, but this is the main idea.
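A minimal sketch of how these calls can fit together in an LWJGL 2 loop (the class and field names are my own assumptions; this uses the old fixed-function glRotatef style):

import org.lwjgl.input.Mouse;
import org.lwjgl.opengl.GL11;

public class FpsLook {
    private float yawDeg = 0f;    // accumulated yaw (assumed name)
    private float pitchDeg = 0f;  // accumulated pitch (assumed name)

    public void init() {
        Mouse.setGrabbed(true);   // grab the cursor once after the Display is created
    }

    // call once per frame, before drawing the scene
    public void applyLook() {
        yawDeg   += Mouse.getDX() * 0.1f;   // horizontal mouse delta since the last call
        pitchDeg -= Mouse.getDY() * 0.1f;   // vertical mouse delta since the last call

        GL11.glLoadIdentity();
        GL11.glRotatef(pitchDeg, 1f, 0f, 0f);  // look up/down
        GL11.glRotatef(yawDeg,   0f, 1f, 0f);  // look left/right
        // ... then translate by the negated camera position and render
    }
}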
I have tried to use different methods but to no avail. So I come looking for a fresh point of view.
I want to have a sprite follow the touch coordinates of the user. So if they drag their finger across the screen, the sprite would move or tween towards the touch coordinates, but continuously update to 'follow' the user's finger. If the user stopped moving their finger, the sprite would eventually come to a stop at the final touch coordinates.
I can get the sprite to move and tween to a single set of coordinates, but not continuously update to follow along. This is for an Android game; it's in Java. This is one of the basic functions I need, so I'm not too far along. That being said, I can accept direction for OpenGL or for Canvas.
Any help or direction would be much appreciated. Thank you!
On every frame, do
Pos = (Pos * 0.95) + (LastTouch * 0.05);
Adjust the ratios until it feels right. Just make sure they add up to 1.
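A minimal sketch of that per-frame follow applied to both axes (the field names are my own assumptions; lastTouchX/Y would be updated from your ACTION_DOWN / ACTION_MOVE handling):

class SpriteFollower {
    static final float FOLLOW = 0.05f;  // weight of the touch point; closer to 1 = snappier

    float spriteX, spriteY;             // current sprite position
    float lastTouchX, lastTouchY;       // latest touch coordinates

    // call once per frame
    void update() {
        spriteX = spriteX * (1f - FOLLOW) + lastTouchX * FOLLOW;
        spriteY = spriteY * (1f - FOLLOW) + lastTouchY * FOLLOW;
    }
}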