Scale dimensions in a physics simulation - java

I need some help with a physics simulation I'm writing in Java. The simulation models the free fall of a body, and I'm not using any third-party libraries.
I have an applet (1400 px wide, 700 px high) and an oval sprite falling down. Gravity is set to 10 m/s². I apply Newton's second law to the oval sprite, and I use the RK4 algorithm to compute the x and y coordinates of the sprite over time.
This all works fine... except that I don't know how to scale the dimensions I use in my simulation.
For example, I would like 1 px to represent 1 cm (both width and height), so that my 1400 px × 700 px applet represents 14 m × 7 m in the real world. I tried the
Graphics2D.scale()
method, but it doesn't seem to work. I also thought about changing the gravity, but that doesn't seem appropriate to me...
Could someone tell me the proper way to scale my dimensions?

You have a 1400 x 700 pixel applet drawing area.
You have a 14 x 7 meter physics area.
In order to scale from meters to pixels, you have to use a scaling factor.
1400 pixels / 14 meters = 100 pixels per meter.
700 pixels / 7 meters = 100 pixels per meter.
So far so good. If you had two different scaling factors, your drawing area would be distorted.
Let's assume that the oval started at (0, 0).
So we calculate the first position of the oval. Let's assume the first calculated position is
x = 2.45
y = 3.83
So, using the scaling factor we came up with:
pixel x = 2.45 meters x 100 pixels per meter = 245 pixels.
pixel y = 3.83 meters x 100 pixels per meter = 383 pixels.
Our physics area has increasing x to the right and increasing y down. Fortunately, our drawing area has increasing x to the right and increasing y down.
So, we don't have to worry about changing signs.
Draw the oval at (245, 383).
Calculate next x, y position and repeat.
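The conversion described above can be sketched as a small helper. The class and method names here are my own, not from the question:

```java
// Convert simulation coordinates (meters) to drawing coordinates (pixels).
// PIXELS_PER_METER is the single scaling factor derived above:
// 1400 px / 14 m = 700 px / 7 m = 100 px per meter.
public class WorldToScreen {
    public static final double PIXELS_PER_METER = 100.0;

    // Round to the nearest pixel rather than truncating, so positions
    // don't drift visibly toward one side of a pixel boundary.
    public static int toPixels(double meters) {
        return (int) Math.round(meters * PIXELS_PER_METER);
    }
}
```

With x = 2.45 m and y = 3.83 m, toPixels returns 245 and 383, which is where the oval gets drawn. Keeping the physics entirely in meters and converting only at draw time also means you never have to touch the gravity constant.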

Related

Relation between x and y axes and pixels in Java

I am making the game Breakout in Java and I have one slight problem:
if (ballLocation.y >= 430) { // baseline
    // determine whether the ball is touching the bat
    // by establishing a range of x values
    if (ballLocation.x >= batLocation.x && ballLocation.x <= batLocation.x + width) {
        // change the direction of the ball so that it appears
        // to bounce off when it hits the bat
        directionY = -2;
    }
}
If the ball hits the bat further to the left, the if statement is true; however, if it hits further to the right, it is false, even though I can see the ball hitting the bat. So I think the problem must be with adding the width variable to batLocation.x: batLocation.x is in units on the x axis, while width is the width of the image in pixels (which is 10; the bat and ball are both PNG files, not drawn by the game).
So my best guess is that my code fails because I'm making a false assumption: that adding the width in pixels to the starting x point gives me the far edge of the bat, when perhaps 1 pixel is not equal to 1 unit on the x axis.
So is this the case? Is 1 pixel not equal to one unit in terms of the x or y axes? And if it is not, then how many units in terms of x is my bat's width if it is 10 pixels in width?
I have got the latest version of JDK 8.
There isn't really a built-in x and y axis, except when you draw, and that's always in pixels. What units have you made x and y in? When you draw, do you use those values as the screen location? If so, try RealSkeptic's suggestion; otherwise, the units are however you defined them.
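Assuming both coordinates really are in pixels (as the answer above suggests), one thing worth checking is that the test only looks at the ball's single x coordinate, not its full width. A symmetric overlap test counts both edges of the ball; the names ballWidth and batWidth below are illustrative, not from the question:

```java
// Axis-aligned overlap test on the x axis: the ball and bat intersect
// when the ball's left edge is left of the bat's right edge AND the
// ball's right edge is right of the bat's left edge.
// All values are assumed to be in pixels.
public class BatCollision {
    public static boolean overlapsX(int ballX, int ballWidth,
                                    int batX, int batWidth) {
        return ballX < batX + batWidth && ballX + ballWidth > batX;
    }
}
```

With this test, a ball whose body overlaps either end of the bat still registers a hit, even when its reference x coordinate lies just outside the bat's own range.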

How do meters-to-pixels conversion and scaling work in Box2d?

This is my first time using Box2d with Libgdx, and I'm confused because now I'm dealing with meters and not pixels. For example, if I want to set the size of my shape to 80x80 pixels, how do I do that in meters? Obviously I would have to zoom the camera, which I have no idea how to do. I want to know how you can set the number of pixels that make up one meter.

libgdx sprite dimension meters or pixels?

I have been reading "Learning Libgdx Game development". I tried the below snippet:
// First the camera object is created with viewport of 5 X 5.
OrthographicCamera camera = new OrthographicCamera(5, 5);
I have a texture having a dimension of 32 pixels by 32 pixels. I form a sprite out of this
Sprite spr = new Sprite(texture);
// I set the size of Spr as
spr.setSize(1,1);
According to the book the dimensions above are meters and not pixels.
What I don't understand is how is mapping from meters to pixels happening on the screen? When I draw the sprite on the screen the size is not even half a meter let alone 1.
Also, the size of the underlying texture is 32 x 32 pixels. When I resize it, the size of my sprite also changes.
Then, what would be the dimensions of spr.setPosition(x, y)? Will they be meters or pixels?
The library uses pixels for dimensions like texture size, and meters for in-game units.
setPosition will move an object in game units. When you move an object X game units, the number of pixels changes based on the camera's projection matrix amongst other settings.
If you think about it, it wouldn't make sense to move in pixels. If camera A is zoomed in more than camera B, moving X pixels in the view of each camera would correspond to two different in-game distances.
Edit: Sorry, I made some assumptions about your understanding above, partially misunderstood the question, and frankly used misleading wording. The key is that the convention of meters for units is not built-in; it's one that you enforce, because a ratio of one pixel to one meter in Box2D wouldn't make sense. The wording I used implied that setPosition internally cares about meters, but you should be doing the scaling yourself. Often the ratio I see in libgdx is 30 pixels = 1 meter.
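That scaling convention can be captured in a tiny helper you apply at the boundary between rendering and physics. This is a sketch of the general idea, not libgdx or Box2D API; the class name and PPM constant are my own, and 30 px = 1 m is just the common ratio mentioned above:

```java
// A user-chosen pixels-per-meter ratio. Box2D itself has no such
// constant; you pick one and apply it consistently everywhere.
public class Units {
    public static final float PPM = 30f; // 30 pixels = 1 meter

    public static float toMeters(float pixels) {
        return pixels / PPM;
    }

    public static float toPixels(float meters) {
        return meters * PPM;
    }
}
```

An 80x80-pixel shape would then be created in the physics world as roughly 2.67 m x 2.67 m, and converted back to pixels only when drawing.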

Finding the footprint of an isometric entity

I'm working on making a 2D isometric engine in Java because I like suffering, I guess. Anyways, I'm getting into collision detection and I've hit a bit of a problem.
Characters in-game are not restricted to movement from tile to tile - they move freely. My problem is that I'm not sure how to stop a player from colliding with, say, a crate, without denying them access to the tile.
For instance, say the crate was on .5 of a tile, and then the rest of the crate was off the tile, I'd like the player to be able to move on to the free .5 of the tile instead of the entire tile becoming blocked.
The problem I've hit is that I'm not sure how to approximate the size of the footprint of the object. Using the image's dimensions doesn't work very well, since the object's "height" in gamespace translates to additional floorspace being taken up by the image.
How should I estimate an object's size? Mind, I don't need pixel-perfect detection. A rhombus would work fine.
I'm happy to provide any code you might need, but this seems like a math issue.
From the bounding rectangle of the sprite, you can infer the height of a rhombus that fits inside, but you cannot precisely determine the two dimensions on the floor, as each dimension contributes equally to the width and height of the sprite. However, if you assume that the base of the rhombus is square, then you can determine the length of its side as well.
If the sprite is W pixels wide and H pixels high, the square base of the rhombus has a side of W / sqrt(3) and the height of the rhombus will be H - (W / sqrt(3)). This image of some shapes in isometric projection can be helpful to understand why these formulas work.
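The two formulas above translate directly into code. This is a minimal sketch assuming, as the answer does, a square base in true isometric projection; the class and method names are my own:

```java
// Estimate the floor footprint of an isometric sprite from its
// bounding box, assuming the base is a square seen in true
// isometric projection.
public class Footprint {
    // Side length of the square base: W / sqrt(3).
    public static double baseSide(double spriteWidth) {
        return spriteWidth / Math.sqrt(3.0);
    }

    // Vertical height of the object above the floor:
    // H - (W / sqrt(3)).
    public static double objectHeight(double spriteWidth, double spriteHeight) {
        return spriteHeight - baseSide(spriteWidth);
    }
}
```

For collision purposes you would then test against the rhombus of side baseSide, ignoring the part of the image that is only vertical extent.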

3D to 2D projection

I'm trying to write a game in Java (Processing, actually) with some cool 3D effects.
I have a choice of two 3D renderers, but neither has the quality or flexibility of the default renderer. I was thinking that if I could get a function to project a 3D point onto the 2D screen, I could do the 3D math myself and still draw with the default renderer.
So say I have a set of coordinates (x, y, z) floating in 3D space. How would I get where on the 2D screen that point should be drawn (perspective)?
Just to clarify, I need only the bare minimum (not taking the position of the camera into account, I can get that effect just by offsetting the points) - I'm not re-writing OpenGL.
And yes, I see that there are many other questions on this - but none of them seem to really have a definitive answer.
Here is your bare minimum.
column = X*focal/Z + width/2
row = -Y*focal/Z + height/2
The notation is:
X, Y, Z are 3D coordinates in distance units such as mm or meters;
focal is the focal length of the camera in pixels (focal = 500 is a reasonable choice for VGA resolution, since it gives a field of view of about 60 degrees; if you have a larger image, scale the focal length proportionally); note that physically focal ≈ 1 cm << Z, which simplifies the formulas presented in the previous answer.
height and width are the dimensions of the image or sensor in pixels;
row, column are image pixel coordinates (note: row starts at the top and goes down, while Y goes up). This is a standard pair of coordinate systems, centered on the camera center (for X, Y, Z) and on the upper-left image corner (for row, column).
Indeed, you don't need OpenGL, since these formulas are easy to implement. But there will be some side effects: whenever your object has several surfaces, they won't display correctly, since you have no way to simulate occlusion. You can address this by adding a depth buffer, a simple 2D array of float Z values that keeps track of which pixels are closer and which are further; if there is an attempt to write more than once to the same projected location, the closer pixel always wins. Good luck.
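The column/row formulas above are a few lines of Java. This sketch just implements them as given (names are mine); the depth-buffer logic is left out:

```java
// Project a 3D point in camera coordinates (X right, Y up, Z forward)
// onto image pixels, using:
//   column =  X * focal / Z + width  / 2
//   row    = -Y * focal / Z + height / 2
public class PinholeProjection {
    public static int[] project(double X, double Y, double Z,
                                double focal, int width, int height) {
        int column = (int) Math.round(X * focal / Z + width / 2.0);
        int row    = (int) Math.round(-Y * focal / Z + height / 2.0);
        return new int[] { column, row };
    }
}
```

A point on the optical axis (X = 0, Y = 0) lands at the image center, and points with positive Y project above it, matching the sign flip on the row.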
Look into the pinhole camera model:
http://en.wikipedia.org/wiki/Pinhole_camera_model
ProjectedX = WorldX * D / (D + WorldZ)
ProjectedY = WorldY * D / (D + WorldZ)
where D is the distance between the projection plane and eye
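This variant of the formula, with the eye at distance D behind the projection plane, is equally direct to implement (a sketch; names are mine):

```java
// Perspective projection with the eye a distance D behind the
// projection plane: Projected = World * D / (D + WorldZ).
// A point with WorldZ = 0 lies on the plane and is unchanged.
public class EyeProjection {
    public static double projectX(double worldX, double worldZ, double D) {
        return worldX * D / (D + worldZ);
    }

    public static double projectY(double worldY, double worldZ, double D) {
        return worldY * D / (D + worldZ);
    }
}
```

Points farther from the plane (larger WorldZ) shrink toward the center, which is the expected perspective foreshortening.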
