Java LWJGL: Rendering a pistol model at the right side of the screen

Hi everyone, I'm working on weapon rendering and I got stuck at the part where I have to calculate gun.y and gun.rot.x. The rotation on the y axis and the calculation of the gun's x and z work fine, but the question is how to get gun.rot.x and gun.y. My calculation of the gun's x and z looks like this:
float offsetX = (float) Math.sin(Math.toRadians(camera.getRotation().y + 25));
float offsetZ = (float) Math.cos(Math.toRadians(camera.getRotation().y + 25));
gun.x = camera.x + offsetX;
gun.z = camera.z - offsetZ;
The Y rotation of the gun is really simple:
gun.getRotation().y = 360 - camera.getRotation().y;
I tried to calculate gun.y with code like this:
float offsetY = (float) Math.sin(Math.toRadians(camera.getTransform().getRotation().x + 25));
gun.y = camera.y - offsetY;
But it seems to not work correctly.

What you are trying to do is render a viewmodel (the gaming term). This is generally done by having another camera parented to the player's camera. In other words, you use a separate camera to render the model so that it stays close to the face (and also to avoid the gun clipping through walls, for example), and then, depending on your implementation, add your viewmodel camera's transformation to the main camera's transformation.
If you are using the fixed-function pipeline (glMatrixMode(), glTranslatef() and so on), all you have to do is apply the transformation (call glTranslatef() and glRotatef()) without resetting to the identity matrix (glLoadIdentity()). For example:
{ /* your rendering code */
glLoadIdentity();
camera.applyTransformation();
render_scene();
glPushMatrix(); // ensures that the transformation isn't
// directly applied to the matrix, should
// you want to render more things with the
// main camera after the view model
viewmodelCamera.applyTransformation();
render_viewmodel();
glPopMatrix(); // complementary call for glPushMatrix()
// marks the end of this matrix operation
// on the matrix stack
possibly_render_more_things(); // if you wish
} /* end of rendering code */
If you are using matrices for your camera (which you should if you intend to properly use modern OpenGL), all you have to do is build the gun's MVP (Model View Projection) matrix by combining (multiplying) your base camera's matrices with your viewmodel camera's transform, and pass the result to your shader for the gun rendering.
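A rough sketch of that, assuming LWJGL 2's org.lwjgl.util.vector.Matrix4f and hypothetical getProjectionMatrix()/getTransformMatrix() accessors on the camera objects (those method names are not from the question):
// The viewmodel transform is expressed relative to the main camera, so the main
// camera's view matrix is deliberately left out of the gun's matrix.
Matrix4f gunMVP = new Matrix4f(); // starts out as identity
Matrix4f.mul(camera.getProjectionMatrix(),         // shared projection
             viewmodelCamera.getTransformMatrix(), // gun offset/rotation in camera space
             gunMVP);                              // result stored here
// upload gunMVP as the MVP uniform for the gun's draw call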
Hope this helped!
EDIT: Just thought I'd mention, the second camera basically means working in model space for your gun model: you would take the values you are currently storing in gun.x etc. (also, don't use public variables) and make those the viewmodel camera's transformation.

Related

LibGDX: What's the fastest way to render scalable-vector-based shapes?

In my game (created using LibGDX) I have a game world filled with a lot of circles that change their size continuously. Because there are so many circles I want to maximize their rendering performance. I've heard of the ShapeRenderer, but it seems it is not the best in terms of performance. A Pixmap is also no solution because my circles should be vector-based.
Is there another, faster solution? And is the ShapeRenderer really that slow?
PS: I'm already using chunks to reduce the render time.
For the ShapeRenderer (circle in particular), if we look at the method, the radius does not affect performance; the segment count is where the work is. And this is most likely what is hurting you: as you scale up in size, you increase the segments for detail.
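For reference, circle() also has an overload that takes the segment count explicitly, so you can cap the detail yourself instead of letting it grow with the radius (here shapeRenderer is just an existing ShapeRenderer, and x, y, radius are your circle's values):
// When segments is omitted, libGDX derives it from the radius, so bigger circles cost more vertices.
shapeRenderer.circle(x, y, radius, 24); // a fixed segment count keeps the per-circle work constant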
I am not sure there are OpenGL-native vector graphics either... I think that to reach the graphics card, everything ultimately has to become vertices and polygons (if you are filling). So actually, I think the Pixmap solution is the one you might be looking for: you compute the segments and the polygons to draw once (at the highest resolution you need).
With the Pixmap you should be able to do this in a way that is as performant as any other rendering of a Texture whose size you change using the scaling variables (which should be as performant as not changing the scale). As you can see from the circle draw method that the ShapeRenderer uses, the circle is still really just describing a polygon (you are just computing its geometry every time).
If you want to give the Pixmap option a go, here is some code to get you bootstrapped.
Here is a Kotlin function for building a PolygonSprite. You will have to do the maths for plotting the vertices of your circle, but you can probably use the circle draw method to get an idea of how to do that. If you compute your geometry for a radius of 1, you can then just use your x/y scale to set the radius to whatever size you want.
fun polygonSprite(points: Array<Vector2>): PolygonSprite {
    // 1x1 white texture so the polygon can be tinted via the sprite's color
    val pix = Pixmap(1, 1, Pixmap.Format.RGBA8888)
    pix.setColor(0xFFFFFFFF.toInt())
    pix.fill()
    val textureSolid = Texture(pix)
    // flatten the points into the x,y float array that PolygonRegion expects
    val vertices = FloatArray(points.size * 2)
    for (i in points.indices) {
        val point = points[i]
        val offset = i * 2
        vertices[offset] = point.x
        vertices[offset + 1] = point.y
    }
    // triangulate only after the vertex array has been filled
    val triangleIndices = EarClippingTriangulator().computeTriangles(vertices)
    val polyReg = PolygonRegion(TextureRegion(textureSolid),
            vertices, triangleIndices.toArray())
    return PolygonSprite(polyReg)
}
And here is some rendering code. It takes into account relative positioning of the shape from the parent Body and some other stuff:
fun render(camera: OrthographicCamera) {
val parentRotation = (me().physicsRoot.rotationR() * MathUtils.radDeg)
val parentTransform = me().physicsRoot.transform
val myPosition = vec2(offsetX, offsetY)
parentTransform.mul(myPosition)
poly.color = color.get()
poly.setOrigin(0f, 0f)
poly.setPosition(myPosition.x, myPosition.y)
poly.rotation = parentRotation + rotationD
poly.setScale(scaleX, scaleY)
poly.draw(JJ.B.renderWorld.polyBatch)
recycle(myPosition)
}
Also, don't create a new one of these for every circle; try to reuse them.
PS: Another option is to make a circle shader :D

How to make LibGDX Actions moveTo() animate from one point to another in a curved line?

I am working on a project in LibGDX, and I am using Scene2D actors for some of my sprites. In this regard, I have a sprite, which is spawning somewhere on the screen and needs to move to another position on the screen. To do this I am using the moveTo(xPos, yPos, duration, interpolation) method in the Actions, to make the move animation.
However, when I use this approach, the actor moves like I told it to, but only in a straight line from point A to B. I have tried several Interpolation options, like the Circle interpolation, but they seem only to affect the speed along the line.
So now my question: how do I make my animation follow a smooth curved line (see picture) from A to B?
I am currently using this code to make the Actions animation:
adultCustomerPointActor.addAction(Actions.sequence(
Actions.moveTo(300, 200, 2f, Interpolation.circle)
));
Thanks in advance for your help :)
It's a geometry problem. Using vectors, find the point halfway between the two points:
vec1.set(bx, by).sub(ax, ay).scl(0.5f).add(ax, ay);
Get another vector that is rotated 90 or 270 degrees from the vector between the points:
vec2.set(bx, by).sub(ax, ay).rotate90(1).add(vec1);
This vec2 can be scaled to adjust how extreme the curvature of the arc is. If you leave it alone, you'll have a quarter circle. You can also scale it negatively to reverse the curvature.
Adding that second vector to the midpoint (the .add(vec1) above) gives the center point of your arc, which we can call point C. Now you need a vector that points from point C to point A; you will rotate this vector until it reaches point B, so you need the angle between CA and CB:
vec1.set(bx, by).sub(vec2); // CB
vec3.set(ax, ay).sub(vec2); // CA
float angle = vec1.angle(vec3);
So here's a very simplistic class that implements this. It doesn't account yet for deciding if you want the arc to go up or down and if you want to scale how extreme it looks. You could add those as additional parameters with getters/setters. I haven't tested it, so it may need some debugging.
public class ArcToAction extends MoveToAction {
    private float angle;
    private final Vector2 vec1 = new Vector2(), vec2 = new Vector2(), vec3 = new Vector2();

    @Override
    protected void begin () {
        super.begin();
        float ax = target.getX(getAlignment()); // have to recalculate these because private in parent
        float ay = target.getY(getAlignment());
        vec1.set(getX(), getY()).sub(ax, ay); // AB
        vec2.set(vec1).rotate90(1);
        vec1.scl(0.5f).add(ax, ay); // midpoint of AB
        vec2.add(vec1); // arc center C
        vec1.set(getX(), getY()).sub(vec2); // CB
        vec3.set(ax, ay).sub(vec2); // CA
        angle = vec1.angle(vec3);
    }

    @Override
    protected void update (float percent) {
        if (percent >= 1) {
            target.setPosition(getX(), getY(), getAlignment());
            return;
        }
        vec1.set(vec3).rotate(percent * angle).add(vec2); // rotate CA around C, then move back to world space
        target.setPosition(vec1.x, vec1.y, getAlignment());
    }
}
If you want to support automatic pooling, you can add a method like this:
static public ArcToAction arcTo (float x, float y, float duration, Interpolation interpolation) {
    ArcToAction action = Actions.action(ArcToAction.class);
    action.setPosition(x, y);
    action.setDuration(duration);
    action.setInterpolation(interpolation);
    return action;
}
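Usage would then mirror the moveTo() call from the question, assuming the static arcTo() helper is added to the ArcToAction class itself:
adultCustomerPointActor.addAction(Actions.sequence(
        ArcToAction.arcTo(300, 200, 2f, Interpolation.circle)
));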

OpenGL Shader - Rotating a model around its origin (2D World)

So I created a vertex shader that takes in an angle and calculates the rotation. There is a problem, though: the model rotates around the world center and not around its own axis/origin.
Side note: This is 2D rotation.
How do I make the model rotate through its own axis?
Here is my current vertex shader:
#version 150 core
in vec4 in_Position;
in vec4 in_Color;
in vec2 in_TextureCoord;
out vec4 pass_Color;
out vec2 pass_TextureCoord;
void main(void) {
gl_Position = in_Position;
pass_Color = in_Color;
pass_TextureCoord = in_TextureCoord;
}
Rotating CPU side:
Vector3f center = new Vector3f(phyxBody.getPosition().x,phyxBody.getPosition().y,0);
Matrix4f pos = new Matrix4f();
pos.m00 = (phyxBody.getPosition().x)-(getWidth()/30f/2f);
pos.m01 = (phyxBody.getPosition().y)+(getHeight()/30f/2f);
pos.m10 = (phyxBody.getPosition().x)-(getWidth()/30f/2f);
pos.m11 = (phyxBody.getPosition().y)-(getHeight()/30f/2f);
pos.m20 = (phyxBody.getPosition().x)+(getWidth()/30f/2f);
pos.m21 = (phyxBody.getPosition().y)-(getHeight()/30f/2f);
pos.m30 = (phyxBody.getPosition().x)+(getWidth()/30f/2f);
pos.m31 = (phyxBody.getPosition().y)+(getHeight()/30f/2f);
pos.rotate(phyxBody.getAngle(),center);
The result is a weird rotated stretch of the object. Do you know why? Don't worry about the /30f part.
phyxBody is an instance of the class Body from the JBox2D library.
phyxBody.getAngle() is in radians.
Matrix4f is a class from the LWJGL library.
EDIT:
Vector3f center = new Vector3f(0,0,0);
Matrix4f pos = new Matrix4f();
pos.m00 = -(getWidth()/30f/2f);
pos.m01 = +(getHeight()/30f/2f);
pos.m10 = -(getWidth()/30f/2f);
pos.m11 = -(getHeight()/30f/2f);
pos.m20 = +(getWidth()/30f/2f);
pos.m21 = -(getHeight()/30f/2f);
pos.m30 = +(getWidth()/30f/2f);
pos.m31 = +(getHeight()/30f/2f);
pos.rotate(phyxBody.getAngle(),center);
pos.m00 += phyxBody.getPosition().x;
pos.m01 += phyxBody.getPosition().y;
pos.m10 += phyxBody.getPosition().x;
pos.m11 += phyxBody.getPosition().y;
pos.m20 += phyxBody.getPosition().x;
pos.m21 += phyxBody.getPosition().y;
pos.m30 += phyxBody.getPosition().x;
pos.m31 += phyxBody.getPosition().y;
This is currently the transformation code, yet the rotation still doesn't work correctly.
My try at the rotate method: (What am I doing wrong?)
if (phyxBody.getAngle() != 0.0) {
pos.m00 *= Math.cos(Math.toDegrees(phyxBody.getAngle()));
pos.m01 *= Math.sin(Math.toDegrees(phyxBody.getAngle()));
pos.m10 *= -Math.sin(Math.toDegrees(phyxBody.getAngle()));
pos.m11 *= Math.cos(Math.toDegrees(phyxBody.getAngle()));
pos.m20 *= Math.cos(Math.toDegrees(phyxBody.getAngle()));
pos.m21 *= Math.sin(Math.toDegrees(phyxBody.getAngle()));
pos.m30 *= -Math.sin(Math.toDegrees(phyxBody.getAngle()));
pos.m31 *= Math.cos(Math.toDegrees(phyxBody.getAngle()));
}
The order is scaling * rotation * translation - see this question. I'm guessing you've already translated your coordinates outside of your shader. You'll have to rotate first, then translate. It's good to know the linear algebra behind what you're doing so you know why things work or don't work.
The typical way to do this is to pass a pre-computed ModelView matrix that has already taken care of scaling/rotation/translation. If you've already translated your vertices, you can't fix the problem in your shader without needlessly undoing it and then redoing it after. Send in your vertices untranslated and accompany them with data, like your angle, to translate them. Or you can translate and rotate both beforehand. It depends on what you want to do.
Bottom line: You must rotate before you translate.
Here is the typical way you do vertex transformations:
OpenGL side:
Calculate ModelView matrix: Scale * Rotation * Translation
Pass to shader as a uniform matrix
GLSL side:
Multiply vertices by ModelView matrix in vertex shader
Send to gl_Position
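As a rough sketch of the CPU side, assuming LWJGL 2's org.lwjgl.util.vector classes (Matrix4f, Vector2f, Vector3f), org.lwjgl.BufferUtils, org.lwjgl.opengl.GL20, java.nio.FloatBuffer, and a modelViewLocation uniform location looked up earlier with glGetUniformLocation (those names are illustrative):
// ModelView = Translation * Rotation (column-vector convention): each vertex is
// rotated around the model's own origin first, then moved to the body's position.
Matrix4f modelView = new Matrix4f(); // starts out as identity
modelView.translate(new Vector2f(phyxBody.getPosition().x, phyxBody.getPosition().y));
modelView.rotate(phyxBody.getAngle(), new Vector3f(0f, 0f, 1f)); // 2D: rotate around Z, angle in radians
// upload it as a uniform for the vertex shader to use
FloatBuffer buf = BufferUtils.createFloatBuffer(16);
modelView.store(buf); // column-major, which is what OpenGL expects
buf.flip();
GL20.glUniformMatrix4(modelViewLocation, false, buf);
The vertex shader would then compute gl_Position = modelView * in_Position (with a projection matrix applied as well in a real renderer) instead of passing in_Position through unchanged.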
Response to Edit:
I'm inclined to think your implementation needs to be completely redone. You have points that belong to a model. These points are all oriented around the origin. For example, if you had a car, the points would form a mesh of triangles.
If you simply do not translate these points and then rotate them, the car will rotate around its center. If you translate afterwards, the car will translate in its rotated fashion to the place you've specified. The key here is that the origin of your model lines up with the origin of rotation so you end up rotating the model "around itself."
If you instead translate to the new position and then rotate, your model will rotate as if it were orbiting the origin. This is probably not what you want.
If you're modifying the actual vertex positions directly instead of using transformation matrices, you're doing it wrong. Even if you just have a square, leave the coordinates at (-1,-1) (-1,1) (1,1) (1,-1) (notice how the center is at (0,0)) and translate them to where you want them to be.
You don't have to re-implement math functionality and probably shouldn't (unless your goal is explicitly to do so). GLM is a popular math library that does everything you want and it's tailored specifically for OpenGL.
Final Edit
Here is a beautiful work of art I drew for you demonstrating what you need to do.
Notice how in the bottom right the model has been swept out around the world origin about 45 degrees. If we went another 45, it would have its bottom edge parallel to the X-axis and intersecting the positive Y-axis with the blue vertex in the bottom left and purple vertex in the bottom right.
You should probably review how to work with vertices, matrices, and shaders. Vertices should be specified once, matrices should be updated every time you change the scale, rotation, or position of the object, and shaders should multiply each vertex in the model by a uniform (constant) matrix.
Your sprite lacks sufficient information to be able to do what you're trying to do. In order to compute a rotation about a point, you need to know what that point is. And you don't.
So if you want to rotate about an arbitrary location, you will need to pass that location to your shader. Once there, you subtract it from your positions, rotate the position, and add it back in. However, that would require a lot of work, which is why you should just compute a matrix on the CPU to do all of that. Your shader would be given this matrix and perform the transform itself.
Of course, that itself requires something else, because you keep updating the position of these objects by offsetting the vertices on the CPU. This is not good; you should be keeping these objects relative to their origin in the buffer. You should then transform them to their world-position as part of their matrix.
So your shader should be taking object-relative coordinates, and it should be passed a matrix that does a rotation followed by a translation to their world-space position. Actually, scratch that; the matrix should transform to their final camera-space position (world-space is always a bad idea).

How to have a "Camera" only show a portion of a loaded area

I'm having a little problem with figuring something out (Obviously).
I'm creating a 2D Top-down mmorpg, and in this game I wish the player to move around a tiled map similar to the way the game Pokemon worked, if anyone has ever played it.
If you have not, picture this: I need to load various areas, constructing them from tiles which contain an image, a location (x, y), and objects (players, items), but the player can only see a portion of the area at a time, namely a 20 by 15 tile window, while the loaded area itself can be hundreds of tiles tall/wide. I want the "camera" to follow the player, keeping him in the center, unless the player reaches the edge of the loaded area.
I don't need code necessarily, just a design plan. I have no idea how to go about this kind of thing.
I was thinking of possibly splitting up the entire loaded area into 10x10 tile pieces, called "Blocks" and loading them, but I'm still not sure how to load pieces off screen and only show them when the player is in range.
The picture should describe it:
Any ideas?
My solution:
The way I solved this problem was through the wonderful world of JScrollPanes and JPanels.
I added a 3x3 block of JPanels inside of a JScrollPane, added a couple scrolling and "goto" methods for centering/moving the JScrollPane around, and voila, I had my camera.
While the answer I chose is a little more generic for people wanting to do 2D camera work, the way I did it actually helped me visualize what I was doing a little better, since I had a physical "camera" (the JScrollPane) to move around my "world" (the 3x3 grid of JPanels).
Just thought I would post this here in case anyone was googling for an answer and this came up. :)
For a 2D game, it's quite easy to figure out which tiles fall within a view rectangle, if the tiles are rectangular. Basically, picture a "viewport" rectangle inside the larger world rectangle. By dividing the view offsets by the tile sizes you can easily determine the starting tile, and then just render the tiles that fit inside the view.
First off, you're working in three coordinate systems: view, world, and map. The view coordinates are essentially mouse offsets from the upper left corner of the view. World coordinates are pixel distances from the upper left corner of tile 0, 0. I'm assuming your world starts in the upper left corner. And map coordinates are x, y indices into the map array.
You'll need to convert between these in order to do "fancy" things like scrolling, figuring out which tile is under the mouse, and drawing world objects at the correct coordinates in the view. So, you'll need some functions to convert between these systems:
// I haven't touched Java in years, but JavaScript should be easy enough to convey the point
var TileWidth = 40,
TileHeight = 40;
function View() {
this.viewOrigin = [0, 0]; // scroll offset
this.viewSize = [600, 400];
this.map = null;
this.worldSize = [0, 0];
}
View.prototype.viewToWorld = function(v, w) {
w[0] = v[0] + this.viewOrigin[0];
w[1] = v[1] + this.viewOrigin[1];
};
View.prototype.worldToMap = function(w, m) {
m[0] = Math.floor(w[0] / TileWidth);
m[1] = Math.floor(w[1] / TileHeight);
}
View.prototype.mapToWorld = function(m, w) {
w[0] = m[0] * TileWidth;
w[1] = m[1] * TileHeight;
};
View.prototype.worldToView = function(w, v) {
v[0] = w[0] - this.viewOrigin[0];
v[1] = w[1] - this.viewOrigin[1];
}
Armed with these functions we can now render the visible portion of the map...
View.prototype.draw = function() {
var mapStartPos = [0, 0],
worldStartPos = [0, 0],
viewStartPos = [0, 0],
mx, my, // map coordinates of current tile
vx, vy; // view coordinates of current tile
this.worldToMap(this.viewOrigin, mapStartPos); // which tile is closest to the view origin?
this.mapToWorld(mapStartPos, worldStartPos); // round world position to tile corner...
this.worldToView(worldStartPos, viewStartPos); // ... and then convert to view coordinates. this allows per-pixel scrolling
mx = mapStartPos[0];
my = mapStartPos[1];
for (vy = viewStartPos[1]; vy < this.viewSize[1]; vy += TileHeight) {
for (vx = viewStartPos[0]; vx < this.viewSize[0]; vx += TileWidth) {
var tile = this.map.get(mx++, my);
this.drawTile(tile, vx, vy);
}
mx = mapStartPos[0];
my++;
}
};
That should work. I didn't have time to put together a working demo webpage, but I hope you get the idea.
By changing viewOrigin you can scroll around. To get the world and map coordinates under the mouse, use the viewToWorld and worldToMap functions.
If you're planning on an isometric view (e.g. Diablo), then things get considerably trickier.
Good luck!
The way I would do such a thing is to keep a variable called cameraPosition or something. Then, in the draw method of all objects, use cameraPosition to offset the locations of everything.
For example: A rock is at [100,50], while the camera is at [75,75]. This means the rock should be drawn at [25,-25] (the result of [100,50] - [75,75]).
You might have to tweak this a bit to make it work (for example maybe you have to compensate for window size). Note that you should also do a bit of culling - if something wants to be drawn at [2460,-830], you probably don't want to bother drawing it.
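A minimal Java sketch of that idea, using plain java.awt drawing (the GameObject type and its accessors are made up for illustration):
// Draw everything relative to the camera; skip objects that land outside the window.
void drawWorld(Graphics g, List<GameObject> objects,
               int cameraX, int cameraY, int windowWidth, int windowHeight) {
    for (GameObject obj : objects) {
        int screenX = obj.getX() - cameraX; // world position minus camera position
        int screenY = obj.getY() - cameraY;
        // simple culling: only draw what actually intersects the window
        if (screenX + obj.getWidth() < 0 || screenX > windowWidth
                || screenY + obj.getHeight() < 0 || screenY > windowHeight) {
            continue;
        }
        g.drawImage(obj.getImage(), screenX, screenY, null);
    }
}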
One approach is along the lines of double buffering (Java Double Buffering) and blitting (http://download.oracle.com/javase/tutorial/extra/fullscreen/doublebuf.html). There is even a design pattern associated with it (http://www.javalobby.org/forums/thread.jspa?threadID=16867&tstart=0).

Changing the Coordinate System in LibGDX (Java)

LibGDX has a coordinate system where (0,0) is at the bottom-left. (like this image: http://i.stack.imgur.com/jVrJ0.png)
This has me beating my head against a wall, mainly because I'm porting a game I had already made with the usual coordinate system (where 0,0 is in the Top Left Corner).
My question: Is there any simple way of changing this coordinate system?
If you use a Camera (which you should), changing the coordinate system is pretty simple:
camera = new OrthographicCamera(Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
camera.setToOrtho(true, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
If you use TextureRegions and/or a TextureAtlas, all you need to do in addition to that is call region.flip(false, true).
The reasons we use y-up by default (which you can easily change as illustrated above) are as follows:
your simulation code will most likely use a standard Euclidean coordinate system with y-up
if you go 3D you have y-up
The default coordinate system is a right handed one in OpenGL, with y-up. You can of course easily change that with some matrix magic.
The only two places in libgdx where we use y-down are:
Pixmap coordinates (upper left origin, y-down)
Touch event coordinates, which are given in window coordinates (upper left origin, y-down)
Again, you can easily change the used coordinate system to whatever you want using either Camera or a tiny bit of matrix math.
Just to expand a little on what badlogic said above: if you are using a TextureAtlas (with TextureRegions), you need to flip the regions in addition to the camera work. You can use this code right after loading your atlas:
String textureFile = "data/textures.txt";
atlas = new TextureAtlas(Gdx.files.internal(textureFile), Gdx.files.internal("data"));
// Let's flip all the regions. Required for y=0 is TOP
Array<AtlasRegion> tr = atlas.getRegions();
for (int i = 0; i < tr.size; i++) {
TextureRegion t = tr.get(i);
t.flip(false, true);
}
If you want to hide the transformation and not think about it after setting it up once, you can make a class that inherits all of the functionality you need but first transforms the coordinates before passing them to its parent class's methods. Unfortunately, this would take a lot of time.
You could alternatively make a method that does the simple y' = height - y transformation on the whole Coordinate object (or whatever it is you're using), and call it once before each operation.
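For example, a tiny helper along those lines (the name flipY is made up; for positioned sprites you would also subtract the sprite's height, as a later answer does):
// Convert a y coordinate measured from the top of the screen into libGDX's y-up system.
static float flipY(float topY) {
    return Gdx.graphics.getHeight() - topY;
}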
Interesting graphics library, I would say. I found this assessment from the link below:
Another issue was that different coordinate systems were used in different parts of Libgdx. Sometimes the origin of the axes was in the bottom left corner with the y-axis pointing upwards and sometimes in the top left corner of the sprite pointing downwards. When drawing Meshes the origin was even in the center of the screen. This caused quite a bit of confusion and extra work to get everything in the correct place on the screen.
http://www.csc.kth.se/utbildning/kandidatexjobb/datateknik/2011/rapport/ahmed_rakiv_OCH_aule_jonas_K11072.pdf
I just made a class that extends SpriteBatch and overrides certain methods, adding y = Gdx.graphics.getHeight() - y - height. Simple but effective.
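A minimal sketch of that approach (the class name is made up, and only one draw() overload is shown; every overload you actually use would need the same change):
public class YDownSpriteBatch extends SpriteBatch {
    @Override
    public void draw (Texture texture, float x, float y, float width, float height) {
        // treat the incoming y as measured from the top of the screen
        super.draw(texture, x, Gdx.graphics.getHeight() - y - height, width, height);
    }
}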
I was able to get textures and fonts rendering correctly using the suggested flipped coordinate system via OrthographicCamera. Here's what I did:
private SpriteBatch batch;
private BitmapFont font;
private OrthographicCamera cam;
private Texture tex;
@Override
public void create () {
batch = new SpriteBatch();
font = new BitmapFont(true);
font.setColor(Color.WHITE);
cam = new OrthographicCamera(Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
cam.setToOrtho(true, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
tex = new Texture("badlogic.jpg");
}
@Override
public void dispose() {
batch.dispose();
font.dispose();
tex.dispose();
}
@Override
public void render () {
cam.update();
batch.setProjectionMatrix(cam.combined);
Gdx.gl.glClearColor(0, 0, 0, 1);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
batch.begin();
font.draw(batch, "Test", 50, 50);
batch.draw(tex, 100, 100, tex.getWidth(), tex.getHeight(), 0, 0, tex.getWidth(), tex.getHeight(), false, true);
batch.end();
}
Important things to notice are:
In the BitmapFont constructor, the boolean flips the font
For batch.draw() you need to use all those parameters because you need the boolean flipY at the end to flip the texture (I may extend SpriteBatch or make a utility method to avoid passing so many parameters all the time).
Notice batch.setProjectionMatrix(cam.combined); in render()
Now we will see if I am back here later tonight doing edits to fix any other issues or discoveries with doing all this.
