I'm working on a 2D engine. It already works quite well, but I keep getting pixel errors.
For example, my window is 960x540 pixels and I draw a line from (0, 0) to (959, 0). I would expect every pixel on scan-line 0 to be set to a color, but no: the right-most pixel is not drawn. The same problem occurs when I draw vertically down to pixel 539. I really need to draw to (960, 0) or (0, 540) to have it drawn.
As I was born in the pixel era, I am convinced that this is not the correct result. When my screen was 320x200 pixels, I could draw from 0 to 319 and from 0 to 199, and my screen would be full. Now I end up with a screen whose right/bottom pixel is not drawn.
This can be due to different things:
where I expect the OpenGL line primitive to be drawn from one pixel to another pixel inclusively, is that last pixel actually exclusive? Is that it?
my projection matrix is incorrect?
I am under a false assumption that when I have a backbuffer of 960x540, it actually has one pixel more?
Something else?
Can someone please help me? I have been looking into this problem for a long time now, and every time I thought it was OK, I saw after a while that it actually wasn't.
Here is some of my code; I tried to strip it down as much as possible. When I call my line function, 0.375 is added to each coordinate to make it correct on both ATI and NVIDIA adapters.
int width = resX();
int height = resY();
for (int i = 0; i < height; i += 2)
rm->line(0, i, width - 1, i, vec4f(1, 0, 0, 1));
for (int i = 1; i < height; i += 2)
rm->line(0, i, width - 1, i, vec4f(0, 1, 0, 1));
// when I do this, one pixel to the right remains undrawn
void rendermachine::line(int x1, int y1, int x2, int y2, const vec4f &color)
{
... some code to decide what std::vector the coordinates should be pushed into
// m_z is a z-coordinate, I use z-buffering to preserve correct drawing orders
// vec2f(0, 0) is a texture-coordinate, the line is drawn without texturing
target->push_back(vertex(vec3f((float)x1 + 0.375f, (float)y1 + 0.375f, m_z), color, vec2f(0, 0)));
target->push_back(vertex(vec3f((float)x2 + 0.375f, (float)y2 + 0.375f, m_z), color, vec2f(0, 0)));
}
void rendermachine::update(...)
{
... render target object is queried for width and height, in my test it is just the back buffer so the window client resolution is returned
mat4f mP;
mP.setOrthographic(0, (float)width, (float)height, 0, 0, 8000000);
... all vertices are copied to video memory
... drawing
if (there are lines to draw)
glDrawArrays(GL_LINES, (int)offset, (int)lines.size());
...
}
// And the (very simple) shader to draw these lines
// Vertex shader
#version 120
attribute vec3 aVertexPosition;
attribute vec4 aVertexColor;
uniform mat4 mP;
varying vec4 vColor;
void main(void) {
gl_Position = mP * vec4(aVertexPosition, 1.0);
vColor = aVertexColor;
}
// Fragment shader
#version 120
#ifdef GL_ES
precision highp float;
#endif
varying vec4 vColor;
void main(void) {
gl_FragColor = vColor;
}
In OpenGL, lines are rasterized using the "Diamond Exit" rule. This is almost the same as saying that the end coordinate is exclusive, but not quite...
This is what the OpenGL spec has to say:
http://www.opengl.org/documentation/specs/version1.1/glspec1.1/node47.html
Also have a look at the OpenGL FAQ, http://www.opengl.org/archives/resources/faq/technical/rasterization.htm, item "14.090 How do I obtain exact pixelization of lines?". It says "The OpenGL specification allows for a wide range of line rendering hardware, so exact pixelization may not be possible at all."
Many will argue that you should not use lines in OpenGL at all. Their behaviour is based on how ancient SGI hardware worked, not on what makes sense. (And lines with widths >1 are nearly impossible to use in a way that looks good!)
Note that OpenGL coordinate space has no notion of integers: everything is a float, and the "centre" of an OpenGL pixel is really at (0.5, 0.5), not at its top-left corner. Therefore, if you want a 1px wide line from 0,0 to 10,10 inclusive, you really have to draw a line from 0.5,0.5 to 10.5,10.5.
This becomes especially apparent if you turn on anti-aliasing: if you try to draw from 50,0 to 50,100 you may see a blurry 2px wide line, because the line falls in between two pixel columns.
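To make that concrete, here is a minimal sketch of the endpoint adjustment in Java-style code (the helper name and the extend-by-one-pixel trick are my assumptions, not something taken from the spec): sample at pixel centers, and push the second endpoint one pixel further along the major axis in case the rasterizer treats the last pixel as exclusive.
// Sketch: convert inclusive integer pixel coordinates into the float
// endpoints to hand to the line primitive.
public static float[] lineEndpoints(int x1, int y1, int x2, int y2) {
float fx1 = x1 + 0.5f, fy1 = y1 + 0.5f; // center of the first pixel
float fx2 = x2 + 0.5f, fy2 = y2 + 0.5f; // center of the last pixel
// extend past the last pixel center along the major direction so the
// diamond-exit rule still emits a fragment for (x2, y2)
if (Math.abs(x2 - x1) >= Math.abs(y2 - y1))
fx2 += Integer.signum(x2 - x1);
else
fy2 += Integer.signum(y2 - y1);
return new float[] { fx1, fy1, fx2, fy2 };
}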
Related
In particular, I am using a Processing Java example that makes use of a GLSL shader (it's called InfiniteTiles). The original sketch just moves a tiled image.
I have a uniform variable called time that I set in Java:
tileShader.set("time", millis() / 1000.0);
Now in the fragment shader there is a code section
vec2 pos = gl_FragCoord.xy - vec2(TILES_COUNT_X * time);
vec2 p = (resolution - TILES_COUNT_X * pos) / resolution.x;
vec3 col = texture2D (tileImage, p).xyz;
What I attempted to do in the java code is set the time variable such that I might be able to increase and decrease the speed at which the image scrolls.
I wrote this
float t = millis() / 1000.0;
float pctX = map (mouseX, 0, width, 0, 1);
tileShader.set("time", t*pctX);
What happens is that when I move the mouse, the entire image moves rapidly either left or right depending on where I'm moving, as if I'm 'scrubbing' the image. When I stop moving the mouse, it moves at the desired speed.
I would like to avoid this 'scrubbing' effect and have the image's scrolling speed transition smoothly with the mouse movement.
Normally I could accomplish such a thing by just drawing an image in Java and scrolling it, but I think I'm not understanding something fundamental about the way GLSL works to achieve the same effect on the graphics card.
Any help appreciated.
Full processing code from example:
//-------------------------------------------------------------
// Display endless moving background using a tile texture.
// Contributed by martiSteiger
//-------------------------------------------------------------
PImage tileTexture;
PShader tileShader;
void setup() {
size(640, 480, P2D);
textureWrap(REPEAT);
tileTexture = loadImage("penrose.jpg");
loadTileShader();
}
void loadTileShader() {
tileShader = loadShader("scroller.glsl");
tileShader.set("resolution", float(width), float(height));
tileShader.set("tileImage", tileTexture);
}
void draw() {
tileShader.set("time", millis() / 1000.0);
shader(tileShader);
rect(0, 0, width, height);
}
Full Shader code
//---------------------------------------------------------
// Display endless moving background using a tile texture.
// Contributed by martiSteiger
//---------------------------------------------------------
uniform float time;
uniform vec2 resolution;
uniform sampler2D tileImage;
#define TILES_COUNT_X 4.0
void main() {
vec2 pos = gl_FragCoord.xy - vec2(4.0 * time);
vec2 p = (resolution - TILES_COUNT_X * pos) / resolution.x;
vec3 col = texture2D (tileImage, p).xyz;
gl_FragColor = vec4 (col, 1.0);
}
Sigh... it was a bit simpler than I thought. The answer was provided by JeremyDouglass here:
https://forum.processing.org/two/discussion/comment/90488
Solution:
"This problem isn't specific to shaders -- you would have the same problem if you were doing this with img(). You can't do clock math in this way. Multiplying anything by millis() will always create a scaling effect -- which in this case will always create what you call "scrubbing." For example, if you change the multiplier, 10 seconds suddenly becomes 15.
Instead, in order to change the speed at which the clock changes in the future but not to change how far it has advanced up-to-now, keep your own clock variable separate from millis(), and change the step amount (use addition, not multiplication) each draw frame. Now the speed at which the clock advances will change, but the base offset (the last clock time) won't jump around, because the original value isn't being scaled (multiplied)."
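A minimal Processing sketch of that idea (the scrollClock and lastMillis names are mine): keep your own clock, advance it each frame by a mouse-controlled step, and the already-accumulated time is never rescaled.
float scrollClock = 0; // our own clock; only ever stepped forward
int lastMillis = 0;
void draw() {
int now = millis();
float dt = (now - lastMillis) / 1000.0; // seconds since the last frame
lastMillis = now;
float speed = map(mouseX, 0, width, 0, 2); // mouse sets the speed only
scrollClock += speed * dt; // add a step; never multiply the clock itself
tileShader.set("time", scrollClock);
shader(tileShader);
rect(0, 0, width, height);
}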
Despite passing equal (exactly equal) coordinates for 'adjacent' edges, I'm ending up with some strange lines between adjacent elements when scaling my grid of rendered tiles.
My tile grid rendering algorithm accepts scaled tiles, so that I can adjust the grid's visual size to match a chosen window size of the same aspect ratio, among other reasons. It seems to work correctly when scaled to exact integers, and a few non-integer values, but I get some inconsistent results for the others.
Some Screenshots:
The blue lines are the clear color showing through. The chosen texture has no transparent gaps in the tilesheet: unused tiles are magenta and actual transparency is handled by the alpha layer. The neighboring tiles in the sheet have full opacity. Scaling is achieved by setting the scale to a normalized value obtained from a gamepad trigger, between 1f and 2f, so apart from the max/min I don't know what actual scale was applied when each shot was taken.
Attribute updates and entity drawing are synchronized between threads, so none of the values could have been applied mid-draw. This doesn't transfer well through screenshots, but the lines don't flicker when the scale is held at that point, so it logically shouldn't be an issue with drawing between scale assignments (and thread locks prevent this).
Scaled to 1x:
Scaled to A, 1x < Ax < Bx :
Scaled to B, Ax < Bx < Cx :
Scaled to C, Bx < Cx < 2x :
Scaled to 2x:
Projection setup function
For setting up orthographic projection (changes only on screen size changes):
.......
float nw, nh;
nh = Display.getHeight();
nw = Display.getWidth();
GL11.glOrtho(0, nw, nh, 0, 1, -1);
orthocenter.setX(nw/2); //this is a Vector2, floats for X and Y, direct assignment.
orthocenter.setY(nh/2);
.......
For the purposes of the screenshots, nw is 512 and nh is 384 (implicitly cast from int). These never change throughout the example above.
General GL drawing code
After cutting attributes that proved irrelevant (removing them didn't fix the problem):
#Override
public void draw(float xOffset, float yOffset, float width, float height,
int glTex, float texX, float texY, float texWidth, float texHeight) {
GL11.glLoadIdentity();
GL11.glTranslatef(0.375f, 0.375f, 0f); //This is supposed to fix subpixel issues, but makes no difference here
GL11.glTranslatef(xOffset, yOffset, 0f);
if(glTex != lastTexture){
GL11.glBindTexture(GL11.GL_TEXTURE_2D, glTex);
lastTexture = glTex;
}
GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2f(texX,texY + texHeight);
GL11.glVertex2f(-height/2, -width/2);
GL11.glTexCoord2f(texX + texWidth,texY + texHeight);
GL11.glVertex2f(-height/2, width/2);
GL11.glTexCoord2f(texX + texWidth,texY);
GL11.glVertex2f(height/2, width/2);
GL11.glTexCoord2f(texX,texY);
GL11.glVertex2f(height/2, -width/2);
GL11.glEnd();
}
Grid drawing code (dropping the same parameters dropped from 'draw'):
//Externally there is tilesize, which contains tile pixel size, in this case 32x32
public void draw(Engine engine, Vector2 offset, Vector2 scale){
int xp, yp; //x and y position of individual tiles
for(int c = 0; c<width; c++){ //c as in column
xp = (int) (c*tilesize.a*scale.getX()); //set distance from chunk x to column x
for(int r = 0; r<height; r++){ //r as in row
if(tiles[r*width+c] <0) continue; //skip empty tiles ('air')
yp = (int) (r*tilesize.b*scale.getY()); //set distance from chunk y to row y
tileset.getFrame(tiles[r*width+c]).draw( //pull 'tile' frame from set, render.
engine, //drawing context
new Vector2(offset.getX() + xp, offset.getY() + yp), //location of tile
scale //scale of tiles
);
}
}
}
Between the tiles and the platform specific code, vectors' components are retrieved and passed along to the general drawing code as pasted earlier.
My analysis
Mathematically, each position is an exact multiple of the scale*tilesize in either the x or y direction, or both, which is then added to the offset of the grid's location. It is then passed as an offset to the drawing code, which translates that offset with glTranslatef, then draws a tile centered at that location through halving the dimensions then drawing each plus-minus pair.
This should mean that when tile 1 is drawn at, say, the origin, it has an offset of 0. OpenGL is then instructed to draw a quad with the left edge at -halfwidth, right edge at +halfwidth, top edge at -halfheight, and bottom edge at +halfheight. It is then told to draw the neighbor, tile 2, with an offset of one width, so it translates from 0 to that width, then draws the left edge at -halfwidth, which should coordinate-wise be exactly the same as tile 1's right edge. By itself this should work, and it does. When a constant scale is applied, it somehow breaks.
When a scale is applied, it is a constant multiple across all width/height values, and mathematically shouldn't make anything change. However, it does make a difference, for what I think could be one of two reasons:
OpenGL is having issues with subpixel filling, i.e. filling left of a vertex doesn't fill the vertex's containing pixel space, and filling right of that same vertex doesn't either.
I'm running into float accuracy problems, where somehow X+width/2 does not equal X+width - width/2 where width = tilewidth*scale, tilewidth is an integer, and X is a float.
I'm not really sure how to tell which one is the problem, or how to remedy it other than simply avoiding non-integer scale values, which I'd like to be able to support. The only clue I can see is that the pattern of line gaps isn't really consistent (see how it skips tiles in some cases, has only vertical or horizontal gaps but not both, etc.). However, I don't know what this implies.
This looks like it's probably a floating point precision issue. The critical statement in your question is this:
Mathematically, each position is an exact multiple [..]
While that's mathematically true, you're dealing with limited floating point precision. Sequences of operations that should mathematically produce the same result can (and often do) produce slightly different results due to rounding errors during expression evaluation.
Specifically in your case, it looks like you're relying on identities of this form:
i * width + width/2 == (i + 1) * width - width/2
This is mathematically correct, but you can't expect to get exactly the same numbers when evaluating the values with limited floating point precision. Depending on how the small errors end up getting rounded to pixels, it can result in visual artifacts.
The only good way to avoid this is to actually use the same values for coordinates that must be the same, instead of using calculations that merely produce mathematically equal results.
In the case of coordinates on a grid, you could calculate the coordinates for each grid line (tile boundary) once, and then use those values for all draw operations. Say if you have n tiles in the x-direction, you calculate all the x-values as:
x[i] = i * width;
and then when drawing tile i, use x[i] and x[i + 1] as the left and right x-coordinates.
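Applied to the grid code from the question, a hedged sketch (tilesize, scale, offset, width and height are the question's names; the boundary arrays are mine): compute each boundary coordinate once, then draw tile (c, r) between neighboring boundaries so adjacent tiles share bit-identical edges.
// Precompute tile boundary coordinates once per scale/offset change
float[] xs = new float[width + 1]; // width/height = grid size in tiles
float[] ys = new float[height + 1];
for (int c = 0; c <= width; c++)
xs[c] = offset.getX() + c * tilesize.a * scale.getX();
for (int r = 0; r <= height; r++)
ys[r] = offset.getY() + r * tilesize.b * scale.getY();
// Tile (c, r) is then drawn with left/right edges xs[c] and xs[c + 1] and
// top/bottom edges ys[r] and ys[r + 1]; neighbors reuse exactly the same floats.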
So I was trying to make a shader that changes the color of my crystal a little over time, and it all went fine until I noticed that it didn't get darker the further away it was from the light source (default OpenGL lights for now!). So I tried to tone down the color values by the distance from the light, but that didn't work. Later on I discovered (by setting the color to red if the x position of the vertex in world coordinates was greater than a certain value) that the vertex x value was around 0, even though it should be about 87.0.
void main()
{
vec3 vertexPosition = vec3(gl_ModelViewMatrix * vertexPos);
vec3 surfaceNormal = (gl_NormalMatrix * normals).xyz;
vec3 lightDirection = normalize(gl_LightSource[0].position.xyz - vertexPosition);
float diffuseLI = max(0.0, dot(surfaceNormal, lightDirection));
vec4 texture = texture2D(textureSample, gl_TexCoord[0].st);
if (vertexPosition.x > 0.0) gl_FragColor.rgba = vec4(1, 0, 0, 1);
/*And so on....*/
}
As far as I know gl_ModelViewMatrix * gl_Vertex should give the world coordinates of the vertex. Am I just stupid or what?
(I also tried the same if statement with the light position, which was correct!)
What is the most efficient way to do lighting for a tile based engine in Java?
Would it be putting a black background behind the tiles and changing the tiles' alpha?
Or putting a black foreground and changing alpha of that? Or anything else?
This is an example of the kind of lighting I want:
There are many ways to achieve this, so take some time before making your final decision. I will briefly sum up some techniques you could choose to use and provide some code at the end.
Hard Lighting
If you want to create a hard-edge lighting effect (like your example image), some approaches come to mind:
Quick and dirty (as you suggested)
Use a black background
Set the tiles' alpha values according to their darkness value
A problem is that you can neither make a tile brighter than it was before (highlights) nor change the color of the light. Both of these are aspects that usually make lighting in games look good.
A second set of tiles
Use a second set of (black/colored) tiles
Lay these over the main tiles
Set the new tiles' alpha value depending on how strong the new color should be there.
This approach has the same effect as the first one, with the advantage that you may now color the overlay tile in a color other than black, which allows for both colored lights and highlights.
Example:
Even though it is easy, a problem is that this is a very inefficient way to do it (two rendered tiles per tile, constant recoloring, many render operations, etc.).
More Efficient Approaches (Hard and/or Soft Lighting)
When looking at your example, I imagine the light always comes from a specific source tile (character, torch, etc.)
For every type of light (big torch, small torch, character lighting) you
create an image that represents the specific lighting behaviour relative to the source tile (light mask). Maybe something like this for a torch (white being alpha):
For every tile which is a light source, you render this image at the position of the source as an overlay.
To add a bit of light color, you can use e.g. 10% opaque orange instead of full alpha.
Results
Adding soft light
Soft light is no big deal now: just use more detail in the light mask compared to the tiles. By using only 15% alpha in the usually-black region, you can add a low-visibility effect for tiles that are not lit:
You may even easily achieve more complex lighting forms (cones etc.) just by changing the mask image.
Multiple light sources
When combining multiple light sources, this approach leads to a problem:
drawing two masks which intersect each other might cancel each other out:
What we want is for them to add their light instead of subtracting it.
Avoiding the problem:
Invert all light masks (with alpha being dark areas, opaque being light ones)
Render all these light masks into a temporary image which has the same dimensions as the viewport
Invert and render the new image (as if it were the only light mask) over the whole scenery.
This would result in something similar to this:
Code for the mask invert method
Assuming you render all the tiles into a BufferedImage first, I'll provide some guidance code that resembles the last method shown (grayscale support only).
Multiple light masks for e.g. a torch and a player can be combined like this:
public BufferedImage combineMasks(BufferedImage[] images)
{
// create the new image, canvas size is the max. of all image sizes
int w = 0, h = 0;
for (BufferedImage img : images)
{
w = Math.max(w, img.getWidth());
h = Math.max(h, img.getHeight());
}
BufferedImage combined = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
// paint all images, preserving the alpha channels
Graphics g = combined.getGraphics();
for (BufferedImage img : images)
g.drawImage(img, 0, 0, null);
return combined;
}
The final mask is created and applied with this method:
public void applyGrayscaleMaskToAlpha(BufferedImage image, BufferedImage mask)
{
int width = image.getWidth();
int height = image.getHeight();
int[] imagePixels = image.getRGB(0, 0, width, height, null, 0, width);
int[] maskPixels = mask.getRGB(0, 0, width, height, null, 0, width);
for (int i = 0; i < imagePixels.length; i++)
{
int color = imagePixels[i] & 0x00ffffff; // mask out the preexisting alpha
// get the alpha from the mask pixel
// careful: an alpha mask works the other way round, so we subtract it from 255
int alpha = 255 - ((maskPixels[i] >> 24) & 0xff);
imagePixels[i] = color | (alpha << 24);
}
image.setRGB(0, 0, width, height, imagePixels, 0, width);
}
As noted, this is a primitive example. Implementing color blending might be a bit more work.
Raytracing might be the simplest approach.
you can store which tiles have been seen (useful for automapping, for 'remembering your map while being blinded', maybe for the minimap, etc.)
you show only what you see: maybe a monster, a wall, or a hill is blocking your view; then raytracing stops at that point
distant 'glowing objects' or other light sources (torches, lava) can be seen even if your own light source doesn't reach very far
the length of your ray can be used to compute the amount of light (fading light)
maybe you have a special sensor (ESP, gold/food detection) which should find objects that are not in your view? Raytracing might help there as well ^^
How is this done easily?
draw a line from your player to every point on the border of your map (using Bresenham's line algorithm, http://en.wikipedia.org/wiki/Bresenham%27s_line_algorithm)
walk along that line (from your character to the end) until your view is blocked; at that point stop your search (or maybe do one last iteration to see what stopped you)
for each point on your line, set the lighting (maybe 100% for distance 1, 70% for distance 2 and so on) and mark your map tile as visited
maybe you won't walk along the whole map; maybe it's enough to set up your raytrace for a 20x20 view?
NOTE: you really only have to walk along the borders of the viewport; it's NOT required to trace a ray to every point.
I'm adding the line algorithm to simplify your work:
public static ArrayList<Point> getLine(Point start, Point target) {
ArrayList<Point> ret = new ArrayList<Point>();
int x0 = start.x;
int y0 = start.y;
int x1 = target.x;
int y1 = target.y;
int dx = Math.abs(x1 - x0);
int sx = x0 < x1 ? 1 : -1;
int dy = -1 * Math.abs(y1 - y0);
int sy = y0 < y1 ? 1 : -1;
int err = dx+dy, e2; /* error value e_xy */
for(;;){ /* loop */
ret.add( new Point(x0,y0) );
if (x0==x1 && y0==y1) break;
e2 = 2*err;
if (e2 >= dy) { err += dy; x0 += sx; } /* e_xy+e_x > 0 */
if (e2 <= dx) { err += dx; y0 += sy; } /* e_xy+e_y < 0 */
}
return ret;
}
I did this whole lighting stuff (plus A* pathfinding) some time ago; feel free to ask further questions.
Addendum:
Maybe I should simply add the small algorithms for the raytracing ^^
To get the north and south border points, just use this snippet:
for (int x = 0; x < map.WIDTH; x++){
Point northBorderPoint = new Point(x, 0);
Point southBorderPoint = new Point(x, map.HEIGHT);
rayTrace(getLine(player.getPos(), northBorderPoint), map, player.getLightRadius());
rayTrace(getLine(player.getPos(), southBorderPoint), map, player.getLightRadius());
}
And the raytrace works like this:
private static void rayTrace(ArrayList<Point> line, WorldMap map, int radius) {
//radius = radius of the light source
for (Point p : line){
float d = distance(line.get(0), p);
//calculate light falloff linearly from 100% to 0%
float amountLight = (radius - d) / radius;
if (amountLight < 0){
amountLight = 0;
}
map.setLight(p, amountLight);
if (map.isViewBlocked(p)){ //view can be blocked by a wall or a monster
break;
}
}
}
I've been into indie game development for about three years now. The way I would do this is first of all by using OpenGL, so you get all the benefits of the graphical computing power of the GPU (hopefully you are already doing that). Suppose we start off with all tiles in a VBO, entirely lit. Now, there are several ways of achieving what you want; depending on how complex your lighting system is, you can choose a different approach.
If your light is going to be circular around the player, regardless of whether obstacles would block the light in real life, you could choose a lighting algorithm implemented in the vertex shader. In the vertex shader, you could compute the distance from the vertex to the player and apply some function that defines how bright things should be as a function of that distance. Do not use alpha; just multiply the color of the texture/tile by the lighting value.
If you want to use a custom lightmap (which is more likely), I would suggest adding an extra vertex attribute that specifies the brightness of the tile, and updating the VBO when needed. The same approach applies here: multiply the pixel of the texture by the light value. If you are filling light recursively with the player position as the starting point, then you would update the VBO every time the player moves.
If your lightmap depends on where the sunlight hits your level, you can combine two lighting techniques: create one vertex attribute for the sun brightness and another for the light emitted by light points (like a torch held by the player), then combine those two values in the vertex shader. Suppose your sun comes up and goes down in a day-and-night pattern. Let's say the sun brightness is sun, a value between 0 and 1, passed to the vertex shader as a uniform. The vertex attribute that represents the sun brightness is s, and the one for light emitted by light points is l. Then you could compute the total light for that tile like this:
tileBrightness = max(s * sun, l + flicker);
Where flicker (also a vertex shader uniform) is some kind of waving function that represents the small variations in the brightness of your light points.
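A small sketch of the matching per-frame Java side (the uniform locations, worldTime, DAY_SPEED and the exact waveforms are my assumptions; the GL20 calls are plain LWJGL):
// sunLocation/flickerLocation were looked up once with glGetUniformLocation
float sun = 0.5f + 0.5f * (float) Math.sin(worldTime * DAY_SPEED); // day/night cycle in [0, 1]
float flicker = 0.1f * (float) Math.sin(worldTime * 13f); // small waver added to l
GL20.glUniform1f(sunLocation, sun);
GL20.glUniform1f(flickerLocation, flicker);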
This approach makes the scene dynamic without having to continuously recreate VBOs. I implemented it in a proof-of-concept project and it works great. You can check out what it looks like here: http://www.youtube.com/watch?v=jTcNitp_IIo. Note how the torchlight flickers at 0:40 in the video; that is done by what I explained here.
So I created a vertex shader that takes in an angle and calculates the rotation. The problem is that the model rotates around the world center and not around its own axis/origin.
Side note: This is 2D rotation.
How do I make the model rotate through its own axis?
Here is my current vertex shader:
#version 150 core
in vec4 in_Position;
in vec4 in_Color;
in vec2 in_TextureCoord;
out vec4 pass_Color;
out vec2 pass_TextureCoord;
void main(void) {
gl_Position = in_Position;
pass_Color = in_Color;
pass_TextureCoord = in_TextureCoord;
}
Rotating CPU side:
Vector3f center = new Vector3f(phyxBody.getPosition().x,phyxBody.getPosition().y,0);
Matrix4f pos = new Matrix4f();
pos.m00 = (phyxBody.getPosition().x)-(getWidth()/30f/2f);
pos.m01 = (phyxBody.getPosition().y)+(getHeight()/30f/2f);
pos.m10 = (phyxBody.getPosition().x)-(getWidth()/30f/2f);
pos.m11 = (phyxBody.getPosition().y)-(getHeight()/30f/2f);
pos.m20 = (phyxBody.getPosition().x)+(getWidth()/30f/2f);
pos.m21 = (phyxBody.getPosition().y)-(getHeight()/30f/2f);
pos.m30 = (phyxBody.getPosition().x)+(getWidth()/30f/2f);
pos.m31 = (phyxBody.getPosition().y)+(getHeight()/30f/2f);
pos.rotate(phyxBody.getAngle(),center);
The result is a weird rotated stretch of the object... do you know why? (Don't worry about the /30f part.)
phyxBody is an instance of the class Body from the JBox2D library.
phyxBody.getAngle() is in radians.
Matrix4f is a class from the LWJGL library.
EDIT:
Vector3f center = new Vector3f(0,0,0);
Matrix4f pos = new Matrix4f();
pos.m00 = -(getWidth()/30f/2f);
pos.m01 = +(getHeight()/30f/2f);
pos.m10 = -(getWidth()/30f/2f);
pos.m11 = -(getHeight()/30f/2f);
pos.m20 = +(getWidth()/30f/2f);
pos.m21 = -(getHeight()/30f/2f);
pos.m30 = +(getWidth()/30f/2f);
pos.m31 = +(getHeight()/30f/2f);
pos.rotate(phyxBody.getAngle(),center);
pos.m00 += phyxBody.getPosition().x;
pos.m01 += phyxBody.getPosition().y;
pos.m10 += phyxBody.getPosition().x;
pos.m11 += phyxBody.getPosition().y;
pos.m20 += phyxBody.getPosition().x;
pos.m21 += phyxBody.getPosition().y;
pos.m30 += phyxBody.getPosition().x;
pos.m31 += phyxBody.getPosition().y;
This is currently the transformation code, yet the rotation still doesn't work correctly.
My try at the rotate method (what am I doing wrong?):
if (phyxBody.getAngle() != 0.0) {
pos.m00 *= Math.cos(Math.toDegrees(phyxBody.getAngle()));
pos.m01 *= Math.sin(Math.toDegrees(phyxBody.getAngle()));
pos.m10 *= -Math.sin(Math.toDegrees(phyxBody.getAngle()));
pos.m11 *= Math.cos(Math.toDegrees(phyxBody.getAngle()));
pos.m20 *= Math.cos(Math.toDegrees(phyxBody.getAngle()));
pos.m21 *= Math.sin(Math.toDegrees(phyxBody.getAngle()));
pos.m30 *= -Math.sin(Math.toDegrees(phyxBody.getAngle()));
pos.m31 *= Math.cos(Math.toDegrees(phyxBody.getAngle()));
}
The order of application is scale, then rotation, then translation - see this question. I'm guessing you've already translated your coordinates outside of your shader. You'll have to rotate first, then translate. It's good to know the linear algebra behind what you're doing so you know why things work or don't work.
The typical way to do this is to pass a pre-computed ModelView matrix that has already taken care of scaling/rotation/translation. If you've already translated your vertices, you can't fix the problem in your shader without needlessly undoing it and then redoing it after. Send in your vertices untranslated and accompany them with data, like your angle, to translate them. Or you can translate and rotate both beforehand. It depends on what you want to do.
Bottom line: You must rotate before you translate.
Here is the typical way you do vertex transformations:
OpenGL side:
Calculate the ModelView matrix as Translation * Rotation * Scale (so the vertex is scaled first, then rotated, then translated; see the sketch after this list)
Pass to shader as a uniform matrix
GLSL side:
Multiply vertices by ModelView matrix in vertex shader
Send to gl_Position
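Since you're already using LWJGL's Matrix4f, here is a minimal sketch of those steps (org.lwjgl.util.vector classes; the posX/posY/angle/scaleX/scaleY names are mine). Each call right-multiplies the matrix, so the vertex is effectively scaled first, then rotated, then translated:
Matrix4f model = new Matrix4f(); // starts as identity
model.translate(new Vector2f(posX, posY)); // applied last: move into place
model.rotate(angle, new Vector3f(0f, 0f, 1f)); // 2D rotation about the Z axis (radians)
model.scale(new Vector3f(scaleX, scaleY, 1f)); // applied first: size the unit quad
// upload (projection * model) as the shader's uniform matrix and keep the
// quad's vertices centered on (0, 0) in the buffer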
Response to Edit:
I'm inclined to think your implementation needs to be completely redone. You have points that belong to a model. These points are all oriented around the origin. For example, if you had a car, the points would form a mesh of triangles.
If you simply do not translate these points and then rotate them, the car will rotate around its center. If you translate afterwards, the car will translate in its rotated fashion to the place you've specified. The key here is that the origin of your model lines up with the origin of rotation so you end up rotating the model "around itself."
If you instead translate to the new position and then rotate, your model will rotate as if it were orbiting the origin. This is probably not what you want.
If you're modifying the actual vertex positions directly instead of using transformation matrices, you're doing it wrong. Even if you just have a square, leave the coordinates at (-1,-1) (-1,1) (1,1) (1,-1) (notice how the center is at (0,0)) and translate them to where you want to be.
You don't have to re-implement math functionality and probably shouldn't (unless your goal is explicitly to do so). GLM is a popular math library that does everything you want and it's tailored specifically for OpenGL.
Final Edit
Here is a beautiful work of art I drew for you demonstrating what you need to do.
Notice how, in the bottom right, the model has been swept around the world origin by about 45 degrees. If we went another 45, it would have its bottom edge parallel to the X-axis and intersecting the positive Y-axis, with the blue vertex in the bottom left and the purple vertex in the bottom right.
You should probably review how to work with vertices, matrices, and shaders. Vertices should be specified once; matrices should be updated every time you change the scale, rotation, or position of the object; and the shader should multiply each vertex in the model by a uniform (constant) matrix.
Your sprite lacks sufficient information to be able to do what you're trying to do. In order to compute a rotation about a point, you need to know what that point is. And you don't.
So if you want to rotate about an arbitrary location, you will need to pass that location to your shader. Once there, you subtract it from your positions, rotate the position, and add it back in. However, that would require a lot of work, which is why you should just compute a matrix on the CPU to do all of that. Your shader would be given this matrix and perform the transform itself.
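For example, a hedged sketch of that matrix with the same LWJGL Matrix4f class (cx/cy being the rotation point; the names are mine):
// p' = T(c) * R * T(-c) * p : rotate position p about the point (cx, cy)
Matrix4f m = new Matrix4f(); // identity
m.translate(new Vector2f(cx, cy)); // 3. add the point back in
m.rotate(angle, new Vector3f(0f, 0f, 1f)); // 2. rotate about the origin
m.translate(new Vector2f(-cx, -cy)); // 1. subtract the point from positions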
Of course, that itself requires something else, because you keep updating the position of these objects by offsetting the vertices on the CPU. This is not good; you should keep these objects relative to their origin in the buffer, and then transform them to their world position as part of their matrix.
So your shader should be taking object-relative coordinates, and it should be passed a matrix that does a rotation followed by a translation to their world-space position. Actually, scratch that; the matrix should transform to their final camera-space position (world-space is always a bad idea).