Libgdx's Matrix4#translate() doesn't work as expected - java

I'm trying to draw a NinePatch using a transform matrix so it can be scaled, rotated, moved etc. So I created a class that inherits from LibGDX's NinePatch class and which is responsible for the matrix.
This is how I compute my transform matrix (I update it each time one of the following values changes):
this.transform
.idt()
.translate(originX, originY, 0)
.rotate(0, 0, 1, rotation)
.scale(scale, scale, 1)
.translate(-originX, -originY, 0)
;
and how I render my custom NinePatch class:
drawConfig.begin(Mode.BATCH);
this.oldTransform.set(drawConfig.getTransformMatrix());
drawConfig.setTransformMatrix(this.transform);
this.draw(drawConfig.getBatch(), this.x, this.y, this.width, this.height); // Libgdx's NinePatch#draw()
drawConfig.setTransformMatrix(this.oldTransform);
Case 1
Here's what I get when I render 4 nine patches with:
Position = 0,0 / Origin = 0,0 / Scale = 0.002 / Rotation = different for each 9patch
I get what I expect.
Case 2
Now the same 4 nine patches with:
Position = 0,0 / Origin = 0.5,0.5 / Scale = same / Rotation = same
You can see that my 9 patches aren't drawn at 0,0 (their position) but at 0.5,0.5 (their origin), as if I had no .translate(-originX, -originY, 0) when computing the transform matrix. Just to be sure, I commented out this instruction and indeed got the same result. So why is my 2nd translation apparently not taken into account?

The problem is probably your scaling. Because it also scales down the translation, your second translate actually translates by (-originX*scale, -originY*scale, 0). Since scale=0.002, it looks like there is no translation at all. For instance, for the x coordinate, you compute:
x_final = originX + scale * (-originX + x_initial)
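For example, with originX = 0.5 and scale = 0.002, the back-translation only moves the patch by 0.5 * 0.002 = 0.001 units, which is visually indistinguishable from no translation at all.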

I had to change the code computing my transform matrix to take the scale into account when translating back, as pointed out by Guillaume G., except my code is different from his:
this.transform
.idt()
.translate(originX, originY, 0)
.rotate(0, 0, 1, rotation)
.scale(scale, scale, 1)
.translate(-originX / scale, -originY / scale, 0);
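(Dividing by scale compensates for the preceding scale() call, which also shrinks any subsequent translations; note that this assumes a uniform, non-zero scale.)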


glTranslatef and mouse clicks - gluUnProject

Please see the bottom of the question for the current solution I have gone with, thanks to Finlaybob, elect, gouessej.
An appeal to the Elders of OpenGL... I am having big problems with detecting the relative position of a mouse click on my textured plane.
I am making a game where I am drawing a single large square and texturing it with a large generated map texture. The view is always top down and you can currently only move the X, Y and Z coordinates of that square.
Screenshot of the map
OpenGL init
screenRatio = (float)screenW / (float)screenH;
System.out.println("init");
glu = new GLU();
GL2 gl2 = drawable.getGL().getGL2();
gl2.glShadeModel( GL2.GL_SMOOTH );
gl2.glHint( GL2.GL_PERSPECTIVE_CORRECTION_HINT, GL2.GL_NICEST );
gl2.glClearColor( 0f, 0f, 0f, 1f );
gl2.glDepthMask(false);
gl2.glEnable(GL2.GL_DEPTH_TEST);
Set camera position
gl2.glViewport(0, 0, 1024, 768);
gl2.glMatrixMode( GL2.GL_PROJECTION );
gl2.glLoadIdentity();
glu.gluPerspective( 45, screenRatio, 1, 100 );
glu.gluLookAt( 0, 0, 3, 0, 0, 0, 0, 1, 0 );
gl2.glMatrixMode(GL2.GL_MODELVIEW);
gl2.glLoadIdentity();
Move position to start drawing the map
// typical camera coord example:
// CENTRE: 0.0f, 0.0f, 10f
// FULL ZOOM OUT AND TOP LEFT: -25f, 25f, 40f
// move position
gl2.glTranslatef( -cameraX, -cameraY, -cameraZ );
I suspect the glTranslatef z-coordinate may be the culprit, as I am drawing the square 40f (for example) away from the origin.
Map vertex information
// here are the coordinates/dimensions of my textured square ( my map )
float[] vertexArray = {
-25f, 25f,
25f, 25f,
25f, -25f,
25f, -25f,
};
Mouse click position calculation
"Borrowed" from java-tips 1628-how-to-use-gluunproject-in-jogl.html
int x = mouse.getX(), y = mouse.getY();
int viewport[] = new int[4];
double mvmatrix[] = new double[16];
double projmatrix[] = new double[16];
int realy = 0;
double wcoord[] = new double[4];
gl2.glGetIntegerv(GL2.GL_VIEWPORT, viewport, 0);
gl2.glGetDoublev(GL2.GL_MODELVIEW_MATRIX, mvmatrix, 0);
gl2.glGetDoublev(GL2.GL_PROJECTION_MATRIX, projmatrix, 0);
realy = viewport[3] - (int) y - 1;
glu.gluUnProject(
(double) x,
(double) realy,
0.0, // I have experimented with having this as 1.0 also
mvmatrix, 0,
projmatrix, 0,
viewport, 0,
wcoord, 0
);
Experimenting with the near/far bit (3rd param of gluUnProject) seems to produce a better effect, but there seems to be no sweet spot (the best I found was 0.945).
I would very much like mCX, mCY to be relative to the rendered map coordinates (-25f to 25f) regardless of Z position:
mCX = (float)wcoord[0];
mCY = (float)wcoord[1];
Draw a rectangle at the translated coordinates
gl2.glColor3f(1.f, 0.f, 0.f);
gl2.glBegin(GL2.GL_QUADS);
gl2.glVertex2f( mCX-0.1f, mCY+0.1f );
gl2.glVertex2f( mCX+0.1f, mCY+0.1f );
gl2.glVertex2f( mCX+0.1f, mCY-0.1f );
gl2.glVertex2f( mCX-0.1f, mCY-0.1f );
gl2.glEnd();
Currently the coordinates work well in relation to x & y translation; if I click the very centre of the screen it will draw a box approximately in the correct place regardless of my glTranslatef movement. If I click away from the centre of the screen I see an exponentially growing offset.
Demonstration of exponential offset
When I click the very dead centre of the screen it will draw this mauve square exactly around the mouse point, but with the smallest of movement it will create the following effect:
Fully zoomed in, click a couple of pixels right of centre
UPDATE AND WORKING... FOR NOW
At the time of generating the texture for my map I also generate an alternative texture which represents each "tile" as a different colour. In my initial and current attempt the colour of this tile is a function of its X and Y coordinates (a map is made up of 100 tiles across and 100 tiles down, so the x and y coordinates range from 0 to 99).
I end up with a texture which looks like a gradient from green to red. The below code will, at the time of a mouse click, quickly render this texture (imperceptibly to the user) and read the RGB value under the mouse. We then turn that RGB value into a world coordinate and BOOM... the relative coordinates of my map are realised.
float pX, pY;
// render a colourised version of the scene for the purposes of "picking"
// https://www.opengl.org/archives/resources/faq/technical/selection.htm
public void pick ( GL2 gl2 ) {
    // DRAW PICKING SCENE
    gl2.glClearBufferfv(GL2.GL_COLOR, 0, clearColor);
    gl2.glClearBufferfv(GL2.GL_DEPTH, 0, clearDepth);
    gl2.glTranslatef( -cameraX, -cameraY, -cameraZ );
    // draw my map but use the colour gradient texture
    for ( Entity e : this.entities ) {
        e.drawPick( gl2 );
    }
    // block until rendering has completed, so the readback below sees the picking scene
    gl2.glFlush();
    gl2.glFinish();
    gl2.glPixelStorei(GL2.GL_PACK_ALIGNMENT, 1); // pack alignment is what glReadPixels uses
    // After rendering, ask OpenGL to read the colour of the screen at the given window coordinates
    FloatBuffer buffer = FloatBuffer.allocate(4);
    int viewport[] = new int[4];
    gl2.glGetIntegerv(GL2.GL_VIEWPORT, viewport, 0);
    int realy = viewport[3] - (int) mouse.getY() - 1;
    gl2.glReadPixels( mouse.getX(), realy, 1, 1, GL2.GL_RGBA, GL2.GL_FLOAT, buffer);
    // buffer holds the r, g, b, a values respectively
    float[] pixels = buffer.array();
    // convert the red + green values back into x + y values
    pX = (pixels[0] * 255) - 25f;
    pY = -((pixels[1] * 255) - 25f);
    // draw the proper texture
    for ( Entity e : this.entities ) {
        e.draw( gl2 );
    }
}
You've almost got it. You're going to need a good value for Z in the unproject function though.
What you are trying to do is take the position of the cursor and multiply it by a matrix to give a point in "3d space". Your matrices are likely 4x4 or 4x3, so you need a 4-component vector (x, y, z, w).
When you draw your map, each vertex (e.g. -25.0f, 25.0f, 0.0f, 1.0f - actually a 3d point) is multiplied by one or more matrices, including the projection matrix. Once a vertex has been through all these multiplications, the GPU essentially ends up with a value in normalised device coordinates (NDC), between -1 and 1 on all axes.
To do the opposite and unproject, you'll need a valid/good value for Z. The reason is that in NDC everything that is drawn lies within -1,1 on all axes; to fit everything in, further-away things are squashed a bit. This is how you get flickering and weirdness if you have a huge (> 100000) zFar distance, for example: it still has to fit into -1,1.
The best way to do this is to use the depth buffer: by capturing the depth value under the cursor you get a good approximation of the z coordinate in NDC, which you can pass to the unproject call.
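As a rough sketch of that idea (untested), reusing the gl2, glu and mouse objects from the question's code, you could read the depth under the cursor and feed it straight into gluUnProject:
int x = mouse.getX(), y = mouse.getY();
int[] viewport = new int[4];
double[] mvmatrix = new double[16];
double[] projmatrix = new double[16];
double[] wcoord = new double[4];
gl2.glGetIntegerv(GL2.GL_VIEWPORT, viewport, 0);
gl2.glGetDoublev(GL2.GL_MODELVIEW_MATRIX, mvmatrix, 0);
gl2.glGetDoublev(GL2.GL_PROJECTION_MATRIX, projmatrix, 0);
int realy = viewport[3] - y - 1;
// read the depth buffer at the cursor; this replaces the hard-coded 0.0/1.0/0.945
FloatBuffer winZ = FloatBuffer.allocate(1);
gl2.glReadPixels(x, realy, 1, 1, GL2.GL_DEPTH_COMPONENT, GL2.GL_FLOAT, winZ);
glu.gluUnProject((double) x, (double) realy, (double) winZ.get(0),
        mvmatrix, 0, projmatrix, 0, viewport, 0, wcoord, 0);
// wcoord[0], wcoord[1] now hold the clicked point on whatever geometry was rendered there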
The reason why 0.945 is the sweet spot probably depends on how far the camera is from your map, or vice versa. The depth buffer usually has much more detail close to the near plane than to the far plane - it's not linear.
http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/ has a good visual near the bottom of the page and is a good resource for an intro to matrices in general. You can see the distortion caused by moving to NDC; this is required for viewing from a perspective viewpoint, but you need to take it into consideration when you transform backward too.
Colour picking, as mentioned, is also viable, but will still require some work. Because you have a single object, you'd have to render each texel of the image with a different colour, output that to a separate colour buffer, check what colour is on the buffer, and somehow relate that to a point in space. It could probably be done, but I'd say colour picking is more suited to multiple objects.
From what I've read, the depth buffer approach might be more suitable for you, as it's one object, and the depth buffer will give you a Z coordinate for every point you click on. It could still be on your far plane, but it will still give you a value.
Alternatively, as suggested by elect, use an orthographic projection.
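For completeness, a minimal sketch of that alternative, assuming the view stays top-down and the projection is sized to the map (screenW and screenH are the question's variables): with glOrtho the window-to-world mapping becomes a plain linear rescale and no unprojection is needed.
gl2.glMatrixMode(GL2.GL_PROJECTION);
gl2.glLoadIdentity();
gl2.glOrtho(-25, 25, -25, 25, 1, 100); // world units match the 50x50 map quad
gl2.glMatrixMode(GL2.GL_MODELVIEW);
// window -> world is then just a rescale of the cursor position:
float mCX = -25f + 50f * mouse.getX() / (float) screenW;
float mCY =  25f - 50f * mouse.getY() / (float) screenH;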

Bullets not getting shot out of the gun

I'm making a little game just for fun and I got stuck making the bullets come out of the gun. In the code below, the direction of the player is an angle in degrees called rot.
float gunOffsetX = 106, gunOffsetY = 96;
double angle = Math.toRadians(rot); // convert direction of player from degrees to radians for sin and cos
x = getX(); // player X
y = getY(); // player Y
float bulletX = (float) (x + (gunOffsetX * Math.cos(angle) - gunOffsetY * Math.sin(angle)));
float bulletY = (float) (y + (gunOffsetX * Math.sin(angle) + gunOffsetY * Math.cos(angle)));
Instances.fire.add(new Fire(bulletX, bulletY, rot, weapon));
Also tried:
bulletX = (float) (x + Math.cos(angle + Math.atan2(gunOffsetX, gunOffsetY)) * Point2D.distance(0, 0, gunOffsetX, gunOffsetY));
But same results
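(Side note: Math.atan2 takes its arguments as (y, x), so the polar form of the offset would be Math.atan2(gunOffsetY, gunOffsetX); with the arguments swapped, this attempt computes a different spawn angle than the first one.)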
Supposedly the bullets should spawn at the end of the gun, but this isn't the case, as you can see in the following gif...
Any help appreciated
One big issue (at least in my opinion) is how this game handles anchor points of shapes, like the player.
We can highlight the anchor point by drawing a little red rectangle on its place:
g.setColor(Color.RED);
g.drawRect((int)player.getX() -5, (int)player.getY() -5, 10, 10);
This goes into the Draw#renderGame(Graphics2D) method, so it looks like:
private void renderGame(Graphics2D g) {
    g.rotate(Math.toRadians(player.rot), player.getX()+64, player.getY()+64);
    g.drawImage(player.getCurrentFrame(), (int)player.getX(), (int)player.getY(), player.getWidth(), player.getHeight(), null);
    g.setColor(Color.RED);
    g.drawRect((int)player.getX() -5, (int)player.getY() -5, 10, 10);
    g.rotate(-Math.toRadians(player.rot), player.getX()+64, player.getY()+64);
    //...
then we'll see that the anchor point is not in the center of the image:
As you can see, the anchor point (the original (0,0) point before the rotation) isn't in the center of the image and the crosshair is related to it, instead of the view of the player.
This happens due to the shifting operation during the player rotation:
g.rotate(Math.toRadians(player.rot), player.getX()+64, player.getY()+64);
//...
g.rotate(-Math.toRadians(player.rot), player.getX()+64, player.getY()+64);
You're shifting the position by +64. I suggest removing that and adding the shift to the g.drawImage call instead, so the anchor point is correctly in the center (note that I avoided the fixed value 64):
g.rotate(Math.toRadians(player.rot), player.getX(), player.getY());
g.drawImage(player.getCurrentFrame(), (int)player.getX() - (player.getWidth() / 2), (int)player.getY() - (player.getHeight() / 2), player.getWidth(), player.getHeight(), null);
g.rotate(-Math.toRadians(player.rot), player.getX(), player.getY());
When you now fire the gun, you'll see that the bullet always "starts" at a certain position relative to the player. The problem here is the incorrect offset you used. The proper values are:
float gunOffsetX = 35, gunOffsetY = 29;
(I got them by trial and error, so you may adjust them a bit more, if you like)
Now it looks like this:
As you can see, the shot is still a bit misplaced, but this happens due to the incorrect rotation of the bullet (the same issue as with the player shape):
g.rotate(Math.toRadians(f.rot), f.getX()+f.getWidth()/2, f.getY()+f.getHeight()/2);
g.drawImage(f.img, (int)f.getX(), (int)f.getY(), f.getWidth(), f.getHeight(), null);
g.rotate(-Math.toRadians(f.rot), f.getX()+f.getWidth()/2, f.getY()+f.getHeight()/2);
It should look like this (without any X or Y adjustments):
g.rotate(Math.toRadians(f.rot), f.getX(), f.getY());
g.drawImage(f.img, (int)f.getX(), (int)f.getY(), f.getWidth(), f.getHeight(), null);
g.rotate(-Math.toRadians(f.rot), f.getX(), f.getY());
The end result is:
The player now correctly looks at the crosshair and the shots are placed in front of the gun.
If you'd like to fire directly through the center of the crosshair, you'll only need to adjust the player position and the bullet offset a bit.
Player (in Draw#renderGame(Graphics2D)):
g.drawImage(player.getCurrentFrame(), (int)player.getX() - (player.getWidth() / 2), (int)player.getY() - (player.getHeight() / 2) - 30, player.getWidth(), player.getHeight(), null);
(mind the -30 in (int)player.getY() - (player.getHeight() / 2) - 30)
Bullet:
float gunOffsetX = 35, gunOffsetY = 0;
Now the bullet travels right through the crosshair (mind that the red rectangle is right on the weapon):
(I'm a bit too stupid to create proper GIF files, so I can only provide pictures)
Now you have the necessary offset values to get the result you want, but you should definitely try to understand why the values are what they are. You will need to replace them later with dynamic values, since different weapons need different offsets for the bullet because the player image differs. It would be helpful to have some kind of class with instances for each weapon type, containing the images and the coordinates where the weapon barrel is located in the image (a sketch follows below). Then you can use these coordinates to correctly set the offsets for the bullet image.
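A minimal sketch of such a per-weapon data holder; all names and offset values here are invented for illustration:
public enum Weapon {
    PISTOL(35, 29),
    RIFLE(48, 27);

    // barrel tip relative to the player anchor, in image pixels
    private final float barrelOffsetX, barrelOffsetY;

    Weapon(float barrelOffsetX, float barrelOffsetY) {
        this.barrelOffsetX = barrelOffsetX;
        this.barrelOffsetY = barrelOffsetY;
    }

    public float getBarrelOffsetX() { return barrelOffsetX; }
    public float getBarrelOffsetY() { return barrelOffsetY; }
}
The bullet-spawn code could then read weapon.getBarrelOffsetX()/getBarrelOffsetY() instead of the hard-coded gunOffsetX/gunOffsetY.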

What is the source of these pixel gaps in between identical vertices in OpenGL's Ortho? How can I eliminate them?

Despite passing equal (exactly equal) coordinates for 'adjacent' edges, I'm ending up with some strange lines between adjacent elements when scaling my grid of rendered tiles.
My tile grid rendering algorithm accepts scaled tiles, so that I can adjust the grid's visual size to match a chosen window size of the same aspect ratio, among other reasons. It seems to work correctly when scaled to exact integers, and a few non-integer values, but I get some inconsistent results for the others.
Some Screenshots:
The blue lines are the clear color showing through. The chosen texture has no transparent gaps in the tilesheet, as unused tiles are magenta and actual transparency is handled by the alpha layer. The neighboring tiles in the sheet have full opacity. Scaling is achieved by setting the scale to a normalized value obtained through a gamepad trigger between 1f and 2f, so I don't know what actual scale was applied when the shot was taken, with the exception of the max/min.
Attribute updates and entity drawing are synchronized between threads, so none of the values could have been applied mid-draw. This doesn't come across well in screenshots, but the lines don't flicker when the scale is held at that point, so it logically shouldn't be an issue with drawing between scale assignments (and thread locks prevent this).
Scaled to 1x:
Scaled to A, 1x < Ax < Bx :
Scaled to B, Ax < Bx < Cx :
Scaled to C, Bx < Cx < 2x :
Scaled to 2x:
Projection setup function
For setting up orthographic projection (changes only on screen size changes):
.......
float nw, nh;
nh = Display.getHeight();
nw = Display.getWidth();
GL11.glOrtho(0, nw, nh, 0, 1, -1);
orthocenter.setX(nw/2); //this is a Vector2, floats for X and Y, direct assignment.
orthocenter.setY(nh/2);
.......
For the purposes of the screenshot, nw is 512, nh is 384 (implicitly casted from int). These never change throughout the example above.
General GL drawing code
After cutting irrelevant attributes that didn't fix the problem when cut:
@Override
public void draw(float xOffset, float yOffset, float width, float height,
        int glTex, float texX, float texY, float texWidth, float texHeight) {
    GL11.glLoadIdentity();
    GL11.glTranslatef(0.375f, 0.375f, 0f); //This is supposed to fix subpixel issues, but makes no difference here
    GL11.glTranslatef(xOffset, yOffset, 0f);
    if(glTex != lastTexture){
        GL11.glBindTexture(GL11.GL_TEXTURE_2D, glTex);
        lastTexture = glTex;
    }
    GL11.glBegin(GL11.GL_QUADS);
    GL11.glTexCoord2f(texX, texY + texHeight);
    GL11.glVertex2f(-height/2, -width/2);
    GL11.glTexCoord2f(texX + texWidth, texY + texHeight);
    GL11.glVertex2f(-height/2, width/2);
    GL11.glTexCoord2f(texX + texWidth, texY);
    GL11.glVertex2f(height/2, width/2);
    GL11.glTexCoord2f(texX, texY);
    GL11.glVertex2f(height/2, -width/2);
    GL11.glEnd();
}
Grid drawing code (dropping the same parameters dropped from 'draw'):
//Externally there is tilesize, which contains tile pixel size, in this case 32x32
public void draw(Engine engine, Vector2 offset, Vector2 scale){
    int xp, yp; //x and y position of individual tiles
    for(int c = 0; c < width; c++){ //c as in column
        xp = (int) (c*tilesize.a*scale.getX()); //set distance from chunk x to column x
        for(int r = 0; r < height; r++){ //r as in row
            if(tiles[r*width+c] < 0) continue; //skip empty tiles ('air')
            yp = (int) (r*tilesize.b*scale.getY()); //set distance from chunk y to row y
            tileset.getFrame(tiles[r*width+c]).draw( //pull 'tile' frame from set, render.
                engine, //drawing context
                new Vector2(offset.getX() + xp, offset.getY() + yp), //location of tile
                scale //scale of tiles
            );
        }
    }
}
Between the tiles and the platform specific code, vectors' components are retrieved and passed along to the general drawing code as pasted earlier.
My analysis
Mathematically, each position is an exact multiple of the scale*tilesize in either the x or y direction, or both, which is then added to the offset of the grid's location. It is then passed as an offset to the drawing code, which translates that offset with glTranslatef, then draws a tile centered at that location through halving the dimensions then drawing each plus-minus pair.
This should mean that when tile 1 is drawn at, say, the origin, it has an offset of 0. OpenGL is then instructed to draw a quad, with the left edge at -halfwidth, the right edge at +halfwidth, the top edge at -halfheight, and the bottom edge at +halfheight. It is then told to draw the neighbor, tile 2, with an offset of one width, so it translates from 0 to that width, then draws the left edge at -halfwidth, which should coordinate-wise be exactly the same as tile 1's right edge. By itself, this should work, and it does. With a constant scale applied, it somehow breaks.
When a scale is applied, it is a constant multiple across all width/height values, and mathematically shouldn't make anything change. However, it does make a difference, for what I think could be one of two reasons:
OpenGL is having issues with subpixel filling, ie filling left of a vertex doesn't fill the vertex's containing pixel space, and filling right of that same vertex also doesn't fill the vertex's containing pixel space.
I'm running into float accuracy problems, where somehow X+width/2 does not equal X+width - width/2 where width = tilewidth*scale, tilewidth is an integer, and X is a float.
I'm not really sure how to tell which one is the problem, or how to remedy it other than simply avoiding non-integer scale values, which I'd like to be able to support. The only clue I think might apply to finding the solution is that the pattern of line gaps isn't really consistent (see how it skips tiles in some cases, or has only vertical or only horizontal gaps, etc.). However, I don't know what this implies.
This looks like it's probably a floating point precision issue. The critical statement in your question is this:
Mathematically, each position is an exact multiple [..]
While that's mathematically true, you're dealing with limited floating point precision. Sequences of operations that should mathematically produce the same result can (and often do) produce slightly different results due to rounding errors during expression evaluation.
Specifically in your case, it looks like you're relying on identities of this form:
i * width + width/2 == (i + 1) * width - width/2
This is mathematically correct, but you can't expect to get exactly the same numbers when evaluating the values with limited floating point precision. Depending on how the small errors end up getting rounded to pixels, it can result in visual artifacts.
The only good way to avoid this is to actually use the same values for coordinates that must be the same, instead of using calculations that mathematically produce the same results.
In the case of coordinates on a grid, you could calculate the coordinates for each grid line (tile boundary) once, and then use those values for all draw operations. Say if you have n tiles in the x-direction, you calculate all the x-values as:
x[i] = i * width;
and then when drawing tile i, use x[i] and x[i + 1] as the left and right x-coordinates.
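A sketch of what that could look like in the question's grid-drawing method, reusing its width, height, tilesize, offset and scale names (the per-tile draw call would then take explicit edge coordinates instead of a centre plus half-extents):
float[] xs = new float[width + 1];
float[] ys = new float[height + 1];
for (int c = 0; c <= width; c++)
    xs[c] = offset.getX() + c * tilesize.a * scale.getX();
for (int r = 0; r <= height; r++)
    ys[r] = offset.getY() + r * tilesize.b * scale.getY();
// tile (c, r) is drawn with left = xs[c], right = xs[c + 1],
// top = ys[r], bottom = ys[r + 1]; adjacent tiles share bit-identical edges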

Efficient 2D Tile based lighting system

What is the most efficient way to do lighting for a tile based engine in Java?
Would it be putting a black background behind the tiles and changing the tiles' alpha?
Or putting a black foreground and changing alpha of that? Or anything else?
This is an example of the kind of lighting I want:
There are many ways to achieve this. Take some time before making your final decision. I will briefly sum up some techniques you could choose to use and provide some code at the end.
Hard Lighting
If you want to create a hard-edge lighting effect (like your example image),
some approaches come to mind:
Quick and dirty (as you suggested)
Use a black background
Set the tiles' alpha values according to their darkness value
A problem is that you can neither make a tile brighter than it was before (highlights) nor change the color of the light. Both of these are aspects that usually make lighting in games look good.
A second set of tiles
Use a second set of (black/colored) tiles
Lay these over the main tiles
Set the new tiles' alpha value depending on how strong the new color should be there.
This approach has the same effect as the first one, with the advantage that you may now color the overlay tile in a color other than black, which allows for both colored lights and highlights.
Example:
Even though it is easy, a problem is that this is a very inefficient approach. (Two rendered tiles per tile, constant recoloring, many render operations, etc.)
More Efficient Approaches (Hard and/or Soft Lighting)
When looking at your example, I imagine the light always comes from a specific source tile (character, torch, etc.)
For every type of light (big torch, small torch, character lighting) you
create an image that represents the specific lighting behaviour relative to the source tile (light mask). Maybe something like this for a torch (white being alpha):
For every tile which is a light source, you render this image at the position of the source as an overlay.
To add a bit of light color, you can use e.g. 10% opaque orange instead of full alpha.
Results
Adding soft light
Soft light is no big deal now; just use more detail in the light mask compared to the tiles. By using only 15% alpha in the usually black region, you can add a low-sight effect for tiles that are not lit:
You may even easily achieve more complex lighting forms (cones etc.) just by changing the mask image.
Multiple light sources
When combining multiple light sources, this approach leads to a problem:
Drawing two masks that intersect each other can cancel each other out:
What we want is for them to add their light instead of subtracting it.
Avoiding the problem:
Invert all light masks (with alpha being dark areas, opaque being light ones)
Render all these light masks into a temporary image which has the same dimensions as the viewport
Invert and render the new image (as if it was the only light mask) over the whole scenery.
This would result in something similar to this:
Code for the mask invert method
Assuming you render all the tiles in a BufferedImage first,
I'll provide some guidance code which resembles the last shown method (only grayscale support).
Multiple light masks for e.g. a torch and a player can be combined like this:
public BufferedImage combineMasks(BufferedImage[] images)
{
    // create the new image, canvas size is the max. of all image sizes
    int w = 0, h = 0; // must be initialized before use
    for (BufferedImage img : images)
    {
        w = img.getWidth() > w ? img.getWidth() : w;
        h = img.getHeight() > h ? img.getHeight() : h;
    }
    BufferedImage combined = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
    // paint all images, preserving the alpha channels
    Graphics g = combined.getGraphics();
    for (BufferedImage img : images)
        g.drawImage(img, 0, 0, null);
    return combined;
}
The final mask is created and applied with this method:
public void applyGrayscaleMaskToAlpha(BufferedImage image, BufferedImage mask)
{
    int width = image.getWidth();
    int height = image.getHeight();
    int[] imagePixels = image.getRGB(0, 0, width, height, null, 0, width);
    int[] maskPixels = mask.getRGB(0, 0, width, height, null, 0, width);
    for (int i = 0; i < imagePixels.length; i++)
    {
        int color = imagePixels[i] & 0x00ffffff; // mask out the preexisting alpha
        // get alpha from the mask; be careful, an alpha mask works the other
        // way round, so we subtract it from 255 to invert it
        int alpha = 255 - ((maskPixels[i] >> 24) & 0xff);
        // shift the alpha back into the highest byte before combining
        imagePixels[i] = color | (alpha << 24);
    }
    image.setRGB(0, 0, width, height, imagePixels, 0, width);
}
As noted, this is a primitive example. Implementing color blending might be a bit more work.
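To illustrate how the two methods fit together, here is a hedged usage sketch; torchMask, playerMask and sceneImage are assumed to exist:
BufferedImage lightBuffer = combineMasks(new BufferedImage[] { torchMask, playerMask });
applyGrayscaleMaskToAlpha(sceneImage, lightBuffer);
// sceneImage is now dark everywhere except where the combined masks let light through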
Raytracing might be the simplest approach.
you can store which tiles have been seen (used for automapping, for 'remembering your map while being blinded', maybe for the minimap, etc.)
you show only what you see - maybe a monster, a wall or a hill is blocking your view; then raytracing stops at that point
distant 'glowing objects' or other light sources (torches, lava) can be seen even if your own light source doesn't reach very far
the length of your ray will be used to compute the amount of light (fading light)
maybe you have a special sensor (ESP, gold/food detection) which should find objects that are not in your view? raytracing might help there as well ^^
How is this done easily?
draw a line from your player to every point on the border of your map (using Bresenham's line algorithm: http://en.wikipedia.org/wiki/Bresenham%27s_line_algorithm)
walk along that line (from your character to the end) until your view is blocked; at that point stop your search (or maybe do one last iteration to see what stopped you)
for each point on your line set the lighting (maybe 100% for distance 1, 70% for distance 2 and so on) and mark your map tile as visited
maybe you won't walk along the whole map; maybe it's enough to run your raytrace over a 20x20 view?
NOTE: you only have to walk along the borders of the viewport; it is NOT required to trace to every interior point.
I'm adding the line algorithm to simplify your work:
public static ArrayList<Point> getLine(Point start, Point target) {
    ArrayList<Point> ret = new ArrayList<Point>();
    int x0 = start.x;
    int y0 = start.y;
    int x1 = target.x;
    int y1 = target.y;
    int dx = Math.abs(x1-x0);
    int sx = x0<x1 ? 1 : -1;
    int dy = -1*Math.abs(y1-y0);
    int sy = y0<y1 ? 1 : -1;
    int err = dx+dy, e2; /* error value e_xy */
    for(;;){ /* loop */
        ret.add( new Point(x0,y0) );
        if (x0==x1 && y0==y1) break;
        e2 = 2*err;
        if (e2 >= dy) { err += dy; x0 += sx; } /* e_xy+e_x > 0 */
        if (e2 <= dx) { err += dx; y0 += sy; } /* e_xy+e_y < 0 */
    }
    return ret;
}
I did this whole lighting stuff (and A* pathfinding) some time ago; feel free to ask further questions.
Addendum:
Maybe I should simply add the small algorithms for raytracing ^^
To get the north & south border points, just use this snippet:
for (int x = 0; x < map.WIDTH; x++){
    Point northBorderPoint = new Point(x, 0);
    Point southBorderPoint = new Point(x, map.HEIGHT);
    rayTrace( getLine(player.getPos(), northBorderPoint), map, player.getLightRadius() );
    rayTrace( getLine(player.getPos(), southBorderPoint), map, player.getLightRadius() );
}
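By symmetry, a second loop over y would cover the west and east borders (sketch):
for (int y = 0; y < map.HEIGHT; y++){
    rayTrace( getLine(player.getPos(), new Point(0, y)), map, player.getLightRadius() );
    rayTrace( getLine(player.getPos(), new Point(map.WIDTH, y)), map, player.getLightRadius() );
}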
and the raytrace works like this:
private static void rayTrace(ArrayList<Point> line, WorldMap map, int radius) {
    //radius = radius of the light source
    for (Point p: line){
        float d = distance(line.get(0), p);
        //calculate light linearly: 100%...0%
        float amountLight = (radius - d) / radius;
        if (amountLight < 0){
            amountLight = 0;
        }
        map.setLight( p, amountLight );
        if ( map.isViewBlocked(p) ){ //can be blocked by a wall or a monster
            break; //stop tracing once the view is blocked
        }
    }
}
I've been into indie game development for about three years now. The way I would do this is first of all by using OpenGL, so you can get all the benefits of the graphical computing power of the GPU (hopefully you are already doing that). Suppose we start off with all tiles in a VBO, entirely lit. Now, there are several options for achieving what you want. Depending on how complex your lighting system is, you can choose a different approach.
If your light is going to be circular around the player, regardless of whether obstacles would block the light in real life, you could go for a lighting algorithm implemented in the vertex shader. In the vertex shader, you can compute the distance from the vertex to the player and apply some function that defines how bright things should be as a function of that distance. Do not use alpha; just multiply the color of the texture/tile by the lighting value.
If you want to use a custom lightmap (which is more likely), I would suggest adding an extra vertex attribute that specifies the brightness of the tile. Update the VBO if needed. The same approach applies here: multiply the pixel of the texture by the light value. If you are filling light recursively with the player position as the starting point, then you would update the VBO every time the player moves.
If your lightmap depends on where the sunlight hits your level, you could combine two lighting techniques. Create one vertex attribute for the sun brightness and another for the light emitted by light points (like a torch held by the player). Now you can combine those two values in the vertex shader. Suppose your sun comes up and goes down in a day/night pattern. Let's say the sun brightness is sun, a value between 0 and 1, passed to the vertex shader as a uniform. The vertex attribute that represents the sun brightness of a tile is s, and the one for light emitted by light points is l. Then you could compute the total light for that tile like this:
tileBrightness = max(s * sun, l + flicker);
Where flicker (also a vertex shader uniform) is some kind of waving function that represents the little variations in the brightness of your light points.
This approach makes the scene dynamic without having to continuously recreate VBOs. I implemented this approach in a proof-of-concept project and it works great. You can check out what it looks like here: http://www.youtube.com/watch?v=jTcNitp_IIo. Note how the torchlight flickers at 0:40 in the video. That is done by what I explained here.

How to rotate an object in Java 3D?

I have a Cone I drew in Java 3D with the following code:
Cone cone = new Cone(2f, 3f);
Transform3D t3d = new Transform3D();
TransformGroup coneTransform = new TransformGroup(t3d);
coneTransform.setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE);
t3d.setTranslation(new Vector3f(0f, 0f, 0f));
coneTransform.setTransform(t3d);
coneTransform.addChild(cone);
this.addChild(coneTransform);
Suppose I have the cone sitting at point (1,1,1) and I want the tip of the cone to point down an imaginary line running through (0,0,0) and (1,1,1)... how can I do this?
Here's an example of what I've been trying:
Transform3D t3d = new Transform3D();
Vector3f direction = new Vector3f(1,2,1);
final double angleX = direction.angle(new Vector3f(1,0,0));
final double angleY = direction.angle(new Vector3f(0,1,0));
final double angleZ = direction.angle(new Vector3f(0,0,1));
t3d.rotX(angleX);
t3d.rotY(angleY);
t3d.rotZ(angleZ);
t3d.setTranslation(direction);
coneTransform.setTransform(t3d);
Thanks in advance for all help!
I'm just learning Java 3D myself at the moment, and from my current knowledge, the rotation methods set the transform to a rotation about that axis only.
Therefore, if you wish to perform rotations about multiple axes, then you will need to use a second Transform3D.
ie:
Transform3D rotation = new Transform3D();
Transform3D temp = new Transform3D();
rotation.rotX(Math.PI/2);
temp.rotZ(Math.PI/2);
rotation.mul(temp); // multiply the 2 transformation matrices together.
As for the reason for Math.PI: the rotation methods use radians instead of degrees, and Math.PI radians is equivalent to 180 degrees.
Finding the angle between your current orientation and your intended orientation isn't too hard - you could use Vector3fs with the angle() method: one vector set up with the initial orientation and another with the intended one.
However, this doesn't tell you in which axes the angle lies. Determining that would require examining the vectors to see which components are set. [Of course, there may be something in the API that I am currently unaware of.]
This is not a java3D specific answer.
In general a matrix can be built such that there are 4 vectors that describe it.
1) A side (or lateral) vector
2) An up vector
3) A direction vector
4) A position
Each of these occupies one column of a 4x4 matrix.
Thus for a simple identity matrix we have the following matrix (I'll define a column-major matrix; for a row-major matrix all you need to do is swap the matrix indices around, such that row 2 col 3 becomes row 3 col 2 throughout).
1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1
In this, the first column is the side vector, the second column the up vector, the third the direction and the fourth the position.
Logically we can see that the vector (1, 0, 0, 0) points along the x axis (and thus is the side vector). The vector (0, 1, 0, 0) points along the y axis (and thus is the up vector). The third (0, 0, 1, 0) points along the Z-axis (and thus is the direction vector). The fourth (0, 0, 0, 1) indicates that the objects does not move at all.
Now lets say we wanted to face along the X-axis.
Obviously that would mean we have a vector of (1, 0, 0, 0) for our direction vector. Up would still be (0, 1, 0, 0) and position still (0, 0, 0, 1). So what would our side vector be? Well, logically it would point along the z-axis. But which way? Hold your fingers so that one finger points forward, one to the side and one up. Now rotate your hand so that the forward finger faces the same direction the side finger originally did. Which way is the side finger pointing now? The opposite direction to the original forward finger. Thus the matrix is
0 0 1 0
0 1 0 0
-1 0 0 0
0 0 0 1
At this point things seemingly get a little more complicated. It is simple enough to take an arbitrary position and an arbitrary point to look at (I'll call them vPos and vFocus). It is easy enough to form a vector from vPos to vFocus by subtracting vPos from vFocus (vFocus.x - vPos.x, vFocus.y - vPos.y, vFocus.z - vPos.z, vFocus.w - vPos.w). Bear in mind all positions should be defined with a '1' in the w position while all directions should have a '0'. This is automatically taken care of when you do the subtraction above, as the 1s in both ws cancel out and leave 0. Anyway, we now have a vector pointing from the position towards vFocus; we'll call it vDir. Unfortunately it has the length of the difference between vPos and vFocus, so if we divide the vDir vector by its length (vDir.x / length, vDir.y / length, vDir.z / length, vDir.w / length) we normalise it and have a direction with a total length of 1.
At this point we have the 3rd and 4th columns of our matrix. Now, let's assume up is still (0, 1, 0, 0), or vUp. We can assume that the cross product of the direction and vUp will produce a vector that is perpendicular (and also of unit length) to the plane formed by vDir and vUp. This gives us our side vector, vLat. Now, we did kind of assume the up vector, so it's not strictly correct. We can calculate it exactly by taking the cross product of vLat and vDir, and then we have all 4 vectors.
The final matrix is thus defined as follows
vLat.x vUp.x vDir.x vPos.x
vLat.y vUp.y vDir.y vPos.y
vLat.z vUp.z vDir.z vPos.z
vLat.w vUp.w vDir.w vPos.w
This isn't strictly the full answer, as you will get problems as you look towards a point near your (0, 1, 0, 0) vector, but it should work for most cases.
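A sketch of that construction with the vecmath types used elsewhere in this thread (vPos, vFocus and vUp are assumed inputs; javax.vecmath's Matrix4f constructor is row-major, so each column described above appears as the i-th entry of every row):
Vector3f vDir = new Vector3f();
vDir.sub(vFocus, vPos);       // direction = focus - position
vDir.normalize();             // unit length
Vector3f vLat = new Vector3f();
vLat.cross(vDir, vUp);        // side vector, perpendicular to dir and up
vLat.normalize();
Vector3f realUp = new Vector3f();
realUp.cross(vLat, vDir);     // recomputed, exactly perpendicular up
Matrix4f m = new Matrix4f(
    vLat.x, realUp.x, vDir.x, vPos.x,
    vLat.y, realUp.y, vDir.y, vPos.y,
    vLat.z, realUp.z, vDir.z, vPos.z,
    0f,     0f,       0f,     1f);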
I finally figured out what I wanted to do by using quaternions, which I learned about here: http://www.cs.uic.edu/~jbell/Courses/Eng591_F1999/outline_2.html. Here's my solution.
Creating the cone:
private void attachCone(float size) {
    Cone cone = new Cone(size, size * 2);
    // The group for rotation
    arrowheadRotationGroup = new TransformGroup();
    arrowheadRotationGroup.setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE);
    arrowheadRotationGroup.addChild(cone);
    // The group for positioning the cone
    arrowheadPositionGroup = new TransformGroup();
    arrowheadPositionGroup.setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE);
    arrowheadPositionGroup.addChild(arrowheadRotationGroup);
    super.addChild(arrowheadPositionGroup);
}
Now, when I want to rotate the cone to point in a certain direction specified as the vector from the point (0,0,0) to (direction.x, direction.y, direction.z), I use:
private final Vector3f yAxis = new Vector3f(0f, 1f, 0f);
private Vector3f direction;
private void rotateCone() {
    // Get the normalized axis perpendicular to the direction
    Vector3f axis = new Vector3f();
    axis.cross(yAxis, direction);
    axis.normalize();
    // When the intended direction is a point on the yAxis, rotate on x
    if (Float.isNaN(axis.x) && Float.isNaN(axis.y) && Float.isNaN(axis.z)) {
        axis.x = 1f;
        axis.y = 0f;
        axis.z = 0f;
    }
    // Compute the quaternion components from the axis-angle pair
    final float angleX = yAxis.angle(direction);
    final float a = axis.x * (float) Math.sin(angleX / 2f);
    final float b = axis.y * (float) Math.sin(angleX / 2f);
    final float c = axis.z * (float) Math.sin(angleX / 2f);
    final float d = (float) Math.cos(angleX / 2f);
    Transform3D t3d = new Transform3D();
    Quat4f quat = new Quat4f(a, b, c, d);
    t3d.set(quat);
    arrowheadRotationGroup.setTransform(t3d);
    Transform3D translateToTarget = new Transform3D();
    translateToTarget.setTranslation(this.direction);
    arrowheadPositionGroup.setTransform(translateToTarget);
}
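As a possible shortcut (untested), vecmath can build the same rotation directly from the axis-angle pair, skipping the manual quaternion components:
Transform3D t3d = new Transform3D();
t3d.set(new AxisAngle4f(axis.x, axis.y, axis.z, angleX));
arrowheadRotationGroup.setTransform(t3d);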
I think this should do it; note that rotX/rotY each overwrite the whole transform, so the two rotations have to be combined in a Transform3D before being set on the group:
Transform3D rot = new Transform3D(), tmp = new Transform3D();
rot.rotX(Math.PI / 4); tmp.rotY(Math.PI / 4);
rot.mul(tmp); // rotY alone would have overwritten rotX
coneTransform.setTransform(rot);
You can give your Transform3D a rotation matrix. You can get a rotation matrix using an online rotation matrix calculator: http://toolserver.org/~dschwen/tools/rotationmatrix.html. Here's my example:
Matrix3f mat = new Matrix3f(0.492403876506104f, 0.586824088833465f,
-0.642787609686539f, 0.413175911166535f, 0.492403876506104f,
0.766044443118978f, 0.766044443118978f, -0.642787609686539f, 0f);
Transform3D trans = new Transform3D();
trans.set(mat);
