JOGL - Texture shows blue scale - java

I'm trying to make a game in Java OpenGL (JOGL), but I have a problem with textures.
When I draw a quad with a texture, all I see is the image rendered in shades of blue.
The following is my code:
Texture grass;
// public void init() in Base class that implements GLEventListener
try {
grass = TextureIO.newTexture(new File("src/com/jeroendonners/main/grass.png"), false);
} catch(Exception e) {
e.printStackTrace();
}
To render a quad with this texture I use the following code:
int x = 0;
int y = 0;
gl.glColor3f(1f, 1f, 1f);
this.grass.bind(gl);
gl.glBegin(gl.GL_QUADS);
gl.glTexCoord2d(0, 0);
gl.glVertex3f(0, 0, 0);
gl.glTexCoord2d(1, 0);
gl.glVertex3f(1, 0, 0);
gl.glTexCoord2d(1, 1);
gl.glVertex3f(1, 1, 0);
gl.glTexCoord2d(0, 1);
gl.glVertex3f(0, 1, 0);
gl.glEnd();
I have read here that I have to use GL_BGR instead of the default GL_RGB, but since that question initializes its textures in a different way, I don't know how to apply it to my code.
One note: I am using an old version of JOGL (1.0, I think), because that is the version we used in a course at school.
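A blue-tinted image is the classic symptom of red and blue channels being swapped (the GL_BGR issue mentioned above). Whether that is really the cause depends on how the PNG gets decoded, but one fix that avoids digging into the TextureIO internals is to swap the red and blue bytes on the CPU before the pixel data reaches OpenGL. The swap itself is plain Java; the class and method names below are made up for illustration:

```java
public class ChannelSwap {
    // Swap the R and B bytes of a tightly packed 3-byte-per-pixel image in place.
    // 'pixels' is assumed to be BGR-ordered; afterwards it is RGB-ordered.
    public static void bgrToRgb(byte[] pixels) {
        for (int i = 0; i + 2 < pixels.length; i += 3) {
            byte tmp = pixels[i];
            pixels[i] = pixels[i + 2];
            pixels[i + 2] = tmp;
        }
    }

    public static void main(String[] args) {
        byte[] onePixel = {1, 2, 3};   // B=1, G=2, R=3
        bgrToRgb(onePixel);
        System.out.println(java.util.Arrays.toString(onePixel)); // [3, 2, 1]
    }
}
```

You would run this over the raw pixel buffer once, right after decoding and before creating the texture; it does nothing useful if the decoder already produced RGB data.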

Related

Libgdx Fog of War (Texture blending), Framebuffer bug

I'm trying to get a simple fog of war working for my strategy game, and at the moment I'm failing at the framebuffer drawing. I'm an absolute OpenGL beginner.
Problem 1.
Constructor:
int width = Gdx.graphics.getWidth();
int height = Gdx.graphics.getHeight();
this.fbo = new FrameBuffer(Pixmap.Format.RGB565, width, height, false);
this.fboRegion = new TextureRegion(fbo.getColorBufferTexture(), 0, 0, 1000, 1000); // 1000,1000 = Test mapsize
fboRegion.flip(false, true);
Units.camera.setCam(new OrthographicCamera(fbo.getWidth(), fbo.getHeight()));
OrthographicCamera cam = Units.camera.getCam();
cam.position.set(fbo.getWidth() / 2, fbo.getWidth() / 2, 0);
cam.update();
Render part:
public void draw(float delta)
{
Gdx.gl.glClearColor(0, 0, 0, 1.0f);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
Units.camera().update();
fbo.begin();
Gdx.gl.glClearColor(0.1f, 0.1f, 0.1f, 1f);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
batch.setProjectionMatrix(Units.camera().getCam().combined);
batch.begin();
background.draw(batch);
this.unitDraw.draw(batch, delta);
this.shotDraw.draw(batch, delta);
batch.end();
fbo.end();
batch.begin();
batch.draw(fboRegion, 0, 0);
batch.end();
}
Problem:
The camera moves wrong: the framebuffer and the texture region move at different speeds.
My objects have a low resolution on the texture region because of the scaling, but I need to scale my object pictures. How can I increase the resolution without scaling the images back to their original size? Should I scale the batch before writing into the framebuffer, or is this a camera problem?
See: https://www.youtube.com/watch?v=sMGe1Sgh5JA&feature=youtu.be
The last post in this thread (http://badlogicgames.com/forum/viewtopic.php?f=11&t=4207) has the same problem.

LWJGL Texture not stretching to Quads

I have been trying for hours to get a Texture in LWJGL to stretch to a quad.
Here is the code I am using for the quad:
private static void renderLoad() {
glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);
texture.bind();
glPushMatrix();{
glBegin(GL_QUADS);
{
glTexCoord2f(0, 1);
glVertex2f(0, 0); //Upper-left
glTexCoord2f(1, 1);
glVertex2f(Display.getWidth(), 0); //Upper-right
glTexCoord2f(1, 0);
glVertex2f(Display.getWidth(), Display.getHeight()); //Bottom-right
glTexCoord2f(0, 0);
glVertex2f(0, Display.getHeight()); //Bottom-left
}
glEnd();
}glPopMatrix();
}
This is what the display looks like when I run it:
http://gyazo.com/376ddb0979c55226d2f63c26215a1e12
I am trying to make the image expand to the size of the window. The quad is the size of the window, but the texture does not seem to stretch.
Here is what it looks like if I do not use a texture and simply give the quad a color:
http://gyazo.com/65f21fe3efa2d3948de69b55d5c85424
If it helps here is my main loop:
glMatrixMode(GL_PROJECTION);
glViewport(0, 0, displaySizeX, displaySizeY);
glLoadIdentity();
glOrtho(0, displaySizeX, 0, displaySizeY, 1, -1);
glMatrixMode(GL_MODELVIEW);
glEnable(GL_TEXTURE_2D);
texture = loadLoadingImage();
//This is the main loop for the game.
while(!Display.isCloseRequested()){
delta = getDelta();
updateFPS();
if(Display.wasResized()){
displaySizeX = Display.getWidth();
displaySizeY = Display.getHeight();
glViewport(0, 0, displaySizeX, displaySizeY);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, displaySizeX, 0, displaySizeY, -1, 1);
}
render();
checkInput();
Display.update();
Display.sync(sync);
}
cleanUp();
return true;
How do I make the image stretch to the quad?
public void stretch() {
Color.white.bind();
texture.bind();
GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2f(0,0);
GL11.glVertex2f(100,100);
GL11.glTexCoord2f(1,0);
GL11.glVertex2f(100+texture.getTextureWidth(),100);
GL11.glTexCoord2f(1,1);
GL11.glVertex2f(100+texture.getTextureWidth(),100+texture.getTextureHeight());
GL11.glTexCoord2f(0,1);
GL11.glVertex2f(100,100+texture.getTextureHeight());
GL11.glEnd(); // all the 0's were originally 100 but it was off centered
}
texture = TextureLoader.getTexture("PNG",ResourceLoader.getResourceAsStream("res/texture.png"));
Try using this; it is usually how I do it.
Perhaps something is modifying the texture matrix. You could try adding a
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
to see if that affects anything.

LWJGL 2D rendering conflict between two textures

OK, so I am currently working on a 2D game in Java with LWJGL. I have a fairly solid understanding of Java and how it works, and I know the basics of how games work and of LWJGL/OpenGL, but I am having a really weird issue with rendering textures. I determined that one of my draw methods is the culprit:
public static void texturedTriangleInverted(float x, float y, float width, float height, Texture texture) {
GL11.glPushMatrix();
{
GL11.glTranslatef(x, y, 0);
texture.bind();
GL11.glBegin(GL11.GL_TRIANGLES);
{
GL11.glVertex2f(width / 2, 0);
GL11.glTexCoord2f(width / 2, 0);
GL11.glVertex2f(0, height);
GL11.glTexCoord2f(0, 1);
GL11.glVertex2f(width, height);
GL11.glTexCoord2f(1, 1);
}
GL11.glEnd();
}
GL11.glPopMatrix();
}
So what happens is: if I render anything with this method, the next thing rendered looks like it was literally compressed on one end and stretched on the other, even if that next thing is not rendered with this method. I am almost positive it has to do with the arguments I passed to GL11.glTexCoord2f(float x, float y), but I can't figure out how to fix it.
Here is my OpenGL initialization code:
private void initGL() {
GL11.glMatrixMode(GL11.GL_PROJECTION);
GL11.glLoadIdentity();
GL11.glOrtho(0, Strings.DISPLAY_WIDTH, 0, Strings.DISPLAY_HEIGHT, -1, 1);
GL11.glMatrixMode(GL11.GL_MODELVIEW);
GL11.glEnable(GL11.GL_TEXTURE_2D);
GL11.glEnable(GL11.GL_TEXTURE_BINDING_2D);
GL11.glViewport(0, 0, Strings.DISPLAY_WIDTH, Strings.DISPLAY_HEIGHT);
GL11.glClearColor(0, 0, 1, 0);
GL11.glDisable(GL11.GL_DEPTH_TEST);
}
My game loop code:
private void gameLoop() {
while (!Display.isCloseRequested()) {
GL11.glClear(GL11.GL_COLOR_BUFFER_BIT);
GL11.glLoadIdentity();
GL11.glTranslatef(Strings.transX, Strings.transY, 0);
this.level.update();
this.level.render();
Display.update();
Display.sync(Strings.FPS_CAP);
}
Keyboard.destroy();
Display.destroy();
}
My texture loading code (note: I used Slick to load my textures):
public static final Texture loadTexture(String location) {
try {
if (textureExists(location)) {
return TextureLoader.getTexture("png", new BufferedInputStream(new FileInputStream(new File(location))), false);
} else {
System.err.println("texture does not exist");
}
} catch (IOException e) {
e.printStackTrace();
}
return null;
}
You need to specify your glTexCoord before the glVertex it refers to, not after. This is the same as with glColor and glNormal: glVertex uses the last attribute values that you set.
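This latching behavior can be illustrated without any OpenGL at all. The tiny stand-in below (plain Java, hypothetical names) mimics the immediate-mode state machine: texCoord() only updates the current state, and vertex() snapshots whatever that state happens to be at the moment it is called:

```java
import java.util.ArrayList;
import java.util.List;

public class LatchedState {
    // Current "latched" texture coordinate, like OpenGL's current attribute state.
    static float curU, curV;
    // Each emitted vertex: {x, y, u, v}.
    static List<float[]> emitted = new ArrayList<>();

    static void texCoord(float u, float v) { curU = u; curV = v; }

    static void vertex(float x, float y) {
        emitted.add(new float[] { x, y, curU, curV });
    }

    public static void main(String[] args) {
        // Wrong order, as in the question: vertex first, texCoord after.
        vertex(16, 0);         // latches the stale default (0, 0)
        texCoord(0.5f, 0);     // too late for the vertex above
        vertex(0, 32);         // this one gets (0.5, 0) instead
        System.out.println(emitted.get(0)[2]); // 0.0, not 0.5
    }
}
```

Swapping the two calls fixes it, which is exactly the fix for the triangle method in the question: each glTexCoord2f line must come before its glVertex2f line.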

OpenGL(LWJGL) TileMap only displays one tile

I have recently been learning LWJGL, since I've been with Java for some time now. There aren't many LWJGL tutorials or reference materials, so I just search for OpenGL tutorials; since LWJGL is essentially a Java binding for OpenGL, they're basically the same, except that I always have to tweak things a bit. I wrote this code (basically all by myself), but when I run it, it only displays one tile, when it should display 16 tiles in all! Why is this?
package testandothertutorials;
import static org.lwjgl.opengl.GL11.*;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import org.lwjgl.LWJGLException;
import org.lwjgl.input.Mouse;
import org.lwjgl.opengl.Display;
import org.lwjgl.opengl.DisplayMode;
import org.newdawn.slick.opengl.Texture;
import org.newdawn.slick.opengl.TextureLoader;
public class TileMapTest {
int tilemap[][] = {
{ 0, 1, 1, 0 },
{ 0, 1, 1, 0 },
{ 0, 1, 1, 0 },
{ 1, 0, 0, 1 }
};
int TILE_SIZE = 32;
int WORLD_SIZE = 4;
Texture stone_texture, dirt_texture;
public TileMapTest() {
try {
Display.setDisplayMode(new DisplayMode(640, 480));
Display.setTitle("Game");
Display.create();
} catch(LWJGLException e) {
}
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, 640, 480, 0, 1, -1);
glMatrixMode(GL_MODELVIEW);
glEnable(GL_TEXTURE_2D);
//Load the stone and dirt textures before the render loop
try {
stone_texture = TextureLoader.getTexture("PNG", new FileInputStream(new File("C://Users//Gannon//Desktop//Java//workspace//Test Game//res//stone.png")));
} catch (FileNotFoundException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
try {
dirt_texture = TextureLoader.getTexture("PNG", new FileInputStream(new File("C://Users//Gannon//Desktop//Java//workspace//Test Game//res//dirt.png")));
} catch (FileNotFoundException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
while(!Display.isCloseRequested()) {
glClear(GL_COLOR_BUFFER_BIT);
drawTiles();
Display.update();
Display.sync(60);
}
Display.destroy();
}
public void drawTiles() {
for(int x = 0; x < WORLD_SIZE; x++) {
for(int y = 0; y < WORLD_SIZE; y++) {
if(tilemap[x][y] == 0) { //If the selected tile in the tilemap equals 0, set it to the stone texture to draw
stone_texture.bind();
} else if(tilemap[x][y] == 1) { //If the selected tile equals 1, set it to the dirt texture to draw
dirt_texture.bind();
}
glPushMatrix();
glTranslatef(x, y, 0);
glBegin(GL_QUADS);
glTexCoord2f(0, 0);
glVertex2f(0, 0);
glTexCoord2f(1, 0);
glVertex2f(32, 0);
glTexCoord2f(1, 1);
glVertex2f(32, 32);
glTexCoord2f(0, 1);
glVertex2f(0, 32);
glEnd();
glPopMatrix();
}
}
}
public static void main(String args[]) {
new TileMapTest();
}
}
Try using glPushMatrix() and glPopMatrix(); currently your GL_QUADS are relative to the last one drawn, so the positioning gets farther apart the higher x and y go:
glPushMatrix();
glTranslatef(x, y, 0);
glBegin(GL_QUADS);
glTexCoord2f(0, 0);
glVertex2f(0, 0);
glTexCoord2f(1, 0);
glVertex2f(32, 0);
glTexCoord2f(1, 1);
glVertex2f(32, 32);
glTexCoord2f(0, 1);
glVertex2f(0, 32);
glEnd();
glPopMatrix();
This puts each quad back into world-space coordinates instead of leaving it relative to the last drawn quad; I had the same problem once.
Also, loading the identity matrix should only be done at the start of each new frame. And try loading both textures once, outside the loop, choosing between them inside it: loading from disk for every tile is a real waste of the hard drive, so make use of RAM.
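The "load once, choose inside the loop" advice is just caching. A minimal stand-alone sketch of the idea (plain Java, with Strings standing in for Texture objects and made-up names):

```java
import java.util.HashMap;
import java.util.Map;

public class TextureCache {
    // Stand-in for an expensive disk load; counts how often it really runs.
    static int diskLoads = 0;
    static String loadFromDisk(String path) {
        diskLoads++;
        return "texture:" + path;
    }

    static final Map<String, String> cache = new HashMap<>();

    // Returns the cached texture, hitting the disk only on the first request.
    static String get(String path) {
        return cache.computeIfAbsent(path, TextureCache::loadFromDisk);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 16; i++) {   // 16 tiles, only 2 distinct textures
            get(i % 2 == 0 ? "stone.png" : "dirt.png");
        }
        System.out.println(diskLoads);   // 2, not 16
    }
}
```

The question's code already loads its two textures before the render loop; a cache like this only matters once the number of textures grows or loading is triggered from several places.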
Your problem occurs because you're not translating the tiles far enough, so you end up rendering all the tiles almost on top of each other!
Currently you're doing glTranslatef(x, y, 0); remember that your tiles are 32 pixels wide and tall, but the range you translate over is only 0 to 3 (since you only render 16 tiles), so you need to change your translation.
This is how you should translate.
glTranslatef(x * TILE_SIZE, y * TILE_SIZE, 0);
So the rendering part ends up looking like this:
glPushMatrix();
glTranslatef(x * TILE_SIZE, y * TILE_SIZE, 0);
glBegin(GL_QUADS);
glTexCoord2f(0, 0);
glVertex2f(0, 0);
glTexCoord2f(1, 0);
glVertex2f(32, 0);
glTexCoord2f(1, 1);
glVertex2f(32, 32);
glTexCoord2f(0, 1);
glVertex2f(0, 32);
glEnd();
glPopMatrix();
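The arithmetic is worth spelling out: with glTranslatef(x, y, 0) the sixteen tile origins span only 0 to 3 pixels while each quad is 32 pixels wide, so the tiles almost completely overlap; multiplying by TILE_SIZE spaces the origins a full tile apart. A small stand-alone check of the corrected origins (hypothetical helper, mirroring the question's TILE_SIZE):

```java
public class TileOrigins {
    static final int TILE_SIZE = 32;

    // Pixel-space origin of tile (x, y) after the corrected translate call.
    static int[] origin(int x, int y) {
        return new int[] { x * TILE_SIZE, y * TILE_SIZE };
    }

    public static void main(String[] args) {
        // Tile (3, 3) should start at (96, 96), not at (3, 3).
        int[] o = origin(3, 3);
        System.out.println(o[0] + ", " + o[1]); // 96, 96
    }
}
```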

Simple JOGL game running very slowly on a GTX 470

I've been making a game for my computer science class. For simplicity, I've just been making a set of minigames. For fun, I tried to make a version of the classic Snake game in 3D. The physics and collision detection work fine, and on the school computers (medium-quality Macs) the game runs very smoothly. However, on my home computer it runs at 8 fps. My home computer has a GTX 470 with the latest drivers, and a query in the program confirms that the code is running on the GTX 470 with OpenGL 4.2.
Here's the render code (running in a GLCanvas):
GL2 gl = ( drawable.getGL()).getGL2();
/*System.out.println(gl.glGetString(GL.GL_VENDOR)+"\n"+
gl.glGetString(GL.GL_RENDERER)+"\n"+
gl.glGetString(GL.GL_VERSION));*/
gl.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT);
//Init camera
gl.glMatrixMode(GL2.GL_PROJECTION);
gl.glLoadIdentity();
// Perspective.
float widthHeightRatio = (float) getWidth() / (float) getHeight();
glu.gluPerspective(75, widthHeightRatio, 1, 2000);
double dX, dY, dZ;
if (player.locs.size()==0)
{
dX=0.1*player.vel.x;
dY=0.1*player.vel.y;
dZ=0.1*player.vel.z;
}
else
{
dX=player.xHead-player.locs.get(0).x;
dY=player.yHead-player.locs.get(0).y;
dZ=player.zHead-player.locs.get(0).z;
}
player.up.normalizeDist();
double xPos=4*dX-0.1*player.up.x;
double yPos=4*dY-0.1*player.up.y;
double zPos=4*dZ-0.1*player.up.z;
double desiredDist=0.2;
double totalDist=Math.sqrt(xPos*xPos+yPos*yPos+zPos*zPos);
xPos=xPos*desiredDist/totalDist;
yPos=yPos*desiredDist/totalDist;
zPos=zPos*desiredDist/totalDist;
double camX=player.xHead-xPos;
double camY=player.yHead-yPos;
double camZ=player.zHead-zPos;
glu.gluLookAt(xWidth*(camX), yWidth*(camY),zWidth*(camZ), xWidth*(player.xHead+2*dX), yWidth*(player.yHead+2*dY), zWidth*(player.zHead+2*dZ), player.up.x, player.up.y, -player.up.z);
// Change back to model view matrix.
gl.glMatrixMode(GL2.GL_MODELVIEW);
gl.glLoadIdentity();
float SHINE_ALL_DIRECTIONS = 1;
float[] lightPos = {xWidth/2, yWidth/2, zWidth/2, SHINE_ALL_DIRECTIONS};
float[] lightColorAmbient = {0.2f, 0.2f, 0.2f, 0.2f};
float[] lightColorSpecular = {0.8f, 0.8f, 0.8f, 0.8f};
// Set light parameters.
gl.glLightfv(GL2.GL_LIGHT1, GL2.GL_POSITION, lightPos, 0);
gl.glLightfv(GL2.GL_LIGHT1, GL2.GL_AMBIENT, lightColorAmbient, 0);
gl.glLightfv(GL2.GL_LIGHT1, GL2.GL_SPECULAR, lightColorSpecular, 0);
// Enable lighting in GL.
gl.glEnable(GL2.GL_LIGHT1);
gl.glEnable(GL2.GL_LIGHTING);
// Set material properties.
float[] rgba = {1f, 1f, 1f};
gl.glMaterialfv(GL2.GL_FRONT, GL2.GL_AMBIENT, rgba, 0);
gl.glMaterialfv(GL2.GL_FRONT, GL2.GL_SPECULAR, rgba, 0);
gl.glMaterialf(GL2.GL_FRONT, GL2.GL_SHININESS, 0.5f);
/*gl.glMaterialfv(GL.GL_BACK, GL.GL_AMBIENT, rgba, 0);
gl.glMaterialfv(GL.GL_BACK, GL.GL_SPECULAR, rgba, 0);
gl.glMaterialf(GL.GL_BACK, GL.GL_SHININESS, 0.5f);*/
// gl.glColor3f(1f,1f,1f);
if (camX>0)
{
gl.glBegin(GL2.GL_POLYGON);
gl.glNormal3d(1,0,0);
gl.glVertex3d(0, 0, 0);
gl.glVertex3d(0, 0, zWidth);
gl.glVertex3d(0, yWidth, zWidth);
gl.glVertex3d(0, yWidth, 0);
gl.glEnd();
}
if (camY>0)
{
gl.glBegin(GL2.GL_POLYGON);
gl.glNormal3d(0, 1, 0);
gl.glVertex3d(0, 0, 0);
gl.glVertex3d(0, 0, zWidth);
gl.glVertex3d(xWidth, 0, zWidth);
gl.glVertex3d(xWidth, 0, 0);
gl.glEnd();
}
if (camZ>0)
{
gl.glBegin(GL2.GL_POLYGON);
gl.glNormal3d(0, 0, 1);
gl.glVertex3d(0, 0, 0);
gl.glVertex3d(xWidth, 0, 0);
gl.glVertex3d(xWidth, yWidth, 0);
gl.glVertex3d(0, yWidth, 0);
gl.glEnd();
}
if (camX<1)
{
gl.glBegin(GL2.GL_POLYGON);
gl.glNormal3d(-1, 0, 0);
gl.glVertex3d(xWidth, 0, 0);
gl.glVertex3d(xWidth, 0, zWidth);
gl.glVertex3d(xWidth, yWidth, zWidth);
gl.glVertex3d(xWidth, yWidth, 0);
gl.glEnd();
}
if (camY<1)
{
gl.glBegin(GL2.GL_POLYGON);
gl.glNormal3d(0, -1, 0);
gl.glVertex3d(0, yWidth, 0);
gl.glVertex3d(0, yWidth, zWidth);
gl.glVertex3d(xWidth, yWidth, zWidth);
gl.glVertex3d(xWidth, yWidth, 0);
gl.glEnd();
}
if (camZ<1)
{
gl.glBegin(GL2.GL_POLYGON);
gl.glNormal3d(0, 0, 1);
gl.glVertex3d(0, 0, zWidth);
gl.glVertex3d(xWidth, 0, zWidth);
gl.glVertex3d(xWidth, yWidth, zWidth);
gl.glVertex3d(0, yWidth, zWidth);
gl.glEnd();
}
player.draw(xWidth, yWidth, zWidth, drawable, glu);
for (int i=0; i<bullets.size(); i++)
{
bullets.get(i).draw(drawable, glu, xWidth, yWidth, zWidth);
}
for (int i=0; i<basicEntities.size(); i++)
{
basicEntities.get(i).draw( xWidth, yWidth, zWidth, drawable, glu);
}
And then there are a lot of copy-pasted calls to code like this (xHead, yHead, and zHead are coordinates):
GL gl=drawable.getGL();
GL2 gl2=gl.getGL2();
gl2.glPushMatrix();
gl2.glTranslated(xHead*xWidth, yHead*yWidth, zHead*zWidth);
float[] rgba = {0.3f, 0.5f, 1f};
gl2.glMaterialfv(GL.GL_FRONT, GL2.GL_AMBIENT, rgba, 0);
gl2.glMaterialfv(GL.GL_FRONT, GL2.GL_SPECULAR, rgba, 0);
gl2.glMaterialf(GL.GL_FRONT, GL2.GL_SHININESS, 0.5f);
GLUquadric head = glu.gluNewQuadric();
glu.gluQuadricDrawStyle(head, GLU.GLU_FILL);
glu.gluQuadricNormals(head, GLU.GLU_FLAT);
glu.gluQuadricOrientation(head, GLU.GLU_OUTSIDE);
final float radius = (float) (dotSize*xWidth);
final int slices = 32;
final int stacks = 32;
glu.gluSphere(head, radius, slices, stacks);
glu.gluDeleteQuadric(head);
gl2.glPopMatrix();
Edit: I can get the game to run faster by reducing the number of slices and stacks in the quadrics, but this makes the game rather ugly.
Also, I removed the a.add(this) call (from the animator) and the game still runs. Was I animating everything twice? It's still slow, though.
I can't fully explain why it runs so much better on your school computer, but the way you are using OpenGL is ancient and terrible for performance.
Drawing with glBegin will always be very expensive, because every single vertex is sent as a separate API call. You should instead look into rendering with vertex arrays (good) or vertex buffer objects (better in most cases). Using these will require a slight shift in thinking, but I'm sure you can find many tutorials using those search terms.
I'm also not an expert on what GLU does, though your use of gluSphere and GLU quadrics makes me suspicious. Most of the work of the GLU functions is probably not executed on the graphics card, so every time you call gluSphere the CPU may have to recompute all the vertices of the sphere before it can do anything with the GPU. A much better solution would be to generate your own list of sphere vertices, upload it to the GPU as a VBO, and then just issue the VBO draw call any time you want to draw a sphere. That should save a huge amount of computation time.
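Generating the sphere mesh once on the CPU, as the answer suggests, could look like the sketch below (plain Java with hypothetical names; the resulting array would be uploaded to a VBO once, outside the render loop, and drawn every frame):

```java
public class SphereMesh {
    // Interleaved x,y,z positions for a latitude/longitude sphere with the
    // given radius, 'stacks' bands from pole to pole, and 'slices' per band.
    static float[] sphereVertices(float radius, int stacks, int slices) {
        float[] verts = new float[(stacks + 1) * (slices + 1) * 3];
        int i = 0;
        for (int st = 0; st <= stacks; st++) {
            double phi = Math.PI * st / stacks;            // 0..pi, pole to pole
            for (int sl = 0; sl <= slices; sl++) {
                double theta = 2 * Math.PI * sl / slices;  // 0..2pi around the axis
                verts[i++] = (float) (radius * Math.sin(phi) * Math.cos(theta));
                verts[i++] = (float) (radius * Math.cos(phi));
                verts[i++] = (float) (radius * Math.sin(phi) * Math.sin(theta));
            }
        }
        return verts;
    }

    public static void main(String[] args) {
        // Same detail level as the gluSphere call in the question (32 x 32).
        float[] v = sphereVertices(1f, 32, 32);
        System.out.println(v.length); // 3267 floats = 33 * 33 vertices * 3
    }
}
```

Index data to form triangle strips from these positions is still needed for actual rendering, but the point stands: the trigonometry runs once at load time instead of on every gluSphere call.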
