I'm trying to write a real-time raytracer.
I use Java with the JogAmp bindings for OpenGL and OpenCL (called JOGL and JOCL).
I already have the raytracing code in my .cl kernel and it works well. I get the output as a FloatBuffer and pass it to an OpenGL texture via glTexImage2D. Now I want to go real-time, and to achieve this I want to remove the FloatBuffer copy, which currently happens twice in my program (first from the OpenCL kernel result to RAM, and second from RAM to the OpenGL texture). Obviously there should be a way to point the OpenCL buffer directly at the OpenGL texture, since all calculations already run on the GPU.
I know that there is the cl_khr_gl_sharing extension for OpenCL, which does what I want. But I can't figure out how to use it through the JogAmp bindings (JOCL/JOGL). Can somebody help me or give some sample Java code (not C++, which really differs in the details)?
So, after a few days of research I found out how to do it. Posting an answer for anybody who is interested.
In the init method of JOGL's GLEventListener you create the GL context. You must create the CL context in that method too.
My sample code for this:
public void init(GLAutoDrawable drawable) {
    GL4 gl4 = drawable.getGL().getGL4();
    gl4.glDisable(GL4.GL_DEPTH_TEST);
    gl4.glEnable(GL4.GL_CULL_FACE);
    gl4.glCullFace(GL4.GL_BACK);
    buildScreenVAO(gl4);
    FloatBuffer pixelBuffer = GLBuffers.newDirectFloatBuffer(width * height * 4);
    this.textureIndex = GLUtils.initTexture(gl4, width, height, pixelBuffer);
    this.samplerIndex = GLUtils.initSimpleSampler(gl4);
    if (clContext == null) {
        try {
            gl4.glFinish();
            this.clContext = CLGLContext.create(gl4.getContext());
            this.clDevice = clContext.getMaxFlopsDevice();
            //if (device.getExtensions().contains("cl_khr_gl_sharing"))
            this.clCommandQueue = clDevice.createCommandQueue();
            this.clProgram = clContext.createProgram(new FileInputStream(new File(ResourceLocator.getInstance().kernelsPath + "raytracer.cl"))).build(); // load sources, create and build program
            this.clKernel = clProgram.createCLKernel("main");
            this.clTexture = (CLGLTexture2d<FloatBuffer>) clContext.createFromGLTexture2d(GL4.GL_TEXTURE_2D, textureIndex, 0, Mem.WRITE_ONLY);
            this.viewTransform = clContext.createFloatBuffer(16 * 4, Mem.READ_ONLY);
            this.w = clContext.createFloatBuffer(1, Mem.READ_ONLY);
            clKernel.putArg(clTexture).putArg(width).putArg(height).putArg(viewTransform).putArg(w);
            fillViewTransform(viewTransform);
            fillW(w);
            clCommandQueue.putWriteBuffer(viewTransform, false);
            clCommandQueue.putWriteBuffer(w, false);
            clCommandQueue.putAcquireGLObject(clTexture);
            clCommandQueue.put1DRangeKernel(clKernel, 0, width * height, 0);
            clCommandQueue.putReleaseGLObject(clTexture);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
    buildShaderProgram(gl4);
    bindObjects(gl4);
}
The core line is: clContext.createFromGLTexture2d(GL4.GL_TEXTURE_2D, textureIndex, 0, Mem.WRITE_ONLY);
You should create an OpenCL texture object for your previously created OpenGL texture. The code for creating the OpenGL texture:
gl4.glGenTextures(1, indexBuffer);
int textureIndex = indexBuffer.get();
indexBuffer.clear();
gl4.glBindTexture(GL4.GL_TEXTURE_2D, textureIndex);
gl4.glTexParameterf(GL4.GL_TEXTURE_2D, GL4.GL_TEXTURE_MIN_FILTER, GL4.GL_LINEAR);
gl4.glTexParameterf(GL4.GL_TEXTURE_2D, GL4.GL_TEXTURE_MAG_FILTER, GL4.GL_LINEAR);
gl4.glTexImage2D(GL4.GL_TEXTURE_2D, 0, GL4.GL_RGBA32F, width, height, 0, GL4.GL_RGBA, GL4.GL_FLOAT, pixelBuffer); //TODO
gl4.glTexParameteri(GL4.GL_TEXTURE_2D, GL4.GL_TEXTURE_BASE_LEVEL, 0);
gl4.glTexParameteri(GL4.GL_TEXTURE_2D, GL4.GL_TEXTURE_MAX_LEVEL, 0);
int[] swizzle = new int[] { GL4.GL_RED, GL4.GL_GREEN, GL4.GL_BLUE, GL4.GL_ONE };
gl4.glTexParameterIiv(GL4.GL_TEXTURE_2D, GL4.GL_TEXTURE_SWIZZLE_RGBA, swizzle, 0);
gl4.glBindTexture(GL4.GL_TEXTURE_2D, 0);
return textureIndex;
And the last thing: you must use the right data type for the texture argument in your OpenCL kernel. In my case the kernel has the following signature:
kernel void main(write_only image2d_t dst, const uint width, const uint height, global float* viewTransform, global float* w){
and I use the write_imagef built-in OpenCL function to write float data (0.0f - 1.0f) into this texture.
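To actually run it in real time, the same acquire/run/release sequence moves into the display() method and is executed every frame. A minimal sketch, assuming the fields created in init() above (drawScreenQuad() is a hypothetical helper standing in for whatever draws your textured full-screen quad):
public void display(GLAutoDrawable drawable) {
    GL4 gl4 = drawable.getGL().getGL4();

    // Make sure GL is done with the texture before OpenCL touches it.
    gl4.glFinish();

    // Update per-frame kernel inputs (camera matrix etc.).
    fillViewTransform(viewTransform);
    clCommandQueue.putWriteBuffer(viewTransform, false);

    // Let OpenCL take over the shared texture, run the raytracer, give it back.
    clCommandQueue.putAcquireGLObject(clTexture);
    clCommandQueue.put1DRangeKernel(clKernel, 0, width * height, 0);
    clCommandQueue.putReleaseGLObject(clTexture);
    clCommandQueue.finish();

    // Now draw the full-screen quad that samples the shared texture.
    gl4.glClear(GL4.GL_COLOR_BUFFER_BIT);
    drawScreenQuad(gl4); // hypothetical helper: binds the shader program, VAO, sampler and texture
}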
Feel free to ask me about this approach if you are interested.
Related
I am using OpenGL ES 3.2 and the NVIDIA driver 361.00 on a Pixel C tablet with Tegra X1 GPU. I would like to use a compute shader to write data to a colour map and then later I will use some graphics shaders to display the image.
I already have this concept working using desktop GL and now I want to port to mobile. I am implementing the GL in Java rather than in native code. I extend GLSurfaceView and the GLSurfaceView.Renderer and then during the OnSurfaceCreated callback I initialise the shader programs and textures etc.
The compute shader compiles just fine without any errors:
#version 310 es
layout(binding = 0, rgba32f) uniform highp image2D colourMap;
layout(local_size_x = 128, local_size_y = 1, local_size_z = 1) in;
void main()
{
    imageStore(colourMap, ivec2(gl_GlobalInvocationID.xy), vec4(1.0f, 0.0f, 0.0f, 1.0f));
}
And I initialise a texture
// Generate a 2D texture
GLES31.glGenTextures(1, colourMap, 0);
GLES31.glBindTexture(GLES31.GL_TEXTURE_2D, colourMap[0]);
// Set interpolation to linear
GLES31.glTexParameteri(GLES31.GL_TEXTURE_2D, GLES31.GL_TEXTURE_MAG_FILTER, GLES31.GL_LINEAR);
GLES31.glTexParameteri(GLES31.GL_TEXTURE_2D, GLES31.GL_TEXTURE_MIN_FILTER, GLES31.GL_LINEAR);
// Create some dummy texture to begin with so we can see if it changes
float texData[] = new float[texWidth * texHeight * 4];
for (int j = 0; j < texHeight; j++)
{
    for (int i = 0; i < texWidth; i++)
    {
        // Set a few pixels in here...
    }
}
Buffer texDataBuffer = FloatBuffer.wrap(texData);
GLES31.glTexImage2D(GLES31.GL_TEXTURE_2D, 0, GLES31.GL_RGBA32F, texWidth, texHeight, 0, GLES31.GL_RGBA, GLES31.GL_FLOAT, texDataBuffer);
After this I set the image unit in the shader here (EDIT: I don't do this any more, but just assume it will be assigned automatically when the shader program is created, as per solidpixel's answer):
GLES31.glUseProgram(idComputeShaderProgram);
int loc = GLES31.glGetUniformLocation(idComputeShaderProgram, "colourMap");
if (loc == -1) Log.e("Error", "Cannot locate variable");
GLES31.glUniform1i(loc, 0);
After every call to GL I check for errors using GLES31.glGetError() -- left out here for clarity.
EDIT: When I dispatch compute I bind the image texture but first query the unit assignment:
GLES31.glUseProgram(idComputeShaderProgram);
int[] unit = new int[1];
GLES31.glGetUniformiv(idComputeShaderProgram, GLES31.glGetUniformLocation(idComputeShaderProgram, "colourMap"), unit, 0);
GLES31.glBindImageTexture(unit[0], velocityMap[0], 0, false, 0, GLES31.GL_WRITE_ONLY, GLES31.GL_RGBA32F);
This final line is the one which errors now. The error code translates to GL_INVALID_OPERATION. The shader compiles correctly and the program object is valid and active. The location of the variable is also valid. I have even used glGetActiveUniform() to get the type of the variable and it returns a type of 36941 which translates to GL_IMAGE_2D which I believe is an integer.
I still think I'm misunderstanding something here but not sure what.
You can't assign your own unit identities for images. See OpenGL ES 3.2 specification section 7.6.
An INVALID_OPERATION error is generated if any of the following
conditions occur:
an image uniform is loaded with any of the Uniform* commands.
You need to query the automatic unit assignment using glGetUniformiv(prog, loc, &unit) to get the unit name.
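In Java/GLES31 that query-then-bind pattern looks roughly like this (a sketch based on the code in the question; note that glUniform1i is not called on the image uniform at all):
GLES31.glUseProgram(idComputeShaderProgram);

// Ask the GL which image unit the compiler assigned to colourMap...
int[] unit = new int[1];
int loc = GLES31.glGetUniformLocation(idComputeShaderProgram, "colourMap");
GLES31.glGetUniformiv(idComputeShaderProgram, loc, unit, 0);

// ...and bind the texture to exactly that unit, matching the rgba32f layout qualifier.
GLES31.glBindImageTexture(unit[0], colourMap[0], 0, false, 0,
        GLES31.GL_WRITE_ONLY, GLES31.GL_RGBA32F);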
I am a newbie in OpenGL programming. I am making a Java program with OpenGL. I drew many cubes inside. I now want to implement a screenshot function in my program, but I just couldn't make it work. The situation is as follows:
I use an FPSAnimator to refresh my drawable at 60 fps.
I drew dozens of cubes inside my display.
I added a KeyListener to my panel; if I press the Alt key, the program runs the following method:
public static void exportImage() {
    int[] bb = new int[Constants.PanelSize.width * Constants.PanelSize.height * 4];
    IntBuffer ib = IntBuffer.wrap(bb);
    ib.position(0);
    Constants.gl.glPixelStorei(GL2.GL_UNPACK_ALIGNMENT, 1);
    Constants.gl.glReadPixels(0, 0, Constants.PanelSize.width, Constants.PanelSize.height, GL2.GL_RGBA, GL2.GL_UNSIGNED_BYTE, ib);
    System.out.println(Constants.gl.glGetError());
    ImageExport.savePixelsToPNG(bb, Constants.PanelSize.width, Constants.PanelSize.height, "imageFilename.png");
}
// Constants is a class in which I store all my global variables as static fields
The output in the console was 0, which means no errors. But when I printed the contents of the buffer, they were all zeros.
I checked the output file and it was only 1 kB.
What should I do? Are there any good suggestions for exporting the screen contents to an image file using OpenGL? I heard that there are several libraries available, but I don't know which one is suitable. Any help is appreciated T_T (please forgive me if I have any grammatical mistakes...)
You can do something like this, supposing you are drawing to the default framebuffer:
protected void saveImage(GL4 gl4, int width, int height) {
    try {
        BufferedImage screenshot = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        Graphics graphics = screenshot.getGraphics();
        ByteBuffer buffer = GLBuffers.newDirectByteBuffer(width * height * 4);
        gl4.glReadBuffer(GL_BACK);
        gl4.glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
        for (int h = 0; h < height; h++) {
            for (int w = 0; w < width; w++) {
                graphics.setColor(new Color((buffer.get() & 0xff), (buffer.get() & 0xff),
                        (buffer.get() & 0xff)));
                buffer.get(); // skip the alpha byte
                graphics.drawRect(w, height - h - 1, 1, 1); // flip vertically: GL returns rows bottom-up
            }
        }
        BufferUtils.destroyDirectBuffer(buffer);
        File outputfile = new File("D:\\Downloads\\texture.png");
        ImageIO.write(screenshot, "png", outputfile);
    } catch (IOException ex) {
        Logger.getLogger(EC_DepthPeeling.class.getName()).log(Level.SEVERE, null, ex);
    }
}
Essentially you create a BufferedImage and a direct buffer. Then you use Graphics to render the content of the back buffer pixel by pixel into the BufferedImage.
You need the additional buffer.get() because it reads (and skips) the alpha value, and you need height - h - 1 to flip the image, since glReadPixels returns the rows bottom-up.
Edit: of course, you need to read the pixels at a point when what you are looking for has actually been rendered.
You have several options:
trigger a boolean variable and call it directly from the display method, at the end, when everything you wanted has been rendered (see the sketch after this list)
disable the automatic buffer swapping, call from the key listener the display() method, read the back buffer and enable the swapping again
call from the key listener the same code you would call in the display
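A minimal sketch of the first option (the takeScreenshot field and the renderScene() helper are just placeholders):
private volatile boolean takeScreenshot = false; // set to true from the KeyListener

@Override
public void display(GLAutoDrawable drawable) {
    GL4 gl4 = drawable.getGL().getGL4();
    renderScene(gl4); // whatever you normally draw

    // Everything for this frame has been rendered, so reading back is safe here.
    if (takeScreenshot) {
        takeScreenshot = false;
        saveImage(gl4, drawable.getSurfaceWidth(), drawable.getSurfaceHeight());
    }
}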
You could use the Robot class to take a screenshot:
BufferedImage screenshot = new Robot().createScreenCapture(new Rectangle(Toolkit.getDefaultToolkit().getScreenSize()));
ImageIO.write(screenshot, "png", new File("screenshot.png"));
There are two things to consider:
You take the screenshot from the screen, so you could determine where the coordinates of your viewport are and capture only the part of interest.
Something can sit on top of your viewport (another window), so the viewport could be hidden by it; this is unlikely to occur, but it can happen.
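If you only want the GL viewport, the first point can look roughly like this (just a sketch; glCanvas stands for whatever AWT/Swing component you render into):
// Capture only the on-screen rectangle occupied by the GL component.
Point corner = glCanvas.getLocationOnScreen();
Rectangle viewport = new Rectangle(corner.x, corner.y, glCanvas.getWidth(), glCanvas.getHeight());
BufferedImage screenshot = new Robot().createScreenCapture(viewport);
ImageIO.write(screenshot, "png", new File("screenshot.png"));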
When you use buffers with LWJGL, they almost always need to be directly allocated. The OpenGL library doesn't really understand how to interface with Java Arrays™, and in order for the underlying memory operations to work, they need to be applied on natively-allocated (or, in this context, directly allocated) memory.
If you're using LWJGL 3.x, that's pretty simple:
//Check the math, because for an image array, given that Ints are 4 bytes, I think you can just allocate without multiplying by 4.
IntBuffer ib = org.lwjgl.BufferUtils.createIntBuffer(Constants.PanelSize.width * Constants.PanelSize.height);
And if that function isn't available, this should suffice:
//Here you actually *do* have to multiply by 4.
IntBuffer ib = java.nio.ByteBuffer.allocateDirect(Constants.PanelSize.width * Constants.PanelSize.height * 4).asIntBuffer();
And then you do your normal code:
Constants.gl.glPixelStorei(GL2.GL_UNPACK_ALIGNMENT, 1);
Constants.gl.glReadPixels(0, 0, Constants.PanelSize.width, Constants.PanelSize.height, GL2.GL_RGBA, GL2.GL_UNSIGNED_BYTE, ib);
System.out.println(Constants.gl.glGetError());
int[] bb = new int[Constants.PanelSize.width * Constants.PanelSize.height];
ib.get(bb); //Stores the contents of the buffer into the int array.
ImageExport.savePixelsToPNG(bb, Constants.PanelSize.width, Constants.PanelSize.height, "imageFilename.png");
I am working on a module in which I have to make the background of a bitmap image transparent. Actually, I am making an app like "Stick it", through which we can make a sticker out of any image. I don't know where to begin.
Can someone give me a link or a hint for it?
Original Image-
After making background transparent-
This is what I want.
I can only provide some hints on how to approach your problem. You need to do image segmentation. This can be achieved via the k-means algorithm or similar clustering algorithms. See this for algorithms on image segmentation via clustering and this for a Java code example. The computation of the clustering can be very time consuming on a mobile device.

Once you have the clustering you can use this approach to distinguish between the background and the foreground. In general, all your pictures should have a background color which differs strongly from the foreground, otherwise it is not possible for the clustering to distinguish between them. It can also happen that a pixel inside your foreground is assigned to the background cluster because it has a color similar to your background. To prevent this from happening you could use this approach or a region-growing algorithm. Afterwards you can let your user select clusters via touch and remove them. I also had the same problems with my Android app. This will give you a good start, and once you have implemented the clustering you just need to tweak the k parameter to get good results.
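To make the idea concrete, here is a minimal sketch of k-means over the pixel colours. The class and method names are made up, and the "background is whatever cluster dominates the image border" heuristic is my own simplification; as described above, in practice you would rather let the user pick the clusters to delete (and run this on a downsampled copy, since it is slow on full-size photos).
import android.graphics.Bitmap;
import android.graphics.Color;

public class BackgroundRemover {

    public static Bitmap removeBackground(Bitmap src, int k, int iterations) {
        int w = src.getWidth(), h = src.getHeight();
        int[] px = new int[w * h];
        src.getPixels(px, 0, w, 0, 0, w, h);

        // Initialise cluster centres from evenly spaced pixels.
        float[][] centres = new float[k][3];
        for (int c = 0; c < k; c++) {
            int p = px[(c * (px.length - 1)) / Math.max(1, k - 1)];
            centres[c][0] = Color.red(p);
            centres[c][1] = Color.green(p);
            centres[c][2] = Color.blue(p);
        }

        int[] assign = new int[px.length];
        for (int it = 0; it < iterations; it++) {
            // Assignment step: each pixel joins the nearest centre in RGB space.
            for (int i = 0; i < px.length; i++) {
                assign[i] = nearest(px[i], centres);
            }
            // Update step: recompute each centre as the mean of its members.
            float[][] sum = new float[k][3];
            int[] count = new int[k];
            for (int i = 0; i < px.length; i++) {
                int c = assign[i];
                sum[c][0] += Color.red(px[i]);
                sum[c][1] += Color.green(px[i]);
                sum[c][2] += Color.blue(px[i]);
                count[c]++;
            }
            for (int c = 0; c < k; c++) {
                if (count[c] > 0) {
                    centres[c][0] = sum[c][0] / count[c];
                    centres[c][1] = sum[c][1] / count[c];
                    centres[c][2] = sum[c][2] / count[c];
                }
            }
        }

        // Guess the background: the cluster most common along the image border.
        int[] borderCount = new int[k];
        for (int x = 0; x < w; x++) {
            borderCount[assign[x]]++;                 // top row
            borderCount[assign[(h - 1) * w + x]]++;   // bottom row
        }
        for (int y = 0; y < h; y++) {
            borderCount[assign[y * w]]++;             // left column
            borderCount[assign[y * w + w - 1]]++;     // right column
        }
        int bg = 0;
        for (int c = 1; c < k; c++) if (borderCount[c] > borderCount[bg]) bg = c;

        // Clear the alpha of all background pixels.
        for (int i = 0; i < px.length; i++) {
            if (assign[i] == bg) px[i] &= 0x00FFFFFF;
        }
        Bitmap out = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
        out.setPixels(px, 0, w, 0, 0, w, h);
        return out;
    }

    private static int nearest(int pixel, float[][] centres) {
        int best = 0;
        float bestDist = Float.MAX_VALUE;
        for (int c = 0; c < centres.length; c++) {
            float dr = Color.red(pixel) - centres[c][0];
            float dg = Color.green(pixel) - centres[c][1];
            float db = Color.blue(pixel) - centres[c][2];
            float d = dr * dr + dg * dg + db * db;
            if (d < bestDist) { bestDist = d; best = c; }
        }
        return best;
    }
}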
Seems like a daunting task. If you are talking about image processing, if I understand correctly, then you can try https://developers.google.com/appengine/docs/java/images/
Also, if you want to mask the entire background (I have not tried Stick it), the application needs to understand the background image map. Please provide some examples so that I can come up with more definitive answers.
One possibility would be to utilize the floodfill operation in the openCV library. There are lots of examples and tutorials on how to do similar stuff to what you want and OpenCV has been ported to Android. The relevant terms to Google are of course "openCV" and "floodfill".
For this kind of task (and app) you'll have to use OpenGL. Usually when working with OpenGL you base your fragment shader on modules you build in Matlab. Once you have the fragment shader, it's quite easy to apply it to an image. Check this guide on how to do it.
Here's a link to remove the background from an image in Matlab.
I'm not fully familiar with Matlab and whether it can generate the GLSL code (the fragment shader) by itself. But even if it doesn't, you might want to learn GLSL yourself, because frankly, you are trying to build a graphics app, the Android SDK is somewhat short when it comes to image manipulation, and most importantly, without a strong hardware-acceleration engine behind it I cannot see it working smoothly enough.
Once you have the figure image, you can apply it to a transparent background easily like this:
Canvas canvas = new Canvas(canvasBitmap);
canvas.drawColor(Color.TRANSPARENT);
BitmapDrawable bd = (BitmapDrawable) getResources().getDrawable(R.drawable.loading);
Bitmap yourBitmap = bd.getBitmap();
Paint paint = new Paint();
canvas.drawBitmap(yourBitmap, 0, 0, paint);
Bitmap newBitmap = Bitmap.createBitmap(image.getWidth(), image.getHeight(),image.getConfig());
Canvas canvas = new Canvas(newBitmap);
canvas.drawColor(Color.TRANSPARENT);
canvas.drawBitmap(image, 0, 0, null);
OR
See this
Hope this helps you.
If you are working on Android you might need a Buffer to get the pixels from the image - it's an IntBuffer and it reduces the memory usage enormously... To get data from and store data into the Buffer you have three methods (you can skip this part if you don't have 'large' images):
private IntBuffer buffer;

public void copyImageIntoBuffer(File imgSource) {
    final Bitmap temp = BitmapFactory.decodeFile(imgSource.getAbsolutePath());
    buffer.rewind();
    temp.copyPixelsToBuffer(buffer);
}

protected void copyBufferIntoImage(File tempFile) throws IOException {
    buffer.rewind();
    Bitmap temp = Bitmap.createBitmap(imgWidth, imgHeight, Config.ARGB_8888);
    temp.copyPixelsFromBuffer(buffer);
    FileOutputStream out = new FileOutputStream(tempFile);
    temp.compress(Bitmap.CompressFormat.JPEG, 90, out);
    out.flush();
    out.close();
}

public void mapBuffer(final File tempFile, long size) throws IOException {
    RandomAccessFile aFile = new RandomAccessFile(tempFile, "rw");
    aFile.setLength(4 * size); // 4 bytes per int
    FileChannel fc = aFile.getChannel();
    buffer = fc.map(FileChannel.MapMode.READ_WRITE, 0, fc.size()).asIntBuffer();
}
Now you can use the Buffer to get the pixels and modify them as desired... (I've copied a code snippet that used a progress bar in my UI and therefore needs a Handler/ProgressBar. When I did this I was working on bigger images and implemented an image filter (Gauss filter, grey filter, etc.), so just delete what is not needed.)
public void run(final ProgressBar bar, IntBuffer buffer, Handler mHandler, int imgWidth, int imgHeight, int transparentColor) {
    for (int dy = 0; dy < imgHeight; dy++) {
        final int progress = (dy * 100) / imgHeight;
        for (int dx = 0; dx < imgWidth; dx++) {
            int px = buffer.get();
            //int a = (0xFF000000 & px);
            //int r = (0x00FF0000 & px) >> 16;
            //int g = (0x0000FF00 & px) >> 8;
            //int b = (0x000000FF & px);
            // clear the alpha byte to make this pixel fully transparent
            if (px == transparentColor) {
                px = px & 0x00FFFFFF;
            }
            //r = mid << 16;
            //g = mid << 8;
            //b = mid;
            //int col = a | r | g | b;
            int pos = buffer.position();
            buffer.put(pos - 1, px);
        }
        // Update the progress bar
        mHandler.post(new Runnable() {
            public void run() {
                bar.setProgress(progress);
            }
        });
    }
}
If you really have small images, you can get the pixels directly during onCreate(), or even better create a Buffer (maybe a HashMap or a List) before you start the Activity...
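For small images, that direct route is only a few lines; a minimal sketch, reusing the imgSource file and the transparentColor idea from the snippets above:
// Small images: read all pixels into an int[] at once, edit them, write them back.
Bitmap src = BitmapFactory.decodeFile(imgSource.getAbsolutePath());
Bitmap bmp = src.copy(Bitmap.Config.ARGB_8888, true); // mutable ARGB copy
int[] pixels = new int[bmp.getWidth() * bmp.getHeight()];
bmp.getPixels(pixels, 0, bmp.getWidth(), 0, 0, bmp.getWidth(), bmp.getHeight());
for (int i = 0; i < pixels.length; i++) {
    if (pixels[i] == transparentColor) {
        pixels[i] &= 0x00FFFFFF; // clear the alpha byte
    }
}
bmp.setPixels(pixels, 0, bmp.getWidth(), 0, 0, bmp.getWidth(), bmp.getHeight());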
I am a relative newbie in game programming. I know how to draw pixels to a BufferedImage using setPixel(). It is horribly slow on larger formats so I moved on and found VolatileImage (took me a week or so). It is fairly easy to draw lines, strings, rects, etc but I can't draw individual pixels. I already tried using drawLine(x,y,x,y) but I get 3-4 FPS on an 800x600 image.
The fact that Java didn't include setPixel() or setRGB() in VolatileImage makes me pretty angry and confused.
I have 4 questions:
Is there a way to draw individual pixels on a VolatileImage? (on 1440x900 formats with FPS > 40)
Can I draw pixels in a BufferedImage with a faster method? (same 1440x900, FPS > 40)
Is there any other way to draw pixels fast enough for 3D games?
Can I make my BufferedImage hardware accelerated (I tried using setAccelerationPriority(1F) but it doesn't work)?
Please, if you have any idea, tell me. I can't continue making my game without this information. I have already made 3D rendering algorithms, but I need to be able to draw pixels fast. I have a good feeling about this game.
Here's the code if it can help you help me:
public static void drawImageRendered(int x, int y, int w, int h) { // This is just a method to test the performance
    int a[] = new int[3]; // The array containing R, G and B value for each pixel
    bImg = Launcher.contObj.getGraphicsConfiguration().createCompatibleImage(800, 600); // Creates a compatible image for the JPanel object i am working with (800x600)
    bImg.setAccelerationPriority(1F); // I am trying to get this image accelerated
    WritableRaster wr = bImg.getRaster(); // The image's writable raster
    for (int i = 0; i < bImg.getWidth(); i++) {
        for (int j = 0; j < bImg.getHeight(); j++) {
            a[0] = i % 256;
            a[2] = j % 256;
            a[1] = (j * i) % 256;
            wr.setPixel(i, j, a); // Sets the pixels (You get a nice pattern)
        }
    }
    g.drawImage(bImg, x, y, w, h, null);
}
I would much prefer not using OpenGL or any other external libraries, just plain Java.
Well, you're basically drawing one pixel after the other using the CPU. There's no way that this can be accelerated, so such a method simply does not make sense for a VolatileImage. The low FPS you get suggests that this even causes significant overhead, as each pixel drawing operation is sent to the graphics card (with information such as location and colour), which takes longer than modifying 3 or 4 bytes of RAM.
I suggest either not drawing each pixel separately, or figuring out a way to make your drawing algorithm run directly on the graphics card (which most likely requires another language than Java).
It's been over 4 years since this post got an answer. I was looking for an answer to this question as well and stumbled on this post. After some more searching, I got it to work. Below I'll post the source to rendering pixels with a VolatileImage.
It seems Java hides our ability to plot pixels directly to a VolatileImage, but we can draw BufferedImages to it, and for good reason: using software to plot a pixel doesn't really help with acceleration (in Java, it seems). If you can plot pixels to a BufferedImage and then render it to a VolatileImage, you may get a speed bonus, since it's hardware accelerated from that point on.
The source below is a self-contained example. You can copy practically all of it into your project and run it.
https://github.com/Miekpeeps/JavaSnippets-repo/blob/master/src/graphics_rendering/pixels_03/PlottingVolatile.java
In the constructor I save the Graphics environment of the app/game.
private GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment();
private GraphicsConfiguration gc = ge.getDefaultScreenDevice().getDefaultConfiguration();
Then, when I call a method to enable hardware acceleration, we create the buffer. I set the transparency to opaque; in my little engine I deal with transparency/alpha blending on another thread in the pipeline.
public void setHardwareAcceleration(Boolean hw)
{
    useHW = hw;
    if (hw)
    {
        vbuffer = gc.createCompatibleVolatileImage(width, height, Transparency.OPAQUE);
        System.setProperty("sun.java2d.opengl", hw.toString()); // may not be needed.
    }
}
For each frame I update, I get the Graphics from the VolatileImage and render my buffer there. Nothing gets rendered if I don't flush().
@Override
public void paintComponent(Graphics g)
{
    if (useHW)
    {
        g = vbuffer.getGraphics();
        g.drawImage(buffer, 0, 0, null);
        vbuffer.flush();
    }
    else
    {
        g.drawImage(buffer, 0, 0, null);
        buffer.flush();
    }
}
There is still a little bit of overhead when plotting a pixel on the BufferedImage's writable raster. But when we update the screen, we get a speed boost by using the VolatileImage instead of the BufferedImage.
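If those per-pixel raster calls ever become the bottleneck, a common trick (just a sketch, not part of the snippet above, and it assumes buffer is a TYPE_INT_RGB or TYPE_INT_ARGB BufferedImage) is to write straight into the int[] that backs it:
// Grab the backing array once; writes to it are writes to the image.
int[] data = ((java.awt.image.DataBufferInt) buffer.getRaster().getDataBuffer()).getData();
int width = buffer.getWidth();
for (int y = 0; y < buffer.getHeight(); y++) {
    for (int x = 0; x < width; x++) {
        data[y * width + x] = 0xFF000000 | ((x & 0xFF) << 16) | ((y & 0xFF) << 8); // example pattern
    }
}
Note that grabbing the data buffer this way tends to make that particular BufferedImage unmanaged, which is usually acceptable here since it is blitted to the VolatileImage every frame anyway.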
Hope this helps some folks out. Cheers.
I'm currently trying to generate a textured polygonal surface using JOGL, and I'm getting an error message I don't understand. Eclipse tells me "java.lang.IndexOutOfBoundsException: Required 430233 remaining bytes in buffer, only had 428349". As far as I can see, the buffered image generated by the readTexture method is not of sufficient size to be used with glTexImage2D(). However, I'm not sure how to go about resolving the issue. The relevant sections of code are below, and any help would be much appreciated.
public void init(GLAutoDrawable drawable)
{
    final GL2 gl = drawable.getGL().getGL2();
    GLU glu = GLU.createGLU();
    //Create the glu object which allows access to the GLU library

    gl.glShadeModel(GL2.GL_SMOOTH);           // Enable Smooth Shading
    gl.glClearColor(0.0f, 0.0f, 0.0f, 0.5f);  // Black Background
    gl.glClearDepth(1.0f);                    // Depth Buffer Setup
    gl.glEnable(GL.GL_DEPTH_TEST);            // Enables Depth Testing
    gl.glDepthFunc(GL.GL_LEQUAL);             // The Type Of Depth Testing To Do
    gl.glEnable(GL.GL_TEXTURE_2D);

    texture = genTexture(gl);
    gl.glBindTexture(GL.GL_TEXTURE_2D, texture);

    TextureReader.Texture texture = null;
    try {
        texture = TextureReader.readTexture("/C:/Users/Alex/Desktop/boy_reaching_up_for_goalpost_stencil.png");
    } catch (IOException e) {
        e.printStackTrace();
        throw new RuntimeException(e);
    }

    makeRGBTexture(gl, glu, texture, GL.GL_TEXTURE_2D, false);
    gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR);
    gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR);
}

private void makeRGBTexture(GL gl, GLU glu, TextureReader.Texture img,
        int target, boolean mipmapped) {
    if (mipmapped) {
        glu.gluBuild2DMipmaps(target, GL.GL_RGB8, img.getWidth(),
                img.getHeight(), GL.GL_RGB, GL.GL_UNSIGNED_BYTE, img.getPixels());
    } else {
        gl.glTexImage2D(target, 0, GL.GL_RGB, img.getWidth(),
                img.getHeight(), 0, GL.GL_RGB, GL.GL_UNSIGNED_BYTE, img.getPixels());
    }
}

private int genTexture(GL gl) {
    final int[] tmp = new int[1];
    gl.glGenTextures(1, tmp, 0);
    return tmp[0];
}

//Within the TextureReader class
public static Texture readTexture(String filename, boolean storeAlphaChannel)
        throws IOException {
    BufferedImage bufferedImage;
    if (filename.endsWith(".bmp")) {
        bufferedImage = BitmapLoader.loadBitmap(filename);
    } else {
        bufferedImage = readImage(filename);
    }
    return readPixels(bufferedImage, storeAlphaChannel);
}
The error is being generated by the call to glTexImage2D() inside the makeRGBTexture() method.
By default, the GL expects that each line of an image starts at a memory address divisible by 4 (4-byte alignment). With RGBA images, this is always the case (as long as the first pixel is correctly aligned). But with RGB images, this will only be the case when the width is also divisible by 4. Note that this is totally unrelated to the "power of two" requirements of very old GPUs.
With your particular image resolution of 227x629, you get 681 bytes per line, so the GL expects 3 additional padding bytes per line. For 629 lines, this makes 1887 extra bytes. If you look at the numbers, you can see that the buffer is just 1884 bytes too small. The difference of 3 is due to the fact that the 3 padding bytes at the end of the last line are not needed: there is no next line to be started, and the GL won't read beyond the end of the data.
So you have two options here: align the image data the way the GL expects it (that is, pad every line with some extra bytes), or - the simpler approach from the user's point of view - just tell the GL that your data is tightly packed (1-byte alignment) by calling glPixelStorei(GL_UNPACK_ALIGNMENT, 1) before you specify the image data.
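In JOGL terms that is a single extra call before the upload, e.g. in the non-mipmapped branch of makeRGBTexture() above (a sketch; it assumes the GL_UNPACK_ALIGNMENT constant is visible on your GL profile):
// Tell the GL the rows are tightly packed (1-byte alignment) before uploading.
gl.glPixelStorei(GL.GL_UNPACK_ALIGNMENT, 1);
gl.glTexImage2D(target, 0, GL.GL_RGB, img.getWidth(), img.getHeight(),
        0, GL.GL_RGB, GL.GL_UNSIGNED_BYTE, img.getPixels());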