I am a newbie in OpenGL programming. I am writing a Java program with OpenGL in which I drew many cubes. I now want to implement a screenshot function in my program, but I just couldn't make it work. The situation is as follows:
I use an FPSAnimator to refresh my drawable at 60 FPS.
I draw dozens of cubes inside my display.
I added a KeyListener to my panel; when I press the Alt key, the program runs the following method:
public static void exportImage() {
    int[] bb = new int[Constants.PanelSize.width * Constants.PanelSize.height * 4];
    IntBuffer ib = IntBuffer.wrap(bb);
    ib.position(0);
    Constants.gl.glPixelStorei(GL2.GL_UNPACK_ALIGNMENT, 1);
    Constants.gl.glReadPixels(0, 0, Constants.PanelSize.width, Constants.PanelSize.height,
            GL2.GL_RGBA, GL2.GL_UNSIGNED_BYTE, ib);
    System.out.println(Constants.gl.glGetError());
    ImageExport.savePixelsToPNG(bb, Constants.PanelSize.width, Constants.PanelSize.height,
            "imageFilename.png");
}
// Constants is a class in which I store all my global variables as static fields
The output in the console was 0, which means no errors. But when I printed the contents of the buffer, they were all zeros, and the output file was only 1 kB.
What should I do? Are there any good suggestions for exporting the screen contents to an image file using OpenGL? I heard that there are several libraries available, but I don't know which one is suitable. Any help is appreciated T_T (plz forgive me if I have any grammatical mistakes ...)
You can do something like this, supposing you are drawing to the default framebuffer:
protected void saveImage(GL4 gl4, int width, int height) {
    try {
        BufferedImage screenshot = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        Graphics graphics = screenshot.getGraphics();

        ByteBuffer buffer = GLBuffers.newDirectByteBuffer(width * height * 4);

        gl4.glReadBuffer(GL_BACK);
        gl4.glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

        for (int h = 0; h < height; h++) {
            for (int w = 0; w < width; w++) {
                // RGBA: read the three colour bytes, then skip the alpha byte below
                graphics.setColor(new Color((buffer.get() & 0xff),
                        (buffer.get() & 0xff), (buffer.get() & 0xff)));
                buffer.get(); // skip alpha
                // glReadPixels starts at the bottom row, so flip vertically
                graphics.drawRect(w, height - 1 - h, 1, 1);
            }
        }

        BufferUtils.destroyDirectBuffer(buffer);

        File outputfile = new File("D:\\Downloads\\texture.png");
        ImageIO.write(screenshot, "png", outputfile);
    } catch (IOException ex) {
        Logger.getLogger(EC_DepthPeeling.class.getName()).log(Level.SEVERE, null, ex);
    }
}
Essentially you create a BufferedImage and a direct ByteBuffer, then use Graphics to copy the content of the back buffer pixel by pixel into the BufferedImage.
You need the additional buffer.get() because the fourth byte of each pixel is its alpha value, and you need the height-based offset when plotting because glReadPixels returns rows bottom-up, so the image must be flipped vertically.
Edit: of course you need to read the buffer at a moment when it actually contains what you are looking for.
You have several options:
set a boolean flag and read the pixels directly from the display method, at the end, once everything you wanted has been rendered
disable automatic buffer swapping, call display() from the key listener, read the back buffer, and re-enable swapping
call the same code you would call in display() from the key listener
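The first option (a trigger flag read at the end of display()) can be sketched in plain Java, independent of the JOGL types; the class and method names here are hypothetical stand-ins, with a counter standing in for the actual glReadPixels-and-save call:

```java
// Sketch of the "trigger flag" pattern: the key listener only sets a flag;
// the actual read happens at the end of display(), once the frame is rendered.
public class ScreenshotTrigger {
    private volatile boolean screenshotRequested = false;
    private int capturedFrames = 0; // stands in for the real glReadPixels + save

    // called from the KeyListener thread
    public void requestScreenshot() {
        screenshotRequested = true;
    }

    // called once per frame; the check sits at the end, after all drawing
    public void display() {
        // ... render scene here ...
        if (screenshotRequested) {
            screenshotRequested = false;
            capturedFrames++; // replace with glReadPixels + image export
        }
    }

    public int getCapturedFrames() {
        return capturedFrames;
    }
}
```

The flag is volatile because the key listener and the rendering loop run on different threads.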
You could use the Robot class to take a screenshot:
BufferedImage screenshot = new Robot().createScreenCapture(new Rectangle(Toolkit.getDefaultToolkit().getScreenSize()));
ImageIO.write(screenshot, "png", new File("screenshot.png"));
There are two things to consider:
You take the screenshot from the screen, so you should determine the coordinates of your viewport and capture only the part of interest.
Something (for example another window) can sit on top of your viewport, hiding it. This is unlikely to occur, but it can happen.
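For the first point, the capture rectangle can be computed from the panel's on-screen position (in a real app you would get it from component.getLocationOnScreen() and getSize()); this is a minimal sketch using only AWT geometry, with hypothetical names:

```java
import java.awt.Point;
import java.awt.Rectangle;

public class CaptureArea {
    // Builds the screen-space rectangle to pass to Robot.createScreenCapture,
    // clipped against the screen bounds so the capture never reaches outside.
    public static Rectangle viewportRect(Point panelOnScreen, int panelWidth,
                                         int panelHeight, Rectangle screenBounds) {
        Rectangle panel = new Rectangle(panelOnScreen.x, panelOnScreen.y,
                                        panelWidth, panelHeight);
        // intersection() returns an empty rectangle if the panel is off-screen
        return panel.intersection(screenBounds);
    }
}
```

You would then pass the returned Rectangle to new Robot().createScreenCapture(...) instead of the whole screen size.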
When you use buffers with LWJGL, they almost always need to be directly allocated. The OpenGL library doesn't really understand how to interface with Java Arrays™, and in order for the underlying memory operations to work, they need to be applied on natively-allocated (or, in this context, directly allocated) memory.
If you're using LWJGL 3.x, that's pretty simple:
//Check the math, because for an image array, given that Ints are 4 bytes, I think you can just allocate without multiplying by 4.
IntBuffer ib = org.lwjgl.BufferUtils.createIntBuffer(Constants.PanelSize.width * Constants.PanelSize.height);
And if that function isn't available, this should suffice:
//Here you actually *do* have to multiply by 4.
IntBuffer ib = java.nio.ByteBuffer.allocateDirect(Constants.PanelSize.width * Constants.PanelSize.height * 4).asIntBuffer();
And then you do your normal code:
Constants.gl.glPixelStorei(GL2.GL_UNPACK_ALIGNMENT, 1);
Constants.gl.glReadPixels(0, 0, Constants.PanelSize.width, Constants.PanelSize.height, GL2.GL_RGBA, GL2.GL_UNSIGNED_BYTE, ib);
System.out.println(Constants.gl.glGetError());
int[] bb = new int[Constants.PanelSize.width * Constants.PanelSize.height];
ib.get(bb); //Stores the contents of the buffer into the int array.
ImageExport.savePixelsToPNG(bb, Constants.PanelSize.width, Constants.PanelSize.height, "imageFilename.png");
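One caveat with this: glReadPixels fills the buffer starting from the bottom-left corner, so the rows arrive bottom-up. If your savePixelsToPNG expects top-down rows, you may need a vertical flip first; a minimal sketch (PixelFlip is a hypothetical helper):

```java
public class PixelFlip {
    // glReadPixels returns rows bottom-up; most image writers expect
    // top-down, so copy row i of the source into row (height - 1 - i).
    public static int[] flipVertically(int[] pixels, int width, int height) {
        int[] flipped = new int[pixels.length];
        for (int row = 0; row < height; row++) {
            System.arraycopy(pixels, row * width,
                             flipped, (height - 1 - row) * width, width);
        }
        return flipped;
    }
}
```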
What I want to do is very simple : I have an image which is basically a single color image with alpha.
EDIT: since my post was tagged as a duplicate at light speed, here are the main differences from the other post:
The other topic's image has several colors; mine has only one.
The answer accepted there is the one I implemented... and I'm saying I have an issue with it because it's too slow.
It means that all pixels are either :
White (255;255;255) and Transparent (0 alpha)
or Black (0;0;0) and Opaque (alpha between 0 and 255)
Alpha is not always the same, I have different shades of black.
My image is produced with Photoshop with the following settings :
Mode : RGB color, 8 Bits/Channel
Format : PNG
Compression : Smallest / Slow
Interlace : None
Considering it has only one color, I suppose I could use other modes (such as Grayscale, maybe?), so tell me if you have suggestions about that.
What I want to do is REPLACE the Black color by another color in my java application.
Reading the other topics it seems that changing the ColorModel is the good thing to do, but I'm honestly totally lost on how to do it correctly.
Here is what I did :
public Test() {
    BufferedImage image = null, newImage;
    IndexColorModel newModel;
    WritableRaster newRaster;
    try {
        image = ImageIO.read(new File("E:\\MyImage.png"));
    } catch (IOException e) {
        e.printStackTrace();
    }
    newModel = createColorModel();
    newRaster = newModel.createCompatibleWritableRaster(image.getWidth(), image.getHeight());
    newImage = new BufferedImage(newModel, newRaster, false, null);
    newImage.getGraphics().drawImage(image, 0, 0, null);
}

private IndexColorModel createColorModel() {
    int size = 256;
    byte[] r = new byte[size];
    byte[] g = new byte[size];
    byte[] b = new byte[size];
    byte[] a = new byte[size];
    for (int i = 0; i < size; i++) {
        r[i] = (byte) 21;
        g[i] = (byte) 0;
        b[i] = (byte) 149;
        a[i] = (byte) i;
    }
    return new IndexColorModel(16, size, r, g, b, a);
}
This produces the expected result: newImage is the same as image, but with the new color (21;0;149).
But there is a flaw : the following line is too slow :
newImage.getGraphics().drawImage(image, 0, 0, null);
Doing this on a big image can take up to a few seconds, while I need this to be instantaneous.
I'm pretty sure I'm not doing this the right way.
Could you tell me how to achieve this goal efficiently?
Please consider that my image will always be single color with alpha, so suggestions about image format are welcomed.
If your input image is always in palette + alpha mode, using an IndexColorModel compatible with the one from your createColorModel(), you should be able to do:
BufferedImage image ...; // As before
IndexColorModel newModel = createColorModel();
BufferedImage newImage = new BufferedImage(newModel, image.getRaster(), newModel.isAlphaPremultiplied(), null);
This will create a new image with the new color model, re-using the raster from the original. No large data arrays are created and no drawing is required, so it should be very fast. But note that image and newImage will now share a backing buffer, so any changes drawn into one of them will be reflected in the other (probably not a problem in your case).
PS: You might need to tweak your createColorModel() method to get the exact result you want, but as I don't have your input file, I can't verify whether it works or not.
I am working on a module in which I have to make the background of a bitmap image transparent. I am making an app like "Stick it", through which you can make a sticker out of any image. I don't know where to begin.
Can someone give me a link or a hint?
Original Image-
After making background transparent-
This is what I want.
I can only provide some hints on how to approach your problem. You need to do image segmentation. This can be achieved via the k-means algorithm or similar clustering algorithms. See this for algorithms on image segmentation via clustering and this for a Java code example. The computation of the clustering can be very time-consuming on a mobile device. Once you have the clustering, you can use this approach to distinguish between the background and the foreground. In general, all your pictures should have a background color which differs strongly from the foreground, otherwise it is not possible for the clustering to distinguish between them. It can also happen that a pixel inside your foreground is assigned to the background cluster because it has a similar color to your background. To prevent this from happening, you could use this approach or a region-growth algorithm. Afterwards you can let your user select the clusters via touch and remove them. I also had the same problems with my Android app. This should give you a good start, and once you have implemented the clustering you just need to tweak the k parameter to get good results.
Seems like a daunting task. If it is image processing you are after, you can try https://developers.google.com/appengine/docs/java/images/
Also, if you want to mask the entire background (I have not tried Stick it), the application needs to understand the background image map. Please provide some examples so that I can come up with a more definitive answer.
One possibility would be to utilize the floodfill operation in the openCV library. There are lots of examples and tutorials on how to do similar stuff to what you want and OpenCV has been ported to Android. The relevant terms to Google are of course "openCV" and "floodfill".
For this kind of task (and app) you'll have to use OpenGL. Usually when working with OpenGL you base your fragment shader on modules you build in MATLAB. Once you have the fragment shader, it's quite easy to apply it to an image; check this guide on how to do it.
Here's a link to removing the background from an image in MATLAB.
I'm not fully familiar with MATLAB or whether it can generate the GLSL code (the fragment shader) by itself. But even if it doesn't, you might want to learn GLSL yourself, because frankly you are trying to build a graphics app, and the Android SDK comes up somewhat short for image manipulation. Most importantly, without a strong hardware-acceleration engine behind it, I cannot see it working smoothly enough.
Once you have the figure image, you can apply it to a transparent background easily, like this:
Canvas canvas = new Canvas(canvasBitmap);
canvas.drawColor(Color.TRANSPARENT);
BitmapDrawable bd = (BitmapDrawable) getResources().getDrawable(R.drawable.loading);
Bitmap yourBitmap = bd.getBitmap();
Paint paint = new Paint();
canvas.drawBitmap(yourBitmap, 0, 0, paint);
Bitmap newBitmap = Bitmap.createBitmap(image.getWidth(), image.getHeight(),image.getConfig());
Canvas canvas = new Canvas(newBitmap);
canvas.drawColor(Color.TRANSPARENT);
canvas.drawBitmap(image, 0, 0, null);
OR
See this
Hope this helps you.
If you are working on Android you might need a Buffer to get the pixels from the image. It's an IntBuffer, and it reduces memory usage enormously. To get data from and store data into the Buffer you have three methods (you can skip this part if you don't have 'large' images):
private IntBuffer buffer;

public void copyImageIntoBuffer(File imgSource) {
    final Bitmap temp = BitmapFactory.decodeFile(imgSource.getAbsolutePath());
    buffer.rewind();
    temp.copyPixelsToBuffer(buffer);
}

protected void copyBufferIntoImage(File tempFile) throws IOException {
    buffer.rewind();
    Bitmap temp = Bitmap.createBitmap(imgWidth, imgHeight, Config.ARGB_8888);
    temp.copyPixelsFromBuffer(buffer);
    FileOutputStream out = new FileOutputStream(tempFile);
    temp.compress(Bitmap.CompressFormat.JPEG, 90, out);
    out.flush();
    out.close();
}

public void mapBuffer(final File tempFile, long size) throws IOException {
    RandomAccessFile aFile = new RandomAccessFile(tempFile, "rw");
    aFile.setLength(4 * size); // 4 bytes per int
    FileChannel fc = aFile.getChannel();
    buffer = fc.map(FileChannel.MapMode.READ_WRITE, 0, fc.size()).asIntBuffer();
}
Now you can use the Buffer to get the pixels and modify them as desired. (I've copied a code snippet that used a progress bar on my UI and therefore needs a Handler/ProgressBar; when I wrote it I was working on bigger images and had implemented image filters (Gauss filter, grey filter, etc.), so just delete what is not needed.)
public void run(final ProgressBar bar, IntBuffer buffer, Handler mHandler,
                int imgWidth, int imgHeight, int transparentColor) {
    for (int dy = 0; dy < imgHeight; dy++) {
        final int progress = (dy * 100) / imgHeight;
        for (int dx = 0; dx < imgWidth; dx++) {
            int px = buffer.get();
            //int a = (0xFF000000 & px);
            //int r = (0x00FF0000 & px) >> 16;
            //int g = (0x0000FF00 & px) >> 8;
            //int b = (0x000000FF & px);
            // clear the alpha bits to make this color fully transparent
            if (px == transparentColor) {
                px = px & 0x00FFFFFF;
            }
            //r = mid << 16;
            //g = mid << 8;
            //b = mid;
            //int col = a | r | g | b;
            int pos = buffer.position();
            buffer.put(pos - 1, px);
        }
        // Update the progress bar
        mHandler.post(new Runnable() {
            public void run() {
                bar.setProgress(progress);
            }
        });
    }
}
If you really have small images, you can get the pixels directly during onCreate(), or even better, create a buffer (maybe a HashMap or a List) before you start the Activity.
I am a relative newbie in game programming. I know how to draw pixels to a BufferedImage using setPixel(). It is horribly slow on larger formats so I moved on and found VolatileImage (took me a week or so). It is fairly easy to draw lines, strings, rects, etc but I can't draw individual pixels. I already tried using drawLine(x,y,x,y) but I get 3-4 FPS on an 800x600 image.
The fact that java didn't include setPixel() or setRGB() in the VolatileImage makes me pretty angry and confused.
I have 4 questions:
Is there a way to draw individual pixels on a VolatileImage? (on 1440x900 formats with FPS > 40)
Can I draw pixels in a BufferedImage with a faster method? (same 1440x900, FPS > 40)
Is there any other way to draw pixels fast enough for 3D games?
Can I make my BufferedImage hardware accelerated( tried using setAccelerationPriority(1F) but it doesn't work)
Please, if you have any ideas, tell me. I can't continue making my game without this information. I have already made 3D rendering algorithms, but I need to be able to draw pixels fast. I have a good feeling about this game.
Here's the code if it can help you help me:
// A method to test the performance
public static void drawImageRendered(int x, int y, int w, int h) {
    int a[] = new int[3]; // The R, G and B values for each pixel
    // Create an image compatible with the 800x600 JPanel I am working with
    bImg = Launcher.contObj.getGraphicsConfiguration().createCompatibleImage(800, 600);
    bImg.setAccelerationPriority(1F); // Trying to get this image accelerated
    WritableRaster wr = bImg.getRaster(); // The image's writable raster
    for (int i = 0; i < bImg.getWidth(); i++) {
        for (int j = 0; j < bImg.getHeight(); j++) {
            a[0] = i % 256;
            a[2] = j % 256;
            a[1] = (j * i) % 256;
            wr.setPixel(i, j, a); // Sets the pixel (you get a nice pattern)
        }
    }
    g.drawImage(bImg, x, y, w, h, null);
}
I would much prefer not using OpenGL or any other external libraries, just plain Java.
Well, you're basically drawing one pixel after the other using the CPU. There's no way this can be accelerated, so such a method simply doesn't make sense for a VolatileImage. The low FPS you get suggests that it even causes significant overhead, as each pixel drawing operation is sent to the graphics card (with information such as location and colour), which takes longer than modifying 3 or 4 bytes of RAM.
I suggest to either stop drawing each pixel separately or to figure out a way to make your drawing algorithm run directly on the graphics card (which most likely requires another language than Java).
It's been over 4 years since this post got an answer. I was looking for an answer to this question as well and stumbled on this post. After some more searching, I got it to work. Below I'll post the source to rendering pixels with a VolatileImage.
It seems Java hides our ability to plot pixels directly to a VolatileImage, but we can draw BufferedImages to it, and for good reason: plotting pixels in software doesn't really help with acceleration (in Java, it seems). If you plot pixels to a BufferedImage and then render that to a VolatileImage, you may get a speed bonus, since it's hardware accelerated from that point.
The source down below is a self-contained example. You can copy-pasta practically all of it to your project and run it.
https://github.com/Miekpeeps/JavaSnippets-repo/blob/master/src/graphics_rendering/pixels_03/PlottingVolatile.java
In the constructor I save the Graphics environment of the app/game.
private GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment();
private GraphicsConfiguration gc = ge.getDefaultScreenDevice().getDefaultConfiguration();
Then, when I call a method to enable hardware we create a buffer. I set the transparency to Opaque. In my little engine, I deal with transparency/alpha blending on another thread in the pipeline.
public void setHardwareAcceleration(Boolean hw) {
    useHW = hw;
    if (hw) {
        vbuffer = gc.createCompatibleVolatileImage(width, height, Transparency.OPAQUE);
        System.setProperty("sun.java2d.opengl", hw.toString()); // may not be needed
    }
}
For each frame I update, I get the Graphics from the VolatileImage and render my buffer there. Nothing gets rendered if I don't flush().
@Override
public void paintComponent(Graphics g)
{
    if (useHW)
    {
        g = vbuffer.getGraphics();
        g.drawImage(buffer, 0, 0, null);
        vbuffer.flush();
    }
    else
    {
        g.drawImage(buffer, 0, 0, null);
        buffer.flush();
    }
}
There is still a little bit of overhead when calling to plot a pixel on the BufferedImage writable raster. But when we update the screen, we get a speed boost when using the Volatile image instead of using the Buffered image.
Hope this helps some folks out. Cheers.
I have a method converting BufferedImages whose type is TYPE_CUSTOM to TYPE_INT_RGB. I am using the following code, but I would really like to find a faster way of doing this.
BufferedImage newImg = new BufferedImage(
src.getWidth(),
src.getHeight(),
BufferedImage.TYPE_INT_RGB);
ColorConvertOp op = new ColorConvertOp(null);
op.filter(src, newImg);
It works fine, however it's quite slow and I am wondering if there is a faster way to do this conversion.
ColorModel Before Conversion:
ColorModel: #pixelBits = 24 numComponents = 3 color space = java.awt.color.ICC_ColorSpace#1c92586f transparency = 1 has alpha = false isAlphaPre = false
ColorModel After Conversion:
DirectColorModel: rmask=ff0000 gmask=ff00 bmask=ff amask=0
Thanks!
Update:
Turns out working with the raw pixel data was the best way. Since the TYPE_CUSTOM was actually RGB, converting it manually is simple, and it is about 95% faster than ColorConvertOp.
public static BufferedImage makeCompatible(BufferedImage img) throws IOException {
    // Allocate the new image
    BufferedImage dstImage = new BufferedImage(img.getWidth(), img.getHeight(), BufferedImage.TYPE_INT_RGB);
    // Check that the ColorSpace is RGB and the TransferType is BYTE;
    // otherwise this fast method does not work as expected
    ColorModel cm = img.getColorModel();
    if (cm.getColorSpace().getType() == ColorSpace.TYPE_RGB && img.getRaster().getTransferType() == DataBuffer.TYPE_BYTE) {
        // Allocate arrays
        int len = img.getWidth() * img.getHeight();
        byte[] src = new byte[len * 3];
        int[] dst = new int[len];
        // Read the src image data into the array
        img.getRaster().getDataElements(0, 0, img.getWidth(), img.getHeight(), src);
        // Convert to INT_RGB
        int j = 0;
        for (int i = 0; i < len; i++) {
            dst[i] = (((int) src[j++] & 0xFF) << 16) |
                     (((int) src[j++] & 0xFF) << 8) |
                     (((int) src[j++] & 0xFF));
        }
        // Set the dst image data
        dstImage.getRaster().setDataElements(0, 0, img.getWidth(), img.getHeight(), dst);
        return dstImage;
    }
    ColorConvertOp op = new ColorConvertOp(null);
    op.filter(img, dstImage);
    return dstImage;
}
BufferedImages are painfully slow. I have a solution, but I'm not sure you'll like it. The fastest way to process and convert buffered images is to extract the raw data array from inside the BufferedImage. You do that by calling buffImg.getRaster(), casting it to the specific raster type, and then calling raster.getDataStorage(). Once you have access to the raw data, it is possible to write fast image-processing code without all the abstraction in BufferedImage slowing it down. This technique also requires an in-depth understanding of image formats and some reverse engineering on your part. It is the only way I have been able to get image-processing code to run fast enough for my applications.
Example:
ByteInterleavedRaster srcRaster = (ByteInterleavedRaster)src.getRaster();
byte srcData[] = srcRaster.getDataStorage();
IntegerInterleavedRaster dstRaster = (IntegerInterleavedRaster)dst.getRaster();
int dstData[] = dstRaster.getDataStorage();
dstData[0] = (srcData[0] & 0xFF) << 16 | (srcData[1] & 0xFF) << 8 | (srcData[2] & 0xFF);
or something like that (the & 0xFF masks matter, because Java bytes are signed). Expect compiler warnings about accessing low-level rasters like that. The only place I have had issues with this technique is inside applets, where an access violation will occur.
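If you want to avoid the sun.awt internal raster classes (which are not part of the public API), the public DataBuffer API exposes the same backing arrays. A hedged sketch, assuming a TYPE_3BYTE_BGR source and a TYPE_INT_RGB destination of the same size (RawConvert is a hypothetical name):

```java
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import java.awt.image.DataBufferInt;

public class RawConvert {
    // Portable version of the raw-array trick: the public DataBuffer API
    // exposes the same backing arrays as the sun.awt internal rasters.
    // Assumes src is TYPE_3BYTE_BGR (bytes stored B, G, R) and dst is TYPE_INT_RGB.
    public static void convert(BufferedImage src, BufferedImage dst) {
        byte[] srcData = ((DataBufferByte) src.getRaster().getDataBuffer()).getData();
        int[] dstData = ((DataBufferInt) dst.getRaster().getDataBuffer()).getData();
        for (int i = 0, j = 0; i < dstData.length; i++) {
            int b = srcData[j++] & 0xFF;
            int g = srcData[j++] & 0xFF;
            int r = srcData[j++] & 0xFF;
            dstData[i] = (r << 16) | (g << 8) | b;
        }
    }
}
```

Note that grabbing the backing array via getDataBuffer() can untrack the image from Java2D's acceleration cache, so this trades display performance for processing speed.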
I've found rendering using Graphics.drawImage() instead of ColorConvertOp 50 times faster. I can only assume that drawImage() is GPU accelerated.
e.g. this is really slow, around 50 ms per call for 100x200 rectangles:
public BufferedImage convert(BufferedImage input) {
    BufferedImage output = new BufferedImage(input.getWidth(), input.getHeight(),
            BufferedImage.TYPE_BYTE_BINARY, CUSTOM_PALETTE);
    ColorConvertOp op = new ColorConvertOp(input.getColorModel().getColorSpace(),
            output.getColorModel().getColorSpace(), null);
    op.filter(input, output);
    return output;
}
whereas this registers < 1 ms for the same inputs:
public BufferedImage convert(BufferedImage input) {
    BufferedImage output = new BufferedImage(input.getWidth(), input.getHeight(),
            BufferedImage.TYPE_BYTE_BINARY, CUSTOM_PALETTE);
    Graphics graphics = output.getGraphics();
    graphics.drawImage(input, 0, 0, null);
    graphics.dispose();
    return output;
}
Have you tried supplying any RenderingHints? No guarantees, but using
ColorConvertOp op = new ColorConvertOp(new RenderingHints(
RenderingHints.KEY_COLOR_RENDERING,
RenderingHints.VALUE_COLOR_RENDER_SPEED));
rather than the null in your code snippet might speed it up somewhat.
I suspect the problem might be that ColorConvertOp() works pixel-by-pixel (guaranteed to be "slow").
Q: Is it possible for you to use gc.createCompatibleImage()?
Q: Is your original bitmap true color, or does it use a colormap?
Q: Failing all else, would you be agreeable to writing a JNI interface? Either to your own, custom C code, or to an external library such as ImageMagick?
If you have JAI installed then you might try uninstalling it, if you can, or otherwise look for some way to disable codecLib when loading JPEG. In a past life I had similar issues (http://www.java.net/node/660804) and ColorConvertOp was the fastest at the time.
As I recall the fundamental problem is that Java2D is not at all optimized for TYPE_CUSTOM images in general. When you install JAI it comes with codecLib which has a decoder that returns TYPE_CUSTOM and gets used instead of the default. The JAI list may be able to provide more help, it's been several years.
maybe try this:
Bitmap source = Bitmap.create(width, height, RGB_565);//don't remember exactly...
Canvas c = new Canvas(source);
// then
c.draw(bitmap, 0, 0);
Then the source bitmap will be modified.
Later you can do:
onDraw(Canvas canvas){
canvas.draw(source, rectSrs,rectDestination, op);
}
If you can manage to always reuse the bitmap, you will get better performance. You can also use other Canvas functions to draw your bitmap.
I want to do a simple color to grayscale conversion using java.awt.image.BufferedImage. I'm a beginner in the field of image processing, so please forgive if I confused something.
My input image is an RGB 24-bit image (no alpha), I'd like to obtain a 8-bit grayscale BufferedImage on the output, which means I have a class like this (details omitted for clarity):
public class GrayscaleFilter {
private BufferedImage colorFrame;
private BufferedImage grayFrame =
new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
I've successfully tried out two conversion methods so far, the first being:
private BufferedImageOp grayscaleConv =
new ColorConvertOp(ColorSpace.getInstance(ColorSpace.CS_GRAY), null);
protected void filter() {
grayscaleConv.filter(colorFrame, grayFrame);
}
And the second being:
protected void filter() {
WritableRaster raster = grayFrame.getRaster();
for(int x = 0; x < raster.getWidth(); x++) {
for(int y = 0; y < raster.getHeight(); y++){
int argb = colorFrame.getRGB(x,y);
int r = (argb >> 16) & 0xff;
int g = (argb >> 8) & 0xff;
int b = (argb ) & 0xff;
int l = (int) (.299 * r + .587 * g + .114 * b);
raster.setSample(x, y, 0, l);
}
}
}
The first method works much faster, but the image produced is very dark, which means I'm losing dynamic range, which is unacceptable. (There is a color-conversion mapping between the grayscale and sRGB ColorModels called tosRGB8LUT which, as far as I can tell, doesn't work well for me, but I'm not sure; I just suppose those values are used.) The second method works slower, but the effect is very nice.
Is there a method of combining those two, eg. using a custom indexed ColorSpace for ColorConvertOp? If yes, could you please give me an example?
Thanks in advance.
public BufferedImage getGrayScale(BufferedImage inputImage) {
    BufferedImage img = new BufferedImage(inputImage.getWidth(), inputImage.getHeight(),
            BufferedImage.TYPE_BYTE_GRAY);
    Graphics g = img.getGraphics();
    g.drawImage(inputImage, 0, 0, null);
    g.dispose();
    return img;
}
There's an example here which differs from your first example in one small aspect, the parameters to ColorConvertOp. Try this:
protected void filter() {
    BufferedImageOp grayscaleConv =
            new ColorConvertOp(colorFrame.getColorModel().getColorSpace(),
                    grayFrame.getColorModel().getColorSpace(), null);
    grayscaleConv.filter(colorFrame, grayFrame);
}
Try modifying your second approach. Instead of working on a single pixel, retrieve an array of argb int values, convert that and set it back.
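The bulk-array suggestion might look like this sketch (BulkGray is a hypothetical name; it fetches all ARGB values in one getRGB call and writes the gray band back with one setSamples call, instead of per-pixel get/setRGB):

```java
import java.awt.image.BufferedImage;

public class BulkGray {
    // Fetch the whole ARGB array in one call, convert with the same
    // luminance weights as the question's second method, and write the
    // gray band back in one call.
    public static BufferedImage toGray(BufferedImage colorFrame) {
        int w = colorFrame.getWidth();
        int h = colorFrame.getHeight();
        BufferedImage grayFrame = new BufferedImage(w, h, BufferedImage.TYPE_BYTE_GRAY);
        int[] argb = colorFrame.getRGB(0, 0, w, h, null, 0, w);
        int[] lum = new int[argb.length];
        for (int i = 0; i < argb.length; i++) {
            int r = (argb[i] >> 16) & 0xFF;
            int g = (argb[i] >> 8) & 0xFF;
            int b = argb[i] & 0xFF;
            lum[i] = (int) (.299 * r + .587 * g + .114 * b);
        }
        grayFrame.getRaster().setSamples(0, 0, w, h, 0, lum);
        return grayFrame;
    }
}
```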
The second method is based on the pixel's luminance, therefore it obtains more favorable visual results. It could be sped up a little by optimizing the expensive floating-point arithmetic used to calculate l, e.g. with a lookup array or hash table.
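The lookup-array idea can be sketched like this: precompute the scaled contribution of each possible channel value once, so the per-pixel work is three table reads, two adds, and one integer divide (names are hypothetical; weights are scaled by 1000 to stay in integer arithmetic):

```java
public class LumaLUT {
    // Precomputed contributions of each channel value to the luminance,
    // scaled by 1000 so the per-pixel math is pure integer arithmetic.
    private static final int[] R_LUT = new int[256];
    private static final int[] G_LUT = new int[256];
    private static final int[] B_LUT = new int[256];
    static {
        for (int i = 0; i < 256; i++) {
            R_LUT[i] = 299 * i; // 0.299 * 1000
            G_LUT[i] = 587 * i; // 0.587 * 1000
            B_LUT[i] = 114 * i; // 0.114 * 1000
        }
    }

    public static int luminance(int argb) {
        int r = (argb >> 16) & 0xFF;
        int g = (argb >> 8) & 0xFF;
        int b = argb & 0xFF;
        return (R_LUT[r] + G_LUT[g] + B_LUT[b]) / 1000;
    }
}
```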
Here is a solution that has worked for me in some situations.
Take the image height y, image width x, the image color depth m, and the integer bit size n. It only works if (2^m)/(x*y*2^n) >= 1.
Keep an n-bit integer total for each color channel as you process the initial grayscale values. Divide each total by (x*y) to get the average value avr[channel] of each channel, then add (192 - avr[channel]) to each pixel for each channel.
Keep in mind that this approach probably won't have the same level of quality as standard luminance approaches, but if you're looking for a compromise between speed and quality and don't want to deal with expensive floating-point operations, it may work for you.
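A minimal sketch of the averaging idea above for a single gray channel (BrightnessNormalize is a hypothetical helper; it shifts the image's mean to the target value of 192 and clamps each pixel to the 0..255 range):

```java
public class BrightnessNormalize {
    // Accumulate a running total, derive the channel average, then shift
    // every pixel by (target - average), clamping to the byte range.
    public static int[] normalize(int[] gray, int target) {
        long total = 0;
        for (int v : gray) {
            total += v;
        }
        int shift = target - (int) (total / gray.length);
        int[] out = new int[gray.length];
        for (int i = 0; i < gray.length; i++) {
            out[i] = Math.max(0, Math.min(255, gray[i] + shift));
        }
        return out;
    }
}
```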