Is there a way to implement a "duotone" effect in Java?
A good example of what I'd like to do is here or here.
I guess BandCombineOp might help.
My understanding is that I should convert the image to grayscale first and then apply something like a threshold effect, but I haven't managed to get good output. I also don't understand how to set up the colors for this effect.
float[][] grayMatrix = new float[][]
{
    new float[] {0.3f, 0.3f, 0.3f},
    new float[] {0.3f, 0.3f, 0.3f},
    new float[] {0.3f, 0.3f, 0.3f},
};
float[][] duoToneMatrix = new float[][]
{
    new float[] {0.1f, 0.1f, 0.1f},
    new float[] {0.2f, 0.2f, 0.2f},
    new float[] {0.1f, 0.1f, 0.1f},
};
BufferedImage src = ImageIO.read(new File("X:\\photoshop_image_effects.jpg"));
WritableRaster srcRaster = src.getRaster();
// make it gray
BandCombineOp bco = new BandCombineOp(grayMatrix, null);
WritableRaster dstRaster = bco.createCompatibleDestRaster(srcRaster);
bco.filter(srcRaster, dstRaster);
// apply duotone
BandCombineOp duoToneBco = new BandCombineOp(duoToneMatrix, null);
WritableRaster dstRaster2 = duoToneBco.createCompatibleDestRaster(dstRaster);
duoToneBco.filter(dstRaster, dstRaster2);
BufferedImage result = new BufferedImage(src.getColorModel(), dstRaster2, src.getColorModel().isAlphaPremultiplied(), null);
ImageIO.write(result, "png", new File("X:\\result_duotone.png"));
My output:
From what I can tell, you are trying to change the colouring of an image without changing its luminosity. Note the difference from luminance.
Regardless of whether you are aiming for luminance or luminosity, your problem boils down to varying the relative contributions of B, G, and R without changing their weighted sum. Your first matrix converts to greyscale by setting B, G, and R to the same value, only slightly changing their luminance (.3 + .3 + .3 = .9). To use luminosity weights instead, use
greyMatrix = (.11, .59, .30,
              .11, .59, .30,
              .11, .59, .30); // note: this is for BGR band order
Then you want to change their relative weighting without changing their weighted sum. First, note that since the B, G, and R values are all the same after greyscale conversion, you could replace your matrix with
duoToneMatrix = (0, .3, 0,
                 0, .6, 0,
                 0, .3, 0);
and it would be equivalent. To conserve luminance you need to choose three factors whose sum is 1; those factors can then be applied in the duoToneMatrix. The larger a factor is, the more the image will be tinted with that colour. To preserve luminosity you instead need three factors fb, fg, fr such that
fb*.11 + fg*.59 + fr*.30 = 1; // again for BGR
You can choose fb, fg, fr to get the tint of your choosing.
Also, note that you can do this with one matrix by combining the two matrices you already have. Since
[duoToneMatrix]*([greyMatrix]*vector) = ([duoToneMatrix]*[greyMatrix])*vector,
you can compute the product of duoToneMatrix and greyMatrix (in that order) once and process the image in a single step.
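A minimal sketch of that single-pass version, assuming B, G, R band order as above. The factors fb = .5, fg = .3, fr = .2 are just an example tint that sums to 1 (conserving luminance), and the file names are placeholders:

import java.awt.image.BandCombineOp;
import java.awt.image.BufferedImage;
import java.awt.image.WritableRaster;
import java.io.File;
import javax.imageio.ImageIO;

public class DuotoneSketch {
    public static void main(String[] args) throws Exception {
        // Example tint factors; they sum to 1, so luminance is conserved.
        float fb = 0.5f, fg = 0.3f, fr = 0.2f;
        // Product [duoToneMatrix]*[greyMatrix]: row i is f_i times the
        // luminosity weights (.11, .59, .30), assuming BGR band order.
        float[][] combined = {
            { fb * 0.11f, fb * 0.59f, fb * 0.30f },
            { fg * 0.11f, fg * 0.59f, fg * 0.30f },
            { fr * 0.11f, fr * 0.59f, fr * 0.30f },
        };
        BufferedImage src = ImageIO.read(new File("input.jpg"));
        WritableRaster srcRaster = src.getRaster();
        BandCombineOp op = new BandCombineOp(combined, null);
        WritableRaster dst = op.createCompatibleDestRaster(srcRaster);
        op.filter(srcRaster, dst);
        BufferedImage result = new BufferedImage(src.getColorModel(), dst,
                src.getColorModel().isAlphaPremultiplied(), null);
        ImageIO.write(result, "png", new File("result_duotone.png"));
    }
}

If your raster's bands are ordered differently (band order depends on the image type), swap the weight columns accordingly.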
Related
Let's say I have a BufferedImage with ARGB channels. I can turn this image into a grayscale image simply by doing
BufferedImage copy = new BufferedImage(image.getWidth(), image.getHeight(),
BufferedImage.TYPE_BYTE_GRAY);
Graphics g = copy.getGraphics().create();
g.drawImage(image, 0, 0, null);
g.dispose();
There are a couple of other grayscale conversion methods I'm aware of, but this one works well for my program. I can also (and do) enhance the contrast of the image by doing this:
RescaleOp op;
op = new RescaleOp(1.0f, darken, null);
op.filter(copy, copy);
op = new RescaleOp(brighten, 0.0f, null);
op.filter(copy, copy);
But there's a problem. Sometimes there are slightly dark red parts of my image that I need to isolate, and they sit close to slightly bright regions, that is, regions with a high red value (such as yellow and purple). How can I isolate these red regions efficiently?
Manually, I would do something like
for each pixel p in original
new.p = grayscale(Math.max(Math.abs(p.red - p.green), Math.abs(p.red - p.blue)))
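Spelled out in Java, that loop would be roughly the following (a sketch; the batched getRGB/setRGB is my attempt at keeping it fast):

import java.awt.image.BufferedImage;

// Rough sketch of the pseudocode above: each pixel's "redness" becomes
// its grey level. Batched getRGB/setRGB avoids a method call per pixel.
static BufferedImage isolateRed(BufferedImage original) {
    int w = original.getWidth(), h = original.getHeight();
    int[] px = original.getRGB(0, 0, w, h, null, 0, w);
    for (int i = 0; i < px.length; i++) {
        int r = (px[i] >> 16) & 0xff;
        int g = (px[i] >> 8) & 0xff;
        int b = px[i] & 0xff;
        int v = Math.max(Math.abs(r - g), Math.abs(r - b));
        px[i] = 0xff000000 | (v << 16) | (v << 8) | v;
    }
    BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
    out.setRGB(0, 0, w, h, px, 0, w);
    return out;
}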
Can I do this more efficiently using built-in filters or the like? I'm not looking for an exact filter, just something to help me on the way to isolating these red areas a bit. This kind of code makes me think there's an efficient way; it is for producing lower-quality grayscale images, but it is very fast:
ImageFilter filter = new GrayFilter(true, 50);
ImageProducer producer = new FilteredImageSource(colorImage.getSource(), filter);
Image image = this.createImage(producer);
Thanks for any help and suggestions!
I'm creating a Google Maps application on Android and I'm facing a problem. I have elevation data in text format. It looks like this:
longitude latitude elevation
491222 163550 238.270000
491219 163551 242.130000
etc.
The elevation values are stored on a 10x10 meter grid, which means there is one elevation value for every 10 meters. The text file is too large to search for the information I need, so I want to create a bitmap holding this information.
At certain moments I need to scan the elevation around my location. There can be a lot of points to scan and I want it to be quick; that's why I'm thinking about a bitmap.
I don't know if it's even possible, but my idea is a bitmap the size of my text grid, where every pixel holds the elevation at that grid point. It would be like an invisible map laid over the Google map, placed according to the coordinates; when I need to know the elevation around my location, I would just look at those pixels and read off the elevation values.
Do you think it is possible to create such a bitmap? I have just this idea but no idea how to implement it, e.g. how to store the elevation information in it, how to read it back, how to create the bitmap. I would be very grateful for any advice, direction, or source you can give me. Thank you so much!
BufferedImage is not available on Android, but android.graphics.Bitmap can be used instead. The bitmap must be saved in a lossless format (e.g. PNG), since lossy compression would corrupt the encoded values.
double[] elevations={238.27,242.1301,222,1};
int[] pixels = doublesToInts(elevations);
//encoding
Bitmap bmp=Bitmap.createBitmap(2, 2, Config.ARGB_8888);
bmp.setPixels(pixels, 0, 2, 0, 0, 2, 2);
File file=new File(getCacheDir(),"bitmap.png");
try {
    FileOutputStream fos = new FileOutputStream(file);
    bmp.compress(CompressFormat.PNG, 100, fos);
    fos.close();
} catch (IOException e) {
    e.printStackTrace();
}
//decoding
Bitmap out=BitmapFactory.decodeFile(file.getPath());
if (out != null)
{
    int[] outPixels = new int[out.getWidth() * out.getHeight()];
    out.getPixels(outPixels, 0, out.getWidth(), 0, 0, out.getWidth(), out.getHeight());
    double[] outElevations = intsToDoubles(outPixels);
}
static int[] doublesToInts(double[] elevations)
{
    int[] out = new int[elevations.length];
    for (int i = 0; i < elevations.length; i++)
    {
        // Scale to micrometre precision, then drop the low 8 bits so the
        // value fits in the 24 RGB bits; the top byte is forced to 0xFF
        // (opaque alpha). Note this assumes non-negative elevations.
        int tmp = (int) (elevations[i] * 1000000);
        out[i] = 0xFF000000 | tmp >> 8;
    }
    return out;
}
static double[] intsToDoubles(int[] pixels)
{
    double[] out = new double[pixels.length];
    for (int i = 0; i < pixels.length; i++)
        // Shifting left by 8 pushes the alpha byte out of the int and
        // restores the scaled value, minus the 8 bits of precision
        // dropped during encoding (about 0.000256 m resolution).
        out[i] = (pixels[i] << 8) / 1000000.0;
    return out;
}
Store each value as a color with red, green, blue and alpha (opacity/transparency). Start with all pixels transparent, and fill in each known value as an opaque (R, G, B) triple; the alpha occupies the high eight bits (or pick another convention for "not filled in"). R, G and B form the lower 24 bits of an integer.
Map longitude and latitude to x and y, and elevation to an integer less than 0x01_00_00_00 (24 bits), and vice versa:
double elevation = 238.27;
int code = (int) (elevation * 100); // centimetre resolution fits easily in 24 bits
Color c = new Color(code);          // BufferedImage uses int, so 'code' suffices
code = c.getRGB() & 0xFFFFFF;       // mask off the alpha byte before decoding
elevation = ((double) code) / 100;
Write the values into a BufferedImage with setRGB(x, y, code) or similar (there are several possibilities). See Oracle's javadoc for BufferedImage and the related classes.
To fill unused pixels, average the neighbouring values into a second BufferedImage, so that you never average over already-averaged pixels.
P.S. Here in the Netherlands elevation can be less than zero, so you may need to add an offset first.
Hey all, I'm trying to implement 3D picking in my program, and it works perfectly as long as I don't move from the origin; it is perfectly accurate there. But if I move the model matrix away from the origin (the view matrix eye is still at 0,0,0), the picking vectors are still drawn from the original location. It should still be drawing from the view matrix eye (0,0,0), but it isn't. Here's some of my code; maybe you can find out why:
Vector3d near = unProject(x, y, 0, mMVPMatrix, this.width, this.height);
Vector3d far = unProject(x, y, 1, mMVPMatrix, this.width, this.height);
Vector3d pickingRay = far.subtract(near);
//pickingRay.z *= -1;
Vector3d normal = new Vector3d(0,0,1);
if (normal.dot(pickingRay) != 0 && pickingRay.z < 0)
{
float t = (-5f-normal.dot(mCamera.eye))/(normal.dot(pickingRay));
pickingRay = mCamera.eye.add(pickingRay.scale(t));
addObject(pickingRay.x, pickingRay.y, pickingRay.z+.5f, Shape.BOX);
//a line for the picking vector for debugging
PrimProperties a = new PrimProperties(); //new prim properties for size and center
Prim result = null;
result = new Line(a, mCamera.eye, far);//new line object for seeing look at vector
result.createVertices();
objects.add(result);
}
public static Vector3d unProject(
float winx, float winy, float winz,
float[] resultantMatrix,
float width, float height)
{
winy = height-winy;
float[] m = new float[16],
in = new float[4],
out = new float[4];
Matrix.invertM(m, 0, resultantMatrix, 0);
in[0] = (winx / width) * 2 - 1;
in[1] = (winy / height) * 2 - 1;
in[2] = 2 * winz - 1;
in[3] = 1;
Matrix.multiplyMV(out, 0, m, 0, in, 0);
if (out[3]==0)
return null;
out[3] = 1/out[3];
return new Vector3d(out[0] * out[3], out[1] * out[3], out[2] * out[3]);
}
Matrix.translateM(mModelMatrix, 0, this.diffX, this.diffY, 0); // I use this to move the model matrix based on pinch-zoom gestures
Any help would be greatly appreciated! Thanks.
I wonder which algorithm you have implemented. Is it a ray casting approach to the problem?
I didn't focus much on the code itself, but it looks too simple to be a fully operational ray casting solution.
In my humble experience, and depending on the complexity of your final project (which I don't know), I would suggest adopting a colour picking solution instead. It is usually the most flexible solution and the easiest to implement.
It consists of rendering the objects in your scene with unique flat colours (usually with lighting disabled in your shaders as well) to a backbuffer, i.e. a texture, then taking the coordinates of the click (touch) and reading the colour of the pixel at those coordinates. Given that pixel's colour and the table of colours of the different objects you rendered, you can work out what the user clicked on from a logical perspective.
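A rough sketch of just the read-back step (GLES20 on Android; the renderer is assumed to have just drawn the flat-colour pass, and viewHeight and the colour-to-object table are yours to supply):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import android.opengl.GLES20;

// Read the flat colour under the touch point back from the framebuffer
// and pack it into an int key for the object-colour lookup table.
static int readPickedColor(int touchX, int touchY, int viewHeight) {
    ByteBuffer pixel = ByteBuffer.allocateDirect(4).order(ByteOrder.nativeOrder());
    // GL's origin is bottom-left; touch coordinates are top-left.
    GLES20.glReadPixels(touchX, viewHeight - touchY, 1, 1,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixel);
    int r = pixel.get(0) & 0xff;
    int g = pixel.get(1) & 0xff;
    int b = pixel.get(2) & 0xff;
    return (r << 16) | (g << 8) | b;
}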
There are other approaches to the object picking problem, but this one is probably universally recognised as the fastest.
Cheers
Maurizio
I am trying to darken an image in Java, but instead it turns completely black.
Here is the code I am using:
float[] elements = {factor};
Kernel kernel = new Kernel(1, 1, elements);
ConvolveOp op = new ConvolveOp(kernel);
BufferedImage bufferedImage = new BufferedImage(image.getWidth(), image.getHeight(), image.getType());
op.filter(image, bufferedImage);
Any ideas what I am doing wrong?
I think you are using the wrong value for the factor. A really good way to experiment with this is the Gimp: go to Filters -> Generic -> Convolution Matrix and try out different factors. I can darken my image with a factor of 0.7; very low factors become too black.
Let me know how it went.
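Something along these lines, with the 0.7 factor from the Gimp experiment (a sketch; note that if image.getType() returns 0, i.e. TYPE_CUSTOM, the BufferedImage constructor will fail, so an explicit type such as TYPE_INT_RGB is safer for the destination):

import java.awt.image.BufferedImage;
import java.awt.image.ConvolveOp;
import java.awt.image.Kernel;

// Darken by multiplying every channel by 0.7 via a 1x1 convolution kernel.
static BufferedImage darken(BufferedImage image) {
    Kernel kernel = new Kernel(1, 1, new float[] { 0.7f });
    ConvolveOp op = new ConvolveOp(kernel);
    BufferedImage result = new BufferedImage(
            image.getWidth(), image.getHeight(), BufferedImage.TYPE_INT_RGB);
    return op.filter(image, result);
}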
I want to do a simple color-to-grayscale conversion using java.awt.image.BufferedImage. I'm a beginner in the field of image processing, so please forgive me if I've confused something.
My input image is a 24-bit RGB image (no alpha); I'd like to obtain an 8-bit grayscale BufferedImage on the output, which means I have a class like this (details omitted for clarity):
public class GrayscaleFilter {
private BufferedImage colorFrame;
private BufferedImage grayFrame =
new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
I've successfully tried out two conversion methods so far, the first being:
private BufferedImageOp grayscaleConv =
new ColorConvertOp(ColorSpace.getInstance(ColorSpace.CS_GRAY), null);
protected void filter() {
grayscaleConv.filter(colorFrame, grayFrame);
}
And the second being:
protected void filter() {
WritableRaster raster = grayFrame.getRaster();
for(int x = 0; x < raster.getWidth(); x++) {
for(int y = 0; y < raster.getHeight(); y++){
int argb = colorFrame.getRGB(x,y);
int r = (argb >> 16) & 0xff;
int g = (argb >> 8) & 0xff;
int b = (argb ) & 0xff;
int l = (int) (.299 * r + .587 * g + .114 * b);
raster.setSample(x, y, 0, l);
}
}
}
The first method works much faster, but the image produced is very dark, which means I'm losing dynamic range, which is unacceptable. (There is some color conversion mapping between the grayscale and sRGB ColorModels, a lookup table called tosRGB8LUT, which doesn't work well for me as far as I can tell, but I'm not sure; I just suppose those values are used.) The second method works slower, but the effect is very nice.
Is there a method of combining those two, e.g. using a custom indexed ColorSpace for ColorConvertOp? If so, could you please give me an example?
Thanks in advance.
public BufferedImage getGrayScale(BufferedImage inputImage){
BufferedImage img = new BufferedImage(inputImage.getWidth(), inputImage.getHeight(), BufferedImage.TYPE_BYTE_GRAY);
Graphics g = img.getGraphics();
g.drawImage(inputImage, 0, 0, null);
g.dispose();
return img;
}
There's an example here which differs from your first example in one small aspect: the parameters to ColorConvertOp. Try this:
protected void filter() {
BufferedImageOp grayscaleConv =
new ColorConvertOp(colorFrame.getColorModel().getColorSpace(),
grayFrame.getColorModel().getColorSpace(), null);
grayscaleConv.filter(colorFrame, grayFrame);
}
Try modifying your second approach: instead of working on a single pixel, retrieve an array of ARGB int values, convert those, and set them back.
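A sketch of what that might look like, reusing the colorFrame and grayFrame fields from the question:

protected void filter() {
    int w = colorFrame.getWidth(), h = colorFrame.getHeight();
    // One bulk read and one bulk write instead of per-pixel calls.
    int[] argb = colorFrame.getRGB(0, 0, w, h, null, 0, w);
    int[] gray = new int[w * h];
    for (int i = 0; i < argb.length; i++) {
        int r = (argb[i] >> 16) & 0xff;
        int g = (argb[i] >> 8) & 0xff;
        int b = argb[i] & 0xff;
        gray[i] = (int) (.299 * r + .587 * g + .114 * b);
    }
    // grayFrame is TYPE_BYTE_GRAY, so its raster has a single band.
    grayFrame.getRaster().setPixels(0, 0, w, h, gray);
}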
The second method is based on the pixels' luminance and therefore obtains more favorable visual results. It could be sped up a little by replacing the expensive floating-point arithmetic used to calculate l with a lookup array or hash table.
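For example (a sketch; the three tables trade about 3 KB of memory for integer-only arithmetic per pixel):

// Precompute each channel's weighted contribution once; per pixel the
// luminance then costs three lookups, two adds and a shift.
static final int[] LUT_R = new int[256];
static final int[] LUT_G = new int[256];
static final int[] LUT_B = new int[256];
static {
    for (int i = 0; i < 256; i++) {
        LUT_R[i] = (int) (.299 * i * 1024); // scaled by 1024 to keep precision
        LUT_G[i] = (int) (.587 * i * 1024);
        LUT_B[i] = (int) (.114 * i * 1024);
    }
}
static int luminance(int argb) {
    return (LUT_R[(argb >> 16) & 0xff]
          + LUT_G[(argb >> 8) & 0xff]
          + LUT_B[argb & 0xff]) >> 10; // undo the 1024 scale
}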
Here is a solution that has worked for me in some situations.
Take the image height y, the image width x, the image color depth m, and the integer bit size n. This only works if x*y*2^m <= 2^n, i.e. the running total cannot overflow the n-bit integer.
Keep an n-bit integer total for each color channel as you process the initial grayscale values. Divide each total by x*y to get the average value avg[channel] of each channel, then add (192 - avg[channel]) to each pixel in each channel; see the sketch below.
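A sketch of that idea for a single grey channel, using a long total to sidestep the overflow condition:

// Shift every pixel so the channel's average lands at 192, clamping
// to the valid 0..255 range.
static void normalizeBrightness(int[] gray) {
    long total = 0;
    for (int v : gray) total += v;
    int shift = 192 - (int) (total / gray.length);
    for (int i = 0; i < gray.length; i++) {
        gray[i] = Math.min(255, Math.max(0, gray[i] + shift));
    }
}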
Keep in mind that this approach probably won't have the same level of quality as standard luminance approaches, but if you're looking for a compromise between speed and quality, and don't want to deal with expensive floating point operations, it may work for you.