Affine transform with interpolation - java

I would like to apply an affine transformation to a very low-resolution bitmap, and I would like to do it while preserving the maximum amount of information.
My input data is a 1-bit, 64-by-64-pixel image of a handwritten character, and my output would be greyscale and higher resolution. Upon analysing the image I construct a series of affine transformations (rotation, scaling, shear, translation), which I can multiply into a single affine transformation matrix.
My problem is: given the input image and my computed affine transformation matrix, how can I calculate my output image in the highest possible quality? I have read articles about different interpolation techniques, but all of them are about how to do interpolation for scaling, not for general affine transforms.
Here is a demo that does exactly what I am looking for: given an affine transformation matrix and an interpolation technique, it calculates an image.
http://bigwww.epfl.ch/demo/jaffine/index.html
Can you explain the steps required to calculate a higher-resolution (for example 4x) greyscale image, if I have a lower-resolution 1-bit input and a given affine transformation matrix T?
Can you link me to some source code, tutorials, articles, or even books about how to implement linear, cubic, or better interpolation with an affine transform?
I need to implement this in Java, and I know Java has an AffineTransform class, but I don't know whether it implements interpolation. Do you know of any C++ or Java library with readable code for figuring out how to write an algorithm that does an affine transform using interpolation?
Are there any freely available Java or C++ libraries with built-in functions for calculating an affine transform using interpolation?

The same people you linked to have a C implementation with several interpolation options here. You could probably wrap it with JNI. There is also JavaCV, which wraps OpenCV; OpenCV provides warpAffine, which supports interpolation. Also, check out the Java Advanced Imaging API here.

OK, here is the solution I ended up with.
First, I transformed my float[][] arrays into BufferedImage objects:
static BufferedImage BImageFrom2DArray(float data[][]) {
    int rows = data.length;    // image height
    int cols = data[0].length; // image width
    BufferedImage myimage = new BufferedImage(cols, rows, BufferedImage.TYPE_INT_RGB);
    for (int row = 0; row < rows; row++) {
        for (int col = 0; col < cols; col++) {
            // Map 0..1 (1 = ink) to an inverted 0..255 gray and replicate it into R, G and B
            int value = (int) ((1f - data[row][col]) * 255f);
            myimage.setRGB(col, row, (value << 16) | (value << 8) | value);
        }
    }
    return myimage;
}
Then I applied the affine transformation using an AffineTransformOp with bicubic interpolation:
AffineTransformOp op = new AffineTransformOp(tx, AffineTransformOp.TYPE_BICUBIC);
BufferedImage im_transformed = op.filter(im_src, null);
Finally, I transformed the BufferedImage back into a float[][]:
static float[][] ArrayFromBImage(BufferedImage bimage, int width, int height) {
    int max_x = bimage.getWidth();
    int max_y = bimage.getHeight();
    float[][] array = new float[height][width]; // indexed [row][col]
    for (int x = 0; x < width; x++) {
        for (int y = 0; y < height; y++) {
            if (x >= max_x || y >= max_y) {
                // Requested size exceeds the image: pad with background
                array[y][x] = 0;
            } else {
                int color = bimage.getRGB(x, y);
                float alpha = (color >> 24) & 0xFF;
                float red = (color >> 16) & 0xFF;
                // Invert back: white (255) -> 0.0, black (0) -> 1.0
                float value = 1f - red / 255f;
                // Fully transparent pixels (outside the transform) count as background
                array[y][x] = (alpha == 0) ? 0 : value;
            }
        }
    }
    return array;
}
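For reference, here is how the three steps chain together. This is only a sketch: data stands for the input array, and I use a plain 4x scale where the real code would use the combined transformation matrix from the image analysis.
AffineTransform tx = AffineTransform.getScaleInstance(4, 4);     // stand-in for the computed transform
BufferedImage im_src = BImageFrom2DArray(data);                  // step 1: array -> image
AffineTransformOp op = new AffineTransformOp(tx, AffineTransformOp.TYPE_BICUBIC);
BufferedImage im_transformed = op.filter(im_src, null);          // step 2: transform with bicubic interpolation
float[][] result = ArrayFromBImage(im_transformed,
        im_transformed.getWidth(), im_transformed.getHeight());  // step 3: image -> array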

Related

2d double array to image

I'm currently working on a simulation with continuous agents, which leave pheromone trails on a 2D double array. The pheromone trails need to be on a 2D array because a diffusion with a mean filter has to be performed on them. Ultimately, I need to visualise the agents and the pheromone trails by transforming the double array directly into an awt.Image.
Basically, create a BufferedImage, as suggested by Gilbert Le Blanc, and use its setRGB method to set the pixels (or get its Graphics to draw on it).
Example, assuming values are between 0.0 and 1.0, converting to gray:
private static BufferedImage create(double[][] array) {
    // array is indexed [row][col], so width = array[0].length and height = array.length
    var image = new BufferedImage(array[0].length, array.length, BufferedImage.TYPE_INT_RGB);
    for (var row = 0; row < array.length; row++) {
        for (var col = 0; col < array[row].length; col++) {
            image.setRGB(col, row, doubleToRGB(array[row][col]));
        }
    }
    return image;
}
private static int doubleToRGB(double d) {
    var gray = (int) (d * 256);
    if (gray < 0) gray = 0;
    if (gray > 255) gray = 255; // clamp to the valid sample range
    return 0x010101 * gray;     // replicate gray into R, G and B
}
The doubleToRGB method can be changed to use a more complicated mapping from value to color.
Example: red for lower values, blue for higher:
private static int doubleToRGB(double d) {
    float hue = (float) (d / 1.5); // d = 0 maps to red (hue 0), d = 1 toward blue
    float saturation = 1;
    float brightness = 1;
    return Color.HSBtoRGB(hue, saturation, brightness);
}
Note: the posted code is just to show the idea; it can/must be optimized and is missing error checking.
Note 2: the posted mapping to gray is not necessarily the best calculation with regard to our perception.
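For instance, one common tweak (my sketch, not part of the original answer; the 1/2.2 exponent is an assumed display gamma) is to gamma-encode the value before quantizing, so mid values don't render too dark:
private static int doubleToRGBGamma(double d) {
    double clamped = Math.max(0.0, Math.min(1.0, d)); // clamp to [0, 1] first
    var gray = (int) (Math.pow(clamped, 1 / 2.2) * 255 + 0.5);
    return 0x010101 * gray; // replicate into R, G and B as before
}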

Java Convolution

Hi, I am in need of some help. I need to write a convolution method from scratch that takes the following inputs: an int[][] kernel and a BufferedImage inputImage. I can assume that the kernel has size 3x3.
My approach is to do the following:
convolve inner pixels
convolve corner pixels
convolve outer pixels
In the program that I will post below, I believe I convolve the inner pixels, but I am a bit lost at how to convolve the corner and outer pixels. I am aware that corner pixels are at (0,0), (width-1,0), (0, height-1) and (width-1,height-1). I think I know how to approach the problem but am not sure how to execute it in writing. Please be aware that I am very new to programming :/ Any assistance will be very helpful to me.
import java.awt.*;
import java.awt.image.BufferedImage;
import com.programwithjava.basic.DrawingKit;
import java.util.Scanner;
public class Problem28 {
// maximum value of a sample
private static final int MAX_VALUE = 255;
//minimum value of a sample
private static final int MIN_VALUE = 0;
public BufferedImage convolve(int[][] kernel, BufferedImage inputImage) {
}
public BufferedImage convolveInner(double center, BufferedImage inputImage) {
int width = inputImage.getWidth();
int height = inputImage.getHeight();
BufferedImage inputImage1 = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
//inner pixels
for (int x = 1; x < width - 1; x++) {
for (int y = 1; y < height - 1; y ++) {
//get pixels at x, y
int colorValue = inputImage.getRGB(x, y);
Color pixelColor = new Color(colorValue);
int red = pixelColor.getRed() ;
int green = pixelColor.getGreen() ;
int blue = pixelColor.getBlue();
int innerred = (int) center*red;
int innergreen = (int) center*green;
int innerblue = (int) center*blue;
Color newPixelColor = new Color(innerred, innergreen, innerblue);
int newRgbvalue = newPixelColor.getRGB();
inputImage1.setRGB(x, y, newRgbvalue);
}
}
return inputImage1;
}
public BufferedImage convolveEdge(double edge, BufferedImage inputImage) {
int width = inputImage.getWidth();
int height = inputImage.getHeight();
BufferedImage inputImage2 = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
//inner pixels
for (int x = 0; x < width - 1; x++) {
for (int y = 0; y < height - 1; y ++) {
//get pixels at x, y
int colorValue = inputImage.getRGB(x, y);
Color pixelColor = new Color(colorValue);
int red = pixelColor.getRed() ;
int green = pixelColor.getGreen() ;
int blue = pixelColor.getBlue();
int innerred = (int) edge*red;
int innergreen = (int) edge*green;
int innerblue = (int) edge*blue;
Color newPixelColor = new Color(innerred, innergreen, innerblue);
int newRgbvalue = newPixelColor.getRGB();
inputImage2.setRGB(x, y, newRgbvalue);
}
}
return inputImage2;
}
public BufferedImage convolveCorner(double corner, BufferedImage inputImage) {
int width = inputImage.getWidth();
int height = inputImage.getHeight();
BufferedImage inputImage3 = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
//inner pixels
for (int x = 0; x < width - 1; x++) {
for (int y = 0; y < height - 1; y ++) {
//get pixels at x, y
int colorValue = inputImage.getRGB(x, y);
Color pixelColor = new Color(colorValue);
int red = pixelColor.getRed() ;
int green = pixelColor.getGreen() ;
int blue = pixelColor.getBlue();
int innerred = (int) corner*red;
int innergreen = (int) corner*green;
int innerblue = (int) corner*blue;
Color newPixelColor = new Color(innerred, innergreen, innerblue);
int newRgbvalue = newPixelColor.getRGB();
inputImage3.setRGB(x, y, newRgbvalue);
}
}
return inputImage3;
}
public static void main(String[] args) {
DrawingKit dk = new DrawingKit("Compositor", 1000, 1000);
BufferedImage p1 = dk.loadPicture("image/pattern1.jpg");
Problem28 c = new Problem28();
BufferedImage p5 = c.convolve();
dk.drawPicture(p5, 0, 100);
}
}
I changed the code a bit, but the output comes out as black. What did I do wrong?
import java.awt.*;
import java.awt.image.BufferedImage;
import com.programwithjava.basic.DrawingKit;
import java.util.Scanner;
public class Problem28 {
// maximum value of a sample
private static final int MAX_VALUE = 255;
//minimum value of a sample
private static final int MIN_VALUE = 0;
public BufferedImage convolve(int[][] kernel, BufferedImage inputImage) {
int width = inputImage.getWidth();
int height = inputImage.getHeight();
BufferedImage inputImage1 = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
//for every pixel
for (int x = 0; x < width; x ++) {
for (int y = 0; y < height; y ++) {
int colorValue = inputImage.getRGB(x,y);
Color pixelColor = new Color(colorValue);
int red = pixelColor.getRed();
int green = pixelColor.getGreen();
int blue = pixelColor.getBlue();
double gray = 0;
//multiply every value of kernel with corresponding image pixel
for (int i = 0; i < 3; i ++) {
for (int j = 0; j < 3; j ++) {
int imageX = (x - 3/2 + i + width) % width;
int imageY = (x -3/2 + j + height) % height;
int RGB = inputImage.getRGB(imageX, imageY);
int GRAY = (RGB) & 0xff;
gray += (GRAY*kernel[i][j]);
}
}
int out;
out = (int) Math.min(Math.max(gray * 1, 0), 255);
inputImage1.setRGB(x, y, new Color(out,out,out).getRGB());
}
}
return inputImage1;
}
public static void main(String[] args) {
int[][] newArray = {{1/9, 1/9, 1/9}, {1/9, 1/9, 1/9}, {1/9, 1/9, 1/9}};
DrawingKit dk = new DrawingKit("Problem28", 1000, 1000);
BufferedImage p1 = dk.loadPicture("image/pattern1.jpg");
Problem28 c = new Problem28();
BufferedImage p2 = c.convolve(newArray, p1);
dk.drawPicture(p2, 0, 100);
}
}
Welcome ewuzz! I wrote a convolution using CUDA about a week ago, and the majority of my experience is with Java, so I feel qualified to provide advice for this problem.
Rather than writing all of the code for you, the best way to solve this large program is to discuss individual elements. You mentioned you are very new to programming. As the programs you write become more complex, it's essential to write small working snippets before combining them into a large successful program (or to iteratively add snippets). With this being said, it's already apparent you're trying to debug a ~100-line program, and this approach will cost you time in most cases.
The first point to discuss is the general approach you mentioned. If you think about the program, what is the simplest and most repeated step? Obviously this is the kernel/mask step, so we can start from here. When you convolve each pixel, you are performing a similar operation regardless of the position (corner, edge, inside). While there are special steps necessary for these edge cases, they share similar underlying steps. If you try to write code for each of these cases separately, you will have to update the code in multiple (three) places with each adjustment, and it will make the whole program more difficult to grasp.
To support my point above, when I pasted your code into IntelliJ it flagged the duplicated blocks, illustrating the (yellow) red flag of using the same code in multiple places.
The concrete way to fix this problem is to combine the three convolve methods into a single one and use if statements for edge cases as necessary.
Our pseudocode with this change:
convolve(kernel, inputImage)
for each pixel in the image
convolve the single pixel and check edge cases
endfor
end
That seems pretty basic, right? If we are able to successfully check edge cases, then this extremely simple logic will work. I left it so general above to show how "convolve the single pixel and check edge cases" is logically grouped. This makes it a good candidate for extracting into a method, which could look like:
private void convolvePixel(int x, int y, int[][] kernel, BufferedImage input, BufferedImage output)
Now, to implement the method above, we will need to break it into a few steps, which we may then break into more steps if necessary. For each pixel, we'll need to look at the input image, accumulate the values using the kernel where possible, and then set the result in the output image. For brevity I will only write pseudocode from here.
convolvePixel(x, y, kernel, input, output)
accumulation = 0
for each row of kernel applicable pixels
for each column of kernel applicable pixels
if this neighboring pixel location is within the image boundaries then
input color = get the color at this neighboring pixel
adjusted value = input color * relative kernel mask value
accumulation += adjusted value
else
//handle this somehow, mentioned below
endif
endfor
endfor
set output pixel as accumulation, assuming this convolution method does not require normalization
end
The pseudocode above is already relatively long. When implementing, you could extract methods for the if and the else cases, but you should be fine with this structure.
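For illustration only, here is a sketch of that method in Java (my code, not a reference solution; it assumes a grayscale image and simply skips out-of-bounds neighbors, the "crop"-style policy discussed next):
private static void convolvePixel(int x, int y, int[][] kernel,
                                  BufferedImage input, BufferedImage output) {
    int accumulation = 0;
    for (int i = 0; i < 3; i++) {       // kernel row
        for (int j = 0; j < 3; j++) {   // kernel column
            int nx = x + j - 1;         // neighboring pixel column
            int ny = y + i - 1;         // neighboring pixel row
            if (nx >= 0 && nx < input.getWidth() && ny >= 0 && ny < input.getHeight()) {
                int gray = input.getRGB(nx, ny) & 0xff; // any channel of a gray pixel
                accumulation += gray * kernel[i][j];
            }
        }
    }
    int out = Math.min(Math.max(accumulation, 0), 255); // clamp; no normalization here
    output.setRGB(x, y, new Color(out, out, out).getRGB());
}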
There are a few ways to handle the edge case of the else above (a small sketch follows the list below). Your assignment probably specifies a requirement, but the fancy way is to tile around and pretend there's another instance of the same image next to this input image. Wikipedia explains three possibilities:
Extend - The nearest border pixels are conceptually extended as far as necessary to provide values for the convolution. Corner pixels are extended in 90° wedges. Other edge pixels are extended in lines.
Wrap - (The method I mentioned) The image is conceptually wrapped (or tiled) and values are taken from the opposite edge or corner.
Crop - Any pixel in the output image which would require values from beyond the edge is skipped. This method can result in the output image being slightly smaller, with the edges having been cropped.
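As a rough sketch (helper names are mine, not from Wikipedia), the first two policies boil down to small index-mapping functions, while the third is a matter of skipping pixels:
static int extendIndex(int i, int size) {   // Extend: clamp to the nearest border pixel
    return Math.max(0, Math.min(size - 1, i));
}
static int wrapIndex(int i, int size) {     // Wrap: tile, taking values from the opposite edge
    return ((i % size) + size) % size;
}
// Crop: skip any output pixel whose kernel window would leave the image.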
A huge part of becoming a successful programmer is researching on your own. If you read about these methods, work through them on paper, run your convolvePixel method on single pixels, and compare the output to your results by hand, you will find success.
Summary:
Start by cleaning up your code before anything else.
Group the same code into one place.
Hammer out a small chunk (convolving a single pixel). Print out the result and the input values and verify they are correct.
Draw out edge/corner cases.
Read about ways to solve edge cases and decide what fits your needs.
Try implementing the else case through the same form of testing.
Call your convolveImage method with the loop, using the convolvePixel method you know works. Done!
You can look up pseudocode and even specific code to solve the exact problem, so I focused on providing general insight and strategies I have developed through my degree and personal experience. Good luck and please let me know if you want to discuss anything else in the comments below.
Java code for multiple blurs via convolution.

Converting grayscale image pixels to defined scale

I'm looking to use a very crude heightmap I've created in Photoshop to define a tiled isometric grid for me:
Map:
http://i.imgur.com/jKM7AgI.png
I'm aiming to loop through every pixel in the image and convert the colour of that pixel to a scale of my choosing, for example 0-100.
At the moment I'm using the following code:
try
{
final File file = new File("D:\\clouds.png");
final BufferedImage image = ImageIO.read(file);
for (int x = 0; x < image.getWidth(); x++)
{
for (int y = 0; y < image.getHeight(); y++)
{
int clr = image.getRGB(x, y) / 99999;
if (clr <= 0)
clr = -clr;
System.out.println(clr);
}
}
}
catch (IOException ex)
{
// Deal with exception
}
This works to an extent; the black pixel at position 0 is 167 and the white pixel at position 999 is 0. However, when I insert certain pixels into the image I get slightly odd results; for example, a gray pixel that's very close to white returns over 100 when I would expect it to be in single digits.
Is there an alternate solution I could use that would yield more reliable results?
Many thanks.
Since it's a grayscale map, the RGB parts will all be the same value (with range 0 - 255), so just take one out of the packed integer and find out what percent of 255 it is:
int clr = (int) ((image.getRGB(x, y) & 0xFF) / 255.0 * 100);
System.out.println(clr);
getRGB returns all channels packed into one int so you shouldn't do arithmetic with it. Maybe use the norm of the RGB-vector instead?
for (int x = 0; x < image.getWidth(); ++x) {
    for (int y = 0; y < image.getHeight(); ++y) {
        final int rgb = image.getRGB(x, y);
        final int red   = (rgb & 0xFF0000) >> 16;
        final int green = (rgb & 0x00FF00) >> 8;
        final int blue  = (rgb & 0x0000FF);
        // Norm of the RGB vector mapped to the unit interval.
        final double intensity =
                Math.sqrt(red * red + green * green + blue * blue)
                        / Math.sqrt(3 * 255 * 255);
    }
}
Note that there is also the java.awt.Color class that can be instantiated with the int returned by getRGB and provides getRed, getGreen and getBlue methods if you don't want to do the bit manipulations yourself.
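For example, the Color-based variant might look like this (a sketch; image, x and y are the same as in the snippets above):
Color c = new Color(image.getRGB(x, y), true); // true keeps the alpha bits
int clr = (int) (c.getRed() / 255.0 * 100);    // any channel works for a grayscale image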

Java: BufferedImage INT_RGB Alpha?

Is there any way to use alpha in a BufferedImage that uses INT_RGB? I'm using a 1D pixel array to render sprites onto the screen, but I want to be able to use alpha. Is there any way to mix the colors and achieve some sort of layer system like in Photoshop?
I've been trying to create some custom alpha by mixing colors, but I'm not quite sure how to do that either.
This is what I have so far:
BufferedImage & Pixel array:
BufferedImage image = new BufferedImage(width / scale, height / scale, BufferedImage.TYPE_INT_RGB);
int[] pixels = ((DataBufferInt)image.getRaster().getDataBuffer()).getData();
Method used to render sprite:
public static void drawSprite(Sprite sprite, int coord_x, int coord_y) {
int boundsX = (coord_x + sprite.width),
boundsY = (coord_y + sprite.height),
index = -1, pixels[] = Screen.pixels;
for(int y = coord_y; y < boundsY; y++) {
for(int x = coord_x; x < boundsX; x++) {
index++;
if(Screen.pixels[x + y * width] == 0) {
Screen.pixels[x + y * width] = sprite.pixels[index];
} else {
int[] screenPixel = intToARGB(Screen.pixels[x + y * width]);
int[] spritePixel = intToARGB(sprite.pixels[index]);
int[] newPixel = new int[4];
newPixel[0] = (screenPixel[0] + spritePixel[0]) / 2;
newPixel[1] = (screenPixel[1] + spritePixel[1]) / 2;
newPixel[2] = (screenPixel[2] + spritePixel[2]) / 2;
newPixel[3] = (screenPixel[3] + spritePixel[3]) / 2;
Screen.pixels[x + y * width] = Integer.parseInt((Integer.toString(newPixel[0]) +
Integer.toString(newPixel[1]) +
Integer.toString(newPixel[2]) +
Integer.toString(newPixel[3])));
}
}
}
}
Is there any way to use alpha in a BufferedImage that uses INT_RGB?
Short answer: No.
Long answer: No, a BufferedImage of type INT_RGB doesn't contain alpha (unless you redefine what R, G and B mean, that is...). But it's of course possible to correctly compose other types of BufferedImage with alpha (like TYPE_INT_ARGB, TYPE_4BYTE_ABGR or even TYPE_BYTE_INDEXED with alpha or a transparent pixel in the IndexColorModel) onto a BufferedImage that uses TYPE_INT_RGB. Maybe this is what you are trying to do?
Side note: Java2D already implements image compositing and different types of (Porter/Duff) alpha blending rules in the class AlphaComposite. Using this class, you get possibly hardware-accelerated alpha blending that is super fast. I don't understand why you want to re-implement this.
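For instance, a minimal sketch (spriteImage and screen are stand-ins for your ARGB sprite and INT_RGB backbuffer):
Graphics2D g = screen.createGraphics();
g.setComposite(AlphaComposite.SrcOver);           // Porter/Duff "source over destination"
g.drawImage(spriteImage, coord_x, coord_y, null); // alpha blending happens here
g.dispose();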

Change the alpha value of a BufferedImage?

How do I change the global alpha value of a BufferedImage in Java? (I.e., make every pixel in the image that has an alpha value of 100 have an alpha value of 80.)
@Neil Coffey:
Thanks, I've been looking for this too; however, your code didn't work very well for me (the white background became black).
I coded something like this and it works perfectly:
public void setAlpha(byte alpha) {
    int a = alpha & 0xff; // treat the byte as unsigned (0-255)
    for (int cx = 0; cx < obj_img.getWidth(); cx++) {
        for (int cy = 0; cy < obj_img.getHeight(); cy++) {
            int color = obj_img.getRGB(cx, cy);
            // Keep RGB, replace alpha; the AND assumes the source alpha is 0xFF
            int mc = (a << 24) | 0x00ffffff;
            int newcolor = color & mc;
            obj_img.setRGB(cx, cy, newcolor);
        }
    }
}
Where obj_img is a BufferedImage of type BufferedImage.TYPE_INT_ARGB.
I change the alpha with setAlpha((byte) 125); the alpha range is 0-255, though values above 127 need a cast, e.g. setAlpha((byte) 200).
Hope someone finds this useful.
I don't believe there's a single simple command to do this. A few options:
copy into another image with an AlphaComposite specified (downside: not converted in place)
directly manipulate the raster (downside: can lead to unmanaged images)
use a filter or BufferedImageOp
The first is the simplest to implement, IMO.
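A sketch of the first option (my code; the 0.8f is an assumed factor that scales every pixel's alpha to 80%):
BufferedImage out = new BufferedImage(src.getWidth(), src.getHeight(),
        BufferedImage.TYPE_INT_ARGB);
Graphics2D g = out.createGraphics();
g.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER, 0.8f));
g.drawImage(src, 0, 0, null); // src is drawn with its alpha scaled by 0.8
g.dispose();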
This is an old question, so I'm not answering for the sake of the OP, but for those like me who find this question later.
AlphaComposite
As @Michael's excellent outline mentioned, an AlphaComposite operation can modify the alpha channel, but only in certain ways, which to me are somewhat difficult to understand:
alphaOut = alphaSrc + alphaDst * (1 - alphaSrc)
is the formula for how the "over" operation affects the alpha channel. Moreover, this affects the RGB channels too, so if you have color data that needs to be unchanged, AlphaComposite is not the answer.
BufferedImageOps
LookupOp
There are several varieties of BufferedImageOp (see 4.10.6 here). In the more general case, the OP's task could be met by a LookupOp, which requires building lookup arrays. To modify only the alpha channel, supply an identity array (an array where table[i] = i) for the RGB channels and a separate array for the alpha channel. Populate the latter array with table[i] = f(i), where f() is the function by which you want to map from the old alpha value to the new one. E.g., if you want to "make every pixel in the image that has an alpha value of 100 have an alpha value of 80", set table[100] = 80. (The full range is 0 to 255.) See how to increase opacity in gaussian blur for a code sample.
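A sketch of that LookupOp setup (my code; the R, G, B, A band order is an assumption that holds for TYPE_INT_ARGB images):
byte[] identity = new byte[256];
byte[] alphaMap = new byte[256];
for (int i = 0; i < 256; i++) {
    identity[i] = (byte) i; // leave R, G and B unchanged
    alphaMap[i] = (byte) i;
}
alphaMap[100] = (byte) 80;  // the OP's example: alpha 100 becomes 80
LookupOp op = new LookupOp(
        new ByteLookupTable(0, new byte[][] {identity, identity, identity, alphaMap}), null);
op.filter(image, image);    // in-place, see the note below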
RescaleOp
But for a subset of these cases there is a simpler way that doesn't require setting up a lookup table. If f() is a simple linear function, use a RescaleOp. For example, if you want to set newAlpha = oldAlpha - 20, use a RescaleOp with a scaleFactor of 1 and an offset of -20. If you want to set newAlpha = oldAlpha * 0.8, use a scaleFactor of 0.8 and an offset of 0. In either case, you again have to provide dummy scaleFactors and offsets for the RGB channels:
new RescaleOp(new float[] {1.0f, 1.0f, 1.0f, /* alpha scaleFactor */ 0.8f},
              new float[] {0f, 0f, 0f, /* alpha offset */ -20f}, null)
Again see 4.10.6 here for some examples that illustrate the principles well, but are not specific to the alpha channel.
Both RescaleOp and LookupOp allow modifying a BufferedImage in-place.
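Usage might look like this (a sketch; this op scales alpha to 80% and leaves RGB untouched):
BufferedImageOp op = new RescaleOp(
        new float[] {1f, 1f, 1f, 0.8f}, // RGB unchanged, alpha * 0.8
        new float[] {0f, 0f, 0f, 0f}, null);
op.filter(image, image);                // in-place, as noted above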
For a nicer-looking alpha change effect, you can use a relative alpha change per pixel (rather than a static set or a clipping linear map):
public static void modAlpha(BufferedImage modMe, double modAmount) {
    for (int x = 0; x < modMe.getWidth(); x++) {
        for (int y = 0; y < modMe.getHeight(); y++) {
            int argb = modMe.getRGB(x, y); //always returns TYPE_INT_ARGB
            int alpha = (argb >> 24) & 0xff; //isolate alpha
            alpha *= modAmount; //similar distortion to tape saturation (has scrunching effect, eliminates clipping)
            alpha &= 0xff; //wrap back into the 0-255 range (masks rather than clamps)
            argb &= 0x00ffffff; //remove old alpha info
            argb |= (alpha << 24); //add new alpha info
            modMe.setRGB(x, y, argb);
        }
    }
}
I'm 99% sure the methods that claim to deal with an "RGB" value packed into an int actually deal with ARGB. So you ought to be able to do something like:
for (int y = 0; y < img.getHeight(); y++) {
    for (int x = 0; x < img.getWidth(); x++) {
        int argb = img.getRGB(x, y);
        int oldAlpha = (argb >>> 24); // unsigned shift isolates the alpha byte
        if (oldAlpha == 100) {
            argb = (80 << 24) | (argb & 0xffffff); // new alpha, same RGB
            img.setRGB(x, y, argb);
        }
    }
}
For speed, you could maybe use the methods to retrieve blocks of pixel values.
You may need to first copy your BufferedImage to an image of type BufferedImage.TYPE_INT_ARGB. If your image is of type, say, BufferedImage.TYPE_INT_RGB, then the alpha component won't be set correctly. If your BufferedImage is of type BufferedImage.TYPE_INT_ARGB, then the code below works.
/**
 * Modifies each pixel of the BufferedImage so that the selected component (R, G, B, or A)
 * is adjusted by delta. Note: the BufferedImage must be of type BufferedImage.TYPE_INT_ARGB.
 * @param src BufferedImage of type BufferedImage.TYPE_INT_ARGB
 * @param colorIndex 0=red, 1=green, 2=blue, 3=alpha
 * @param delta amount to change the component
 * @return src, modified in place
 */
public static BufferedImage adjustAColor(BufferedImage src, int colorIndex, int delta) {
    int w = src.getWidth();
    int h = src.getHeight();
    assert (src.getType() == BufferedImage.TYPE_INT_ARGB);
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++) {
            int rgb = src.getRGB(x, y);
            java.awt.Color color = new java.awt.Color(rgb, true);
            int red = color.getRed();
            int green = color.getGreen();
            int blue = color.getBlue();
            int alpha = color.getAlpha();
            switch (colorIndex) {
                case 0: red = adjustColor(red, delta); break;
                case 1: green = adjustColor(green, delta); break;
                case 2: blue = adjustColor(blue, delta); break;
                case 3: alpha = adjustColor(alpha, delta); break;
                default: throw new IllegalStateException();
            }
            java.awt.Color adjustedColor = new java.awt.Color(red, green, blue, alpha);
            src.setRGB(x, y, adjustedColor.getRGB());
            int gottenColorInt = src.getRGB(x, y);
            java.awt.Color gottenColor = new java.awt.Color(gottenColorInt, true);
            assert (gottenColor.getRed() == red);
            assert (gottenColor.getGreen() == green);
            assert (gottenColor.getBlue() == blue);
            assert (gottenColor.getAlpha() == alpha);
        }
    return src;
}
private static int adjustColor(int value255, int delta) {
    value255 += delta;
    if (value255 < 0) {
        value255 = 0;
    } else if (value255 > 255) {
        value255 = 255;
    }
    return value255;
}
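As for the preparatory copy mentioned above, a sketch of converting a TYPE_INT_RGB image into a TYPE_INT_ARGB one:
BufferedImage argb = new BufferedImage(src.getWidth(), src.getHeight(),
        BufferedImage.TYPE_INT_ARGB);
Graphics2D g = argb.createGraphics();
g.drawImage(src, 0, 0, null); // copy the pixels; alpha is now editable
g.dispose();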
