Currently I have a 2D integer Array containing ARGB values of an image.
Now I want to get a Bitmap with those values to display it on the screen.
The Bitmap should also be scaled: the source image might be 80x80 or 133x145, for example, but I only want it at 50x50.
Something like this, but since this is Android the AWT classes are not available:
private static BufferedImage scale(BufferedImage image, int width, int height) {
    java.awt.Image temp = image.getScaledInstance(width, height, java.awt.Image.SCALE_SMOOTH);
    BufferedImage imageOut = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
    Graphics2D bGr = imageOut.createGraphics();
    bGr.drawImage(temp, 0, 0, null);
    bGr.dispose();
    return imageOut;
}
I already searched the API docs and Stack Overflow, but I could not find any hints, classes, or methods for doing this.
First of all, create a Bitmap object representing the original-size image (your 80x80 or 133x145). You can do it with:
Bitmap.Config bitmapConfig = Bitmap.Config.ARGB_8888;
Bitmap bitmap = Bitmap.createBitmap(width, height, bitmapConfig);
where width is your source width (80 or 133), and height is your source height (80 or 145).
Then fill this Bitmap with the colors from your array. I don't know exactly how your array is built or what type of data it stores, so for the purpose of this simple concept explanation I will assume it's a regular one-dimensional array storing ARGB hex String values. Be sure to adjust the for loop to match your exact case:
int[] bitmapPixels = new int[width * height];
for (int i = 0, size = bitmapPixels.length; i < size; ++i) {
    bitmapPixels[i] = Color.parseColor(argbArray[i]);
}
bitmap.setPixels(bitmapPixels, 0, width, 0, 0, width, height);
Then create a scaled Bitmap and recycle the original size Bitmap you created before.
Bitmap scaledBitmap = Bitmap.createScaledBitmap(bitmap, 50, 50, false);
bitmap.recycle();
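For completeness, here is a minimal sketch of the whole flow, assuming the source really is a 2D array of packed ARGB ints as the question describes (argb2d is a hypothetical name for that array):
// Assumes argb2d[y][x] holds packed ARGB ints (0xAARRGGBB), row-major.
int height = argb2d.length;
int width = argb2d[0].length;
// Flatten the 2D array into the 1D layout that setPixels() expects.
int[] bitmapPixels = new int[width * height];
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        bitmapPixels[y * width + x] = argb2d[y][x];
    }
}
Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
bitmap.setPixels(bitmapPixels, 0, width, 0, 0, width, height);
// Pass true as the last argument to get bilinear filtering on the scaled copy.
Bitmap scaledBitmap = Bitmap.createScaledBitmap(bitmap, 50, 50, true);
bitmap.recycle();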
While trying to scale an image drawn on a canvas, I came across Graphics2D.drawImage(originalImage) (the Graphics2D object obtained from a BufferedImage object).
I used it to draw the image into the graphics created from a BufferedImage; this new image could then be drawn onto a Graphics object of the Panel/Frame to get a zoomed image.
I populated the original image using BufferedImage.setRGB.
So, what is it actually doing? Is it selectively omitting the pixels from the original image?
Similar to this code:
int newImageWidth = imageWidth * zoomLevel;
int newImageHeight = imageHeight * zoomLevel;
BufferedImage resizedImage = new BufferedImage(newImageWidth, newImageHeight, imageType);
Graphics2D g = resizedImage.createGraphics();
g.drawImage(originalImage, 0, 0, newImageWidth, newImageHeight, null);
g.dispose();
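For what it's worth, by default Graphics2D scales in drawImage with nearest-neighbor sampling, so when shrinking it does effectively pick some source pixels and omit the rest. You can ask for smoother resampling through rendering hints; a minimal sketch of the same resize with bilinear interpolation (variable names as above):
int newImageWidth = imageWidth * zoomLevel;
int newImageHeight = imageHeight * zoomLevel;
BufferedImage resizedImage = new BufferedImage(newImageWidth, newImageHeight, imageType);
Graphics2D g = resizedImage.createGraphics();
// Average neighboring pixels instead of the default nearest-neighbor pick.
g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
        RenderingHints.VALUE_INTERPOLATION_BILINEAR);
g.drawImage(originalImage, 0, 0, newImageWidth, newImageHeight, null);
g.dispose();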
I'm trying to play with canvas. I could draw some triangles and fill them partially by drawing a path and painting it. I used Path, Point, and Line; it was a great exercise to remember trigonometry. Now I would like to do the same with a circle, as you can see below. I want to set a percentage and fill the circle up to the circle's height * percentage. How could I draw a circle like that with canvas or some lib?
You should think about it a little differently. The way I'd do it is to draw a coloured rectangle (where the height is a percentage of the circle's intended height) and then crop it with a circle. This answer explains how to crop an image in a circular shape (I'd rather link than retype the code here).
I finally got it working. I created two methods. As roarster suggested, I created a white rectangle as a mask, where the height is a percentage of the circle's intended height.
private Bitmap drawWithPorterDuff(Bitmap original, Bitmap mask, PorterDuff.Mode mode) {
    Bitmap bitmap = Bitmap.createBitmap(original.getWidth(), original.getHeight(), Bitmap.Config.ARGB_8888);
    Canvas canvas = new Canvas(bitmap);
    Paint maskPaint = new Paint();
    maskPaint.setAntiAlias(true);
    // Draw the original, then composite the mask over it with the given mode.
    canvas.drawBitmap(original, 0, 0, null);
    maskPaint.setXfermode(new PorterDuffXfermode(mode));
    canvas.drawBitmap(mask, 0, 0, maskPaint);
    // Add the circle's edge on top.
    Bitmap edge = BitmapFactory.decodeResource(getResources(), R.drawable.edge);
    maskPaint.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.ADD));
    canvas.drawBitmap(edge, 0, 0, maskPaint);
    return bitmap;
}
public Bitmap createMask(int width, int height) {
    Paint paint = new Paint();
    paint.setStyle(Paint.Style.FILL);
    paint.setColor(Color.WHITE);
    paint.setAntiAlias(true);
    Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    Canvas canvas = new Canvas(bitmap);
    canvas.drawRect(0, 0, width, height, paint);
    return bitmap;
}
In the view's constructor I created an init() method with the following code:
PorterDuff.Mode mode = PorterDuff.Mode.SRC_IN;
Bitmap original = BitmapFactory.decodeResource(getResources(), R.drawable.blue_graph);
Bitmap mask = createMask(original.getWidth(), (int) ((original.getHeight()) * (1 - percentage)));
Bitmap result = drawWithPorterDuff(original, mask, mode);
imageView.setImageBitmap(result);
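For reference, the same partial-fill effect can also be achieved without PorterDuff by clipping the canvas to a circular Path and drawing a rectangle of the right height. A sketch (drawPartialCircle is a hypothetical helper, and note that clip edges are not anti-aliased):
private Bitmap drawPartialCircle(int size, float percentage) {
    Bitmap bitmap = Bitmap.createBitmap(size, size, Bitmap.Config.ARGB_8888);
    Canvas canvas = new Canvas(bitmap);
    // Everything drawn after this clip stays inside the circle.
    Path circle = new Path();
    circle.addCircle(size / 2f, size / 2f, size / 2f, Path.Direction.CW);
    canvas.clipPath(circle);
    Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
    paint.setColor(Color.BLUE);
    // Fill from the bottom up to the requested percentage of the height.
    canvas.drawRect(0, size * (1 - percentage), size, size, paint);
    return bitmap;
}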
Recently I have been attempting to scale pixel arrays (int[]) in Java. I used .setRGB() to add all my pixel data into a BufferedImage. BufferedImage then offers a method called .getScaledInstance(). This should work great for my purposes, but I ran into a problem: .getScaledInstance() returns an Image, not a BufferedImage. With an Image object, I cannot use .getRGB() to read the pixel data (in int[] form) from the scaled Image back into an array. Is there a way to get raw pixel data from an Image? Am I missing something? I looked at other questions and did a bit of googling, but they only seemed to cover getting picture data as a different form of array (int[][]) or as bytes. Any help would be appreciated, thanks. Also, Sprite is a class I made that is being used. Here is my code:
public Sprite scaleSprite(Sprite s, int newWidth, int newHeight) {
    BufferedImage image = new BufferedImage(s.getWidth(), s.getHeight(), BufferedImage.TYPE_INT_RGB);
    for (int y = 0; y < s.getHeight(); y++) {
        for (int x = 0; x < s.getWidth(); x++) {
            image.setRGB(x, y, s.getPixel(x, y));
        }
    }
    Image newImage = image.getScaledInstance(newWidth, newHeight, Image.SCALE_AREA_AVERAGING);
    Sprite newS = new Sprite(newWidth, newHeight);
    int[] pixels = new int[newWidth * newHeight];
    newImage.getRGB(0, 0, newWidth, newHeight, pixels, 0, newWidth); // This is where I run into problems: newImage is an Image, and I cannot retrieve the raw pixel data from it.
    newS.setPixels(pixels);
    return newS;
}
You can draw the resulting Image onto a BufferedImage like this:
Image newImage = image.getScaledInstance(newWidth, newHeight, Image.SCALE_AREA_AVERAGING);
BufferedImage buffImg = new BufferedImage(newWidth, newHeight, BufferedImage.TYPE_4BYTE_ABGR);
Graphics2D g2 = (Graphics2D) buffImg.getGraphics();
g2.drawImage(newImage, 0, 0, newWidth, newHeight, null); // draw at the full target size, not a hard-coded 10x10
g2.dispose();
Or you can scale the image directly by drawing it on another BufferedImage:
BufferedImage scaled = new BufferedImage(newWidth, newHeight, BufferedImage.TYPE_4BYTE_ABGR);
Graphics2D g2 = (Graphics2D) scaled.getGraphics();
g2.drawImage(originalImage, 0, 0, newWidth, newHeight, 0, 0, originalImage.getWidth(), originalImage.getHeight(), null);
g2.dispose();
The second approach will only look right if the source and target sizes have the same aspect ratio; otherwise the image will be stretched.
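Either way, once the result is in a BufferedImage you can read the pixels back into an int[] with getRGB, which is what the Sprite code above needs (Sprite and setPixels are the asker's own API):
int[] pixels = new int[newWidth * newHeight];
// getRGB is available on BufferedImage (unlike plain Image), so this now works.
buffImg.getRGB(0, 0, newWidth, newHeight, pixels, 0, newWidth);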
To be clear, getScaledInstance() is a method of Image, not BufferedImage. You don't generally want to revert to working directly with the Image superclass once you're working with BufferedImage; Image is really not easy to work with.
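That said, if you ever do need raw pixels from a plain Image without drawing it first, the old java.awt.image.PixelGrabber API can fetch them; a minimal sketch:
int[] pixels = new int[newWidth * newHeight];
PixelGrabber grabber = new PixelGrabber(newImage, 0, 0, newWidth, newHeight, pixels, 0, newWidth);
try {
    grabber.grabPixels(); // blocks until all pixels have been delivered
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}
// pixels now holds packed ARGB values in the default RGB color model.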
Please see if this will help: How to scale a BufferedImage
Or see Scaling a BufferedImage, where they give the following example:
import java.awt.geom.AffineTransform;
import java.awt.image.AffineTransformOp;
import java.awt.image.BufferedImage;

public class Main {
    public static void main(String[] argv) throws Exception {
        BufferedImage bufferedImage = new BufferedImage(200, 200,
                BufferedImage.TYPE_BYTE_INDEXED);
        AffineTransform tx = new AffineTransform();
        tx.scale(1, 2);
        AffineTransformOp op = new AffineTransformOp(tx,
                AffineTransformOp.TYPE_BILINEAR);
        bufferedImage = op.filter(bufferedImage, null);
    }
}
This will give you the ability to scale entirely at the level of BufferedImage. From there you can apply whatever sprite-specific or array-data algorithm you wish.
I'm struggling to understand how to merge 4 pictures together in Java. I want to copy each image into the merged image with the overlapping 20 pixels blended in a 50% merge, so that the merged image has a 20-pixel boundary that is a blend of the appropriate portion of each image.
So: a 4-image box with the images blended into each other by 20 pixels. I'm not sure how I should use the width and height of the images; it is very confusing.
Something like this. How can I do it?
I got all of my info from: AlphaComposite, Compositing Graphics, Concatenating Images.
The following program is an improved version. It uses two methods, joinHorizontal and joinVertical, to join the images. Inside these methods, the following happens:
- the second image is copied, but only the part that overlaps
- the copied image is set at half alpha (transparency)
- on the canvas of the 'return image', the first image is painted, followed by the second without the overlapping part
- the copied image is painted onto the canvas
- the image is returned
Why do I only set one image at half alpha and not both?
Picture a clear glass window:
Paint random points red so that half of the window is covered with red. Now, treat the window with the red dots as your new canvas.
Paint random points blue so that the new "canvas" is half covered with blue. The window won't be completely covered; you will still be able to see through it.
But let's imagine that we first painted the window red, and then painted half of it blue. Now, it will be half blue and half red, but not transparent at all.
public class ImageMerger {

    /**
     * @param args
     * @throws IOException
     */
    public static void main(String[] args) throws IOException {
        BufferedImage img1 = //some code here
        BufferedImage img2 = //some code here
        BufferedImage img3 = //some code here
        BufferedImage img4 = //some code here
        int mergeWidth = 20; // pixels to merge.
        BufferedImage merge = ImageMerger.joinVertical(
                ImageMerger.joinHorizontal(img1, img2, mergeWidth),
                ImageMerger.joinHorizontal(img3, img4, mergeWidth), mergeWidth);
        // do whatever you want with merge. gets here in about 75 milliseconds
    }
    public static BufferedImage joinHorizontal(BufferedImage i1, BufferedImage i2, int mergeWidth) {
        if (i1.getHeight() != i2.getHeight())
            throw new IllegalArgumentException("Images i1 and i2 are not the same height");
        BufferedImage imgClone = new BufferedImage(mergeWidth, i2.getHeight(), BufferedImage.TYPE_INT_ARGB);
        Graphics2D cloneG = imgClone.createGraphics();
        cloneG.drawImage(i2, 0, 0, null);
        cloneG.setComposite(AlphaComposite.getInstance(AlphaComposite.DST_IN, 0.5f));
        cloneG.drawImage(i2, 0, 0, null);
        BufferedImage result = new BufferedImage(i1.getWidth() + i2.getWidth() - mergeWidth,
                i1.getHeight(), BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = result.createGraphics();
        g.drawImage(i1, 0, 0, null);
        g.drawImage(i2.getSubimage(mergeWidth, 0, i2.getWidth() - mergeWidth, i2.getHeight()),
                i1.getWidth(), 0, null);
        g.drawImage(imgClone, i1.getWidth() - mergeWidth, 0, null);
        return result;
    }
    public static BufferedImage joinVertical(BufferedImage i1, BufferedImage i2, int mergeWidth) {
        if (i1.getWidth() != i2.getWidth())
            throw new IllegalArgumentException("Images i1 and i2 are not the same width");
        BufferedImage imgClone = new BufferedImage(i2.getWidth(), mergeWidth, BufferedImage.TYPE_INT_ARGB);
        Graphics2D cloneG = imgClone.createGraphics();
        cloneG.drawImage(i2, 0, 0, null);
        cloneG.setComposite(AlphaComposite.getInstance(AlphaComposite.DST_IN, 0.5f));
        cloneG.drawImage(i2, 0, 0, null);
        BufferedImage result = new BufferedImage(i1.getWidth(),
                i1.getHeight() + i2.getHeight() - mergeWidth, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = result.createGraphics();
        g.drawImage(i1, 0, 0, null);
        g.drawImage(i2.getSubimage(0, mergeWidth, i2.getWidth(), i2.getHeight() - mergeWidth),
                0, i1.getHeight(), null);
        g.drawImage(imgClone, 0, i1.getHeight() - mergeWidth, null);
        return result;
    }
}
I am creating an Android game in Java. I load the bitmaps and then resize them to fit different screens (dpi isn't really exact). My idea is also to load the bitmaps in 16-bit (mBitmapOptions.inPreferredConfig = Bitmap.Config.ARGB_4444) for devices with a small amount of RAM. But when I resize the bitmaps they seem to go back to 32-bit (Bitmap.Config.ARGB_8888).
This is how I declare the options:
mBitmapOptions = new BitmapFactory.Options();
mBitmapOptions.inPreferredConfig = Bitmap.Config.ARGB_4444;
This is how I load the bitmaps:
mBitmaps.add(getResizedBitmap(BitmapFactory.decodeResource(mResources, imagePath, mBitmapOptions)));
And this is the getResizedBitmap method:
public Bitmap getResizedBitmap(Bitmap bm)
{
    // Original size
    int width = bm.getWidth();
    int height = bm.getHeight();
    // New size (percent)
    float newWidth = 1 * mScaleWidth;
    float newHeight = 1 * mScaleHeight;
    // Create the matrix
    Matrix matrix = new Matrix();
    matrix.postScale(newWidth, newHeight);
    // Recreate the new Bitmap
    Bitmap resizedBitmap = Bitmap.createBitmap(bm, 0, 0, width, height, matrix, true);
    // Recycle the old Bitmap
    bm.recycle();
    return resizedBitmap;
}
Any ideas why the new Bitmap ignores the options?
Why don't you use the createScaledBitmap method? It should preserve the options. In your case you are creating a completely new Bitmap, and it probably applies a default config.
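A sketch of that, assuming mScaleWidth/mScaleHeight from your code are the factors you multiply the dimensions by:
int newWidth = Math.round(bm.getWidth() * mScaleWidth);
int newHeight = Math.round(bm.getHeight() * mScaleHeight);
// The last argument enables bilinear filtering for a smoother result.
Bitmap resizedBitmap = Bitmap.createScaledBitmap(bm, newWidth, newHeight, true);
bm.recycle();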
EDIT: Another option would be to use your code and add a call to the copy method like this:
Bitmap smallerBitmap = resizedBitmap.copy (Bitmap.Config.ARGB_4444, false);
resizedBitmap.recycle ();
However, I don't think this will perform very well...
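A third option, sketched under the same assumptions about mScaleWidth/mScaleHeight: create the destination Bitmap yourself with the config you want and scale by drawing into it, so the output config is entirely under your control:
public Bitmap getResizedBitmap(Bitmap bm)
{
    int newWidth = Math.round(bm.getWidth() * mScaleWidth);
    int newHeight = Math.round(bm.getHeight() * mScaleHeight);
    // Creating the target ourselves guarantees it keeps ARGB_4444.
    Bitmap resized = Bitmap.createBitmap(newWidth, newHeight, Bitmap.Config.ARGB_4444);
    Canvas canvas = new Canvas(resized);
    Paint paint = new Paint(Paint.FILTER_BITMAP_FLAG); // smooth scaling
    canvas.drawBitmap(bm, null, new Rect(0, 0, newWidth, newHeight), paint);
    bm.recycle();
    return resized;
}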