I have a lot of pixel art images that need to be scaled up to double their size, so that each pixel in the image turns into a 2x2 block of pixels of the exact same color, with no blending.
If I use ImageIO to read in a .png image as a BufferedImage with
BufferedImage foo = ImageIO.read(new File("C:\\path\\to\\image.png"));
how would I go about upscaling it so it won't blend the pixels?
Hope this helps:
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class ImageCovertTest {

    public static void main(String[] args) throws IOException {
        BufferedImage foo = ImageIO.read(new File("path/to/image"));
        BufferedImage rs = cover(foo, 2); // cover x2
        ImageIO.write(rs, "png", new File("path/to/output"));
    }

    private static int[][] convertToPixels(BufferedImage image) {
        final byte[] pixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
        final int width = image.getWidth();
        final int height = image.getHeight();
        final boolean hasAlphaChannel = image.getAlphaRaster() != null;
        int[][] result = new int[height][width];
        if (hasAlphaChannel) {
            final int pixelLength = 4;
            for (int pixel = 0, row = 0, col = 0; pixel + 3 < pixels.length; pixel += pixelLength) {
                int argb = 0;
                argb += (((int) pixels[pixel] & 0xff) << 24); // alpha
                argb += ((int) pixels[pixel + 1] & 0xff); // blue
                argb += (((int) pixels[pixel + 2] & 0xff) << 8); // green
                argb += (((int) pixels[pixel + 3] & 0xff) << 16); // red
                result[row][col] = argb;
                col++;
                if (col == width) {
                    col = 0;
                    row++;
                }
            }
        } else {
            final int pixelLength = 3;
            for (int pixel = 0, row = 0, col = 0; pixel + 2 < pixels.length; pixel += pixelLength) {
                int argb = 0;
                argb += -16777216; // 255 alpha
                argb += ((int) pixels[pixel] & 0xff); // blue
                argb += (((int) pixels[pixel + 1] & 0xff) << 8); // green
                argb += (((int) pixels[pixel + 2] & 0xff) << 16); // red
                result[row][col] = argb;
                col++;
                if (col == width) {
                    col = 0;
                    row++;
                }
            }
        }
        return result;
    }

    public static BufferedImage cover(BufferedImage image, int range) {
        int[][] pixels = convertToPixels(image);
        int width = image.getWidth();
        int height = image.getHeight();
        BufferedImage imageResult = new BufferedImage(width * range, height * range, BufferedImage.TYPE_INT_ARGB);
        for (int x = 0; x < width * range; x++) {
            for (int y = 0; y < height * range; y++) {
                imageResult.setRGB(x, y, pixels[y / range][x / range]);
            }
        }
        return imageResult;
    }
}
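Note that convertToPixels assumes the image is backed by a byte DataBuffer (e.g. TYPE_3BYTE_BGR or TYPE_4BYTE_ABGR, which is what ImageIO usually returns for PNGs); for an int-backed image the cast will throw a ClassCastException. If you only need the blocky 2x upscale and not the pixel array itself, a shorter sketch using the standard AffineTransformOp with nearest-neighbour interpolation (which copies pixels instead of blending them) could look like this; the file paths are placeholders:
import java.awt.geom.AffineTransform;
import java.awt.image.AffineTransformOp;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class NearestNeighbourScale {
    public static void main(String[] args) throws IOException {
        BufferedImage src = ImageIO.read(new File("path/to/image.png"));
        // Nearest-neighbour turns every source pixel into a 2x2 block of the same colour
        AffineTransformOp op = new AffineTransformOp(
                AffineTransform.getScaleInstance(2, 2),
                AffineTransformOp.TYPE_NEAREST_NEIGHBOR);
        BufferedImage dst = op.filter(src, null); // null lets the op allocate the destination
        ImageIO.write(dst, "png", new File("path/to/output.png"));
    }
}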
For my project, I'm able to print textures on objects. As soon as I use nicer textures with a color palette of more than 256 colors, they turn black or invisible...
Is anyone able to help me with this issue? Right now this is my code to convert the .png into a usable texture:
public static Background getIndexedImage(int id, File file) throws IOException {
    BufferedImage image = ImageIO.read(file);
    List<Integer> paletteList = new LinkedList<>();
    paletteList.add(0);
    int width = image.getWidth();
    int height = image.getHeight();
    byte[] pixels = new byte[width * height];
    for (int x = 0; x < width; x++) {
        for (int y = 0; y < height; y++) {
            int rgb = image.getRGB(x, y);
            int red = rgb >> 16 & 0xff;
            int green = rgb >> 8 & 0xff;
            int blue = rgb & 0xff;
            int alpha = rgb & 0xff;
            rgb = red << 16 | green << 8 | blue;
            if (alpha == 255) {
                rgb = 0;
            }
            int index = paletteList.indexOf(rgb);
            if (index == -1) {
                if (paletteList.size() < 256) {
                    index = paletteList.size();
                    paletteList.add(rgb);
                } else {
                    throw new IllegalArgumentException("The target image has more than 255 color in the palette " + id);
                }
            }
            pixels[x + y * width] = (byte) index;
        }
    }
    int[] palette = new int[paletteList.size()];
    final AtomicInteger index = new AtomicInteger(0);
    for (int pallet = 0; pallet < paletteList.size(); pallet++) {
        palette[index.getAndIncrement()] = paletteList.get(pallet);
    }
    return new Background(width, height, palette, pixels);
}
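One thing worth double-checking in the code above, assuming getRGB returns the usual ARGB packing: int alpha = rgb & 0xff; reads the blue channel rather than alpha, so every pixel whose blue component is 255 gets remapped to palette index 0, which could be what makes parts of the texture invisible. A minimal sketch of the intended extraction (whether the index-0 check should then be alpha == 0 or alpha == 255 depends on how your engine treats that entry):
int alpha = (rgb >>> 24) & 0xff; // alpha lives in the top byte of the ARGB int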
I want to convert a colored image to grayscale without using too many predefined methods.
The code for reading the colored image is below.
The main method:
BufferedImage img = null;
try {
    img = ImageIO.read(new File(IMG));
    int[][] pixelData = new int[img.getHeight() * img.getWidth()][3];
    int[] rgb;
    int counter = 0;
    for (int i = 0; i < img.getWidth(); i++) {
        for (int j = 0; j < img.getHeight(); j++) {
            rgb = getPixelData(img, i, j);
            for (int k = 0; k < rgb.length; k++) {
                pixelData[counter][k] = rgb[k];
            }
            counter++;
        }
    }
} catch (IOException e) {
    e.printStackTrace();
}
Another method:
private static int[] getPixelData(BufferedImage img, int x, int y) {
    int argb = img.getRGB(x, y);
    int rgb[] = new int[] {
        (argb >> 16) & 0xff, // red
        (argb >> 8) & 0xff,  // green
        (argb) & 0xff        // blue
    };
    System.out.println("rgb: " + rgb[0] + " " + rgb[1] + " " + rgb[2]);
    return rgb;
}
I need to convert the output to grayscale... Here is some of the output
rgb: 255 255 255
rgb: 255 255 255
rgb: 255 255 255
rgb: 255 255 255
rgb: 255 255 255
rgb: 255 255 255
I need to check every pixel for its RGB and convert it into grayscale.
A quick and dirty method would be:
int avg = ((rgb[0] + rgb[1] + rgb[2]) / 3);
int grey_rgb = 0xFF; // start with an opaque alpha byte
for (int i = 0; i < 3; i++) {
    grey_rgb <<= 8;
    grey_rgb |= avg & 0xFF;
}
img.setRGB(x, y, grey_rgb);
A grey shade is obtained when all of a pixel's RGB components contain the same value (e.g. 240 240 240). To achieve that, we simply use the average of the colour's RGB components.
For more advanced algorithms, see: tannerhelland.com/3643/grayscale-image-algorithm-vb6
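As one example of the weighted formulas described there, you can replace the plain average with the common Rec. 601 luma weights; a small sketch reusing the rgb array from getPixelData:
// Weighted (luma) grayscale instead of a plain average
int lum = (int) Math.round(0.299 * rgb[0] + 0.587 * rgb[1] + 0.114 * rgb[2]);
int grey_rgb = 0xFF000000 | (lum << 16) | (lum << 8) | lum;
img.setRGB(x, y, grey_rgb);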
I've been trying to paint on a canvas but I can't make it work. I can see the JFrame, but it seems it doesn't call the paint method when the Mover() object is added to it. This is the first time I'm using Canvas, so I don't know what I'm missing. Here is the code:
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import java.io.IOException;
import javax.imageio.ImageIO;
import javax.swing.*;
import java.util.*;
import java.awt.*;
import java.io.File;

public class Move extends Canvas
{
    private static int[][] imgRGB;

    public Move()
    {
        try
        {
            BufferedImage hugeImage = ImageIO.read(new File("C:/Users/pc/Pictures/Nave.gif"));
            imgRGB = convertToRGB(hugeImage);
        }
        catch (IOException e)
        {
            System.out.println(e);
        }
    }

    public void Paint(Graphics g)
    {
        super.paint(g);
        for (int i = 0; i < imgRGB.length; i++)
        {
            for (int j = 0; j < imgRGB[i].length; j++)
            {
                g.setColor(new Color(imgRGB[i][j]));
                g.drawLine(i, j, i, j);
            }
        }
    }

    private static int[][] convertToRGB(BufferedImage image) {
        final byte[] pixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
        final int width = image.getWidth();
        final int height = image.getHeight();
        final boolean hasAlphaChannel = image.getAlphaRaster() != null;
        int[][] result = new int[height][width];
        if (hasAlphaChannel) {
            final int pixelLength = 4;
            for (int pixel = 0, row = 0, col = 0; pixel < pixels.length; pixel += pixelLength) {
                int argb = 0;
                argb += (((int) pixels[pixel] & 0xff) << 24); // alpha
                argb += ((int) pixels[pixel + 1] & 0xff); // blue
                argb += (((int) pixels[pixel + 2] & 0xff) << 8); // green
                argb += (((int) pixels[pixel + 3] & 0xff) << 16); // red
                result[row][col] = argb;
                col++;
                if (col == width) {
                    col = 0;
                    row++;
                }
            }
        } else {
            final int pixelLength = 3;
            for (int pixel = 0, row = 0, col = 0; pixel < pixels.length; pixel += pixelLength) {
                int argb = 0;
                argb += -16777216; // 255 alpha
                argb += ((int) pixels[pixel] & 0xff); // blue
                argb += (((int) pixels[pixel + 1] & 0xff) << 8); // green
                argb += (((int) pixels[pixel + 2] & 0xff) << 16); // red
                result[row][col] = argb;
                col++;
                if (col == width) {
                    col = 0;
                    row++;
                }
            }
        }
        return result;
    }

    public static void main(String[] args)
    {
        JFrame container = new JFrame("pixel");
        container.add(new Move());
        container.setSize(400, 400);
        container.setVisible(true);
        container.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
    }
}
public void Paint(Graphics g)
Method names are case sensitive. You should override paint(...).
Always use the @Override annotation and the compiler will notify you when you attempt to override a method that doesn't exist:
@Override
public void paint(Graphics g)
{
    ...
}
However, you should not be extending Canvas in a Swing application.
Instead, you should extend JPanel and override its paintComponent(...) method.
Read the section from the Swing tutorial on Custom Painting for more information and working examples.
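As a minimal sketch of the same pixel loop on a JPanel (assuming imgRGB is filled by convertToRGB exactly as in the question; note that the first array index is the row, i.e. the y coordinate):
import java.awt.Color;
import java.awt.Dimension;
import java.awt.Graphics;
import javax.swing.JFrame;
import javax.swing.JPanel;

public class MovePanel extends JPanel {
    private final int[][] imgRGB; // filled the same way as convertToRGB in the question

    public MovePanel(int[][] imgRGB) {
        this.imgRGB = imgRGB;
        setPreferredSize(new Dimension(400, 400));
    }

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g); // let Swing paint the background first
        for (int row = 0; row < imgRGB.length; row++) {
            for (int col = 0; col < imgRGB[row].length; col++) {
                g.setColor(new Color(imgRGB[row][col], true)); // true: honour the alpha byte
                g.fillRect(col, row, 1, 1); // col is x, row is y
            }
        }
    }

    public static void main(String[] args) {
        // Hypothetical usage: in practice imgRGB would come from convertToRGB(ImageIO.read(...))
        int[][] imgRGB = new int[100][100];
        JFrame frame = new JFrame("pixel");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.add(new MovePanel(imgRGB));
        frame.pack();
        frame.setVisible(true);
    }
}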
I'm trying to get the DCT of a BufferedImage using JTransforms. When I visualise the transform it currently looks like this: http://tinypic.com/r/2vcxhzo/8
In order to use JTransforms I need to convert the BufferedImage to a 2D double array. I've tried two different methods to convert the BufferedImage to a double array.
public double[][] convertTo2DArray(BufferedImage image) {
    final byte[] pixels = ((DataBufferByte) image.getRaster()
            .getDataBuffer()).getData();
    final int width = image.getWidth();
    final int height = image.getHeight();
    double[][] result = new double[height][width];
    final boolean hasAlphaChannel = image.getAlphaRaster() != null;
    if (hasAlphaChannel) {
        final int pixelLength = 4;
        for (int pixel = 0, row = 0, col = 0; pixel < pixels.length; pixel += pixelLength) {
            int argb = 0;
            argb += (((int) pixels[pixel] & 0xff) << 24); // alpha
            argb += ((int) pixels[pixel + 1] & 0xff); // blue
            argb += (((int) pixels[pixel + 2] & 0xff) << 8); // green
            argb += (((int) pixels[pixel + 3] & 0xff) << 16); // red
            result[row][col] = argb;
            col++;
            if (col == width) {
                col = 0;
                row++;
            }
        }
    } else {
        final int pixelLength = 3;
        for (int pixel = 0, row = 0, col = 0; pixel < pixels.length; pixel += pixelLength) {
            int argb = 0;
            argb += -16777216; // 255 alpha
            argb += ((int) pixels[pixel] & 0xff); // blue
            argb += (((int) pixels[pixel + 1] & 0xff) << 8); // green
            argb += (((int) pixels[pixel + 2] & 0xff) << 16); // red
            result[row][col] = argb;
            col++;
            if (col == width) {
                col = 0;
                row++;
            }
        }
    }
    return result;
}
I've also tried
private double[][] bufferedImageToArray(BufferedImage image) {
    int h = image.getHeight();
    int w = image.getWidth();
    int[][] array = new int[h][w];
    double[][] result;
    for (int count = 0; count < h; count++) {
        for (int loop = 0; loop < w; loop++) {
            int gray = image.getRGB(loop, count) & 0xFF;
            // add values to array
            array[count][loop] = gray;
        }
    }
    result = toDoubleArray(array);
    return result;
}
I've implemented the transform as
public double[][] applyDCT(double[][] image) {
    DoubleDCT_2D transform = new DoubleDCT_2D(image.length, image[0].length);
    transform.forward(image, true);
    return image;
}
I tried using OpenCV's dct transform but it gives the same output as shown in the link.
Try something like this (I kept only the blue channel for simplicity). It shows the energy compaction in the upper-left corner of the result image.
import java.awt.GraphicsConfiguration;
import java.awt.GraphicsDevice;
import java.awt.GraphicsEnvironment;
import java.awt.Image;
import java.awt.Transparency;
import java.awt.image.BufferedImage;
import javax.swing.ImageIcon;
import javax.swing.JFrame;
import javax.swing.JLabel;

public class TestDCT
{
    public static void main(String[] args)
    {
        ImageIcon icon = new ImageIcon(args[0]);
        Image image = icon.getImage();
        int w = image.getWidth(null);
        int h = image.getHeight(null);

        GraphicsDevice gs = GraphicsEnvironment.getLocalGraphicsEnvironment().getScreenDevices()[0];
        GraphicsConfiguration gc = gs.getDefaultConfiguration();
        BufferedImage img = gc.createCompatibleImage(w, h, Transparency.OPAQUE);
        img.getGraphics().drawImage(image, 0, 0, null);

        int[] rgb1 = new int[w * h];
        img.getRaster().getDataElements(0, 0, w, h, rgb1);

        double[] array = new double[w * h];
        for (int i = 0; i < w * h; i++)
            array[i] = (double) (rgb1[i] & 0xFF);

        org.jtransforms.dct.DoubleDCT_2D tr = new org.jtransforms.dct.DoubleDCT_2D(w, h);
        tr.forward(array, true);

        for (int i = 0; i < w * h; i++)
        {
            // Grey levels
            int val = Math.min((int) (array[i] + 128), 255);
            rgb1[i] = (val << 16) | (val << 8) | val;
        }

        img.getRaster().setDataElements(0, 0, w, h, rgb1);
        icon = new ImageIcon(img);
        JFrame frame = new JFrame("FFT");
        frame.setBounds(20, 30, w, h);
        frame.add(new JLabel(icon));
        frame.setVisible(true);
    }
}
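On the two conversion methods in the question: convertTo2DArray stores the whole packed ARGB int in each double, so the transform runs on large packed values rather than on a single channel, which is likely why its output looks like noise; bufferedImageToArray (one channel via & 0xFF) is the right idea. A sketch that builds the double[][] directly, without the intermediate int[][] and the unshown toDoubleArray helper:
private double[][] bufferedImageToDoubleArray(BufferedImage image) {
    int h = image.getHeight();
    int w = image.getWidth();
    double[][] result = new double[h][w];
    for (int row = 0; row < h; row++) {
        for (int col = 0; col < w; col++) {
            result[row][col] = image.getRGB(col, row) & 0xFF; // one channel, 0..255
        }
    }
    return result;
}
// Usage: double[][] dct = applyDCT(bufferedImageToDoubleArray(image));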
Hi. I want RGB values in this format: in a 1D vector I want first the R values, then the G values, and then the B values. I tried to use this code:
pixels = new int[bitmap.getHeight() * bitmap.getWidth()];
bitmap.getPixels(pixels, 0, bitmap.getWidth(), 0, 0,
        bitmap.getWidth(), bitmap.getHeight());
// int R, G, B, Y;
for (int y = 0; y < bitmap.getHeight(); y++) {
    for (int x = 0; x < bitmap.getWidth(); x++) {
        int index = y * bitmap.getHeight() + x;
        int R = (pixels[index] >> 16) & 0xff; // bitwise shifting
        int G = (pixels[index] >> 8) & 0xff;
        int B = pixels[index] & 0xff;
        // R, G, B - Red, Green, Blue
        // to restore the values after RGB modification, use
        // the next statement
        pixels[index] = 0xff000000 | (R << 16) | (G << 8) | B;
    }
}
bitmap.recycle();
} catch (NullPointerException exception) {
    Log.e("Error Utils",
            "Photo is damaged or does not support this format!");
}
return pixels;
But I still only get a 300*200 1D array,
not a 300*200*3 1D array!
Maybe this is what you are trying to do:
public static int[] getPixel(Bitmap bitmap) {
    final int width = bitmap.getWidth();
    final int height = bitmap.getHeight();
    int[] pixelIn = new int[width * height];
    bitmap.getPixels(pixelIn, 0, width, 0, 0, width, height);
    bitmap.recycle();
    int[] pixelOut = new int[width * height * 3];
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int index = y * width + x; // row-major: the row stride is the width
            int R = (pixelIn[index] >> 16) & 0xff;
            int G = (pixelIn[index] >> 8) & 0xff;
            int B = (pixelIn[index] >> 0) & 0xff;
            int indexOut = index * 3;
            pixelOut[indexOut++] = R;
            pixelOut[indexOut++] = G;
            pixelOut[indexOut] = B;
        }
    }
    return pixelOut;
}
Untested, but it should create an int[] (you should consider byte[]) that is filled [R][G][B][R][G][B]...
The same for bytes:
public static byte[] getPixelBytes(Bitmap bitmap) {
    final int width = bitmap.getWidth();
    final int height = bitmap.getHeight();
    final int total = width * height;
    int[] pixelIn = new int[total];
    bitmap.getPixels(pixelIn, 0, width, 0, 0, width, height);
    bitmap.recycle();
    byte[] pixelOut = new byte[total * 3];
    int indexOut = 0;
    for (int pixel : pixelIn) {
        byte R = (byte) ((pixel >> 16) & 0xff);
        byte G = (byte) ((pixel >> 8) & 0xff);
        byte B = (byte) (pixel & 0xff);
        pixelOut[indexOut++] = R;
        pixelOut[indexOut++] = G;
        pixelOut[indexOut++] = B;
    }
    return pixelOut;
}
And to get it in three separate arrays like [R R R R][G G G G][B B B B]
public static byte[][] getPixelBytes(Bitmap bitmap) {
    final int width = bitmap.getWidth();
    final int height = bitmap.getHeight();
    final int total = width * height;
    int[] pixelIn = new int[total];
    bitmap.getPixels(pixelIn, 0, width, 0, 0, width, height);
    bitmap.recycle();
    byte[][] result = new byte[3][total];
    int index = 0;
    for (int pixel : pixelIn) {
        byte R = (byte) ((pixel >> 16) & 0xff);
        byte G = (byte) ((pixel >> 8) & 0xff);
        byte B = (byte) (pixel & 0xff);
        result[0][index] = R;
        result[1][index] = G;
        result[2][index] = B;
        index++;
    }
    return result;
}
The RGB values of the 5th pixel (index 4) would be:
byte R = result[0][4];
byte G = result[1][4];
byte B = result[2][4];
Or to separate that into 3 arrays
byte[] rArray = result[0]; // each 0 .. (width x height - 1)
byte[] gArray = result[1];
byte[] bArray = result[2];
Also don't forget that Java's byte is -128..127, not 0..255.
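When you read those bytes back as intensities, mask with 0xff to get into the 0..255 range again, for example:
int red = rArray[4] & 0xff; // 0..255 instead of a possibly negative byte value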