I'm trying to use a filter function from OpenCV in Processing, but first I have to convert my image into a Mat that is 8-bit with 3 channels. Whenever I run it, I get a BufferOverflowException, and I can't figure out why it happens or how to get the conversion to work.
import gab.opencv.*;
import java.nio.*;
import org.opencv.imgproc.Imgproc;
import org.opencv.core.Mat;
import org.opencv.core.CvType;
import org.opencv.core.Core;
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferInt;
OpenCV opencv;
Imgproc imgproc;
PImage src, out;
PImage before, snap;
Mat one, two;
double a = 35.0;
double b = 20.0;
void setup() {
  src = loadImage("cat.jpg");
  size(429, 360);
  System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
  one = new Mat(width, height, CvType.CV_8UC3);
  two = new Mat(width, height, CvType.CV_8UC3);
  one = toMat(src);
  imgproc.pyrMeanShiftFiltering(one, two, a, b);
  out = toPImage(two);
}

void draw() {
  image(out, 0, 0, width, height);
}
Mat toMat(PImage image) {
  int w = image.width;
  int h = image.height;
  Mat mat = new Mat(h, w, CvType.CV_8UC3);
  byte[] data8 = new byte[w*h*3];
  int[] data32 = new int[w*h];
  arrayCopy(image.pixels, data32);
  ByteBuffer bBuf = ByteBuffer.allocate(w*h*3);
  IntBuffer iBuf = bBuf.asIntBuffer();
  iBuf.put(data32); // ERROR -- BufferOverflowException
  bBuf.get(data8);
  mat.put(0, 0, data8);
  return mat;
}
PImage toPImage(Mat mat) {
  int w = mat.width();
  int h = mat.height();
  PImage image = createImage(w, h, RGB);
  byte[] data8 = new byte[w*h*3];
  int[] data32 = new int[w*h];
  mat.get(0, 0, data8);
  ByteBuffer.wrap(data8).asIntBuffer().get(data32);
  arrayCopy(data32, image.pixels);
  return image;
}
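For context on the exception itself: each PImage pixel is a 32-bit int, so the IntBuffer view of a buffer allocated with w*h*3 bytes only has room for (w*h*3)/4 ints, while put() is handed w*h of them. A small diagnostic like the following (hypothetical, not part of the original sketch) makes the mismatch visible:
ByteBuffer bBuf = ByteBuffer.allocate(w * h * 3);  // 3 bytes per pixel...
IntBuffer iBuf = bBuf.asIntBuffer();               // ...viewed as 4-byte ints
println(iBuf.capacity());                          // (w*h*3)/4 ints of room
println(w * h);                                    // ints put() tries to write -- hence the overflow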
Related question:
I'm trying to use the mean shift function from OpenCV inside a program called Processing, which is a language based on Java. So far, I know that the function requires two Mats and two doubles [ pyrMeanShiftFiltering(Mat, Mat, double, double) ] and that the Mats need to be 8-bit with 3 channels. But when I run it, it only seems to work on the upper 3/4 of the image and cuts out the rest.
Does anyone know how to get this function to run on the whole image?
sample image: cat.jpg
import gab.opencv.*;
import java.nio.*;
import org.opencv.imgproc.Imgproc;
import org.opencv.core.Mat;
import org.opencv.core.CvType;
import org.opencv.core.Core;
OpenCV opencv;
Imgproc imgproc;
PImage canny;
PImage src, out;
Mat one, two;
double a = 20.0;
double b = 10.0;
void setup() {
  src = loadImage("cat.jpg");
  size(429, 360);
  System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
  one = new Mat(width, height, CvType.CV_8UC3);
  two = new Mat(width, height, CvType.CV_8UC3);
  one = toMat(src);
  imgproc.pyrMeanShiftFiltering(one, two, a, b);
  out = toPImage(two);
}

void draw() {
  image(out, 0, 0, width, height);
}
Mat toMat(PImage image) {
  int w = image.width;
  int h = image.height;
  Mat mat = new Mat(h, w, CvType.CV_8UC3);
  byte[] data8 = new byte[w*h*4];
  int[] data32 = new int[w*h];
  arrayCopy(image.pixels, data32);
  ByteBuffer bBuf = ByteBuffer.allocate(w*h*4);
  IntBuffer iBuf = bBuf.asIntBuffer();
  iBuf.put(data32);
  bBuf.get(data8);
  mat.put(0, 0, data8);
  return mat;
}
PImage toPImage(Mat mat) {
  int w = mat.width();
  int h = mat.height();
  PImage image = createImage(w, h, ARGB);
  byte[] data8 = new byte[w*h*4];
  int[] data32 = new int[w*h];
  mat.get(0, 0, data8);
  ByteBuffer.wrap(data8).asIntBuffer().get(data32);
  arrayCopy(data32, image.pixels);
  return image;
}
The problem is that you are using rows where the columns should be and columns where the rows should be. Check the Mat documentation for more information.
so, whenever you have this:
new Mat( width, height, CvType.CV_8UC3);
invert the order
new Mat( height, width, CvType.CV_8UC3);
Also, I notice that you have a 4-channel image (ARGB) and the toMat function also handles 4 channels, but you create a Mat with only 3 (CV_8UC3); you should use 4 (CV_8UC4, though I'm not sure of the exact name in Java). And beware: OpenCV normally uses BGRA, so you may need to reorder the channels correctly.
I hope this helps you
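Putting both points together, a minimal sketch of a toMat() with height-first Mat dimensions and an explicit ARGB-int to BGR-byte unpacking might look like this; it is an illustration of the advice above, not the original poster's final code:
Mat toMat(PImage image) {
  int w = image.width;
  int h = image.height;
  image.loadPixels();                       // make sure pixels[] is populated
  Mat mat = new Mat(h, w, CvType.CV_8UC3);  // rows (height) first, then cols (width)
  byte[] data8 = new byte[w * h * 3];
  for (int i = 0; i < w * h; i++) {
    int argb = image.pixels[i];             // Processing packs pixels as 0xAARRGGBB
    data8[i * 3]     = (byte) (argb & 0xFF);          // B
    data8[i * 3 + 1] = (byte) ((argb >> 8) & 0xFF);   // G
    data8[i * 3 + 2] = (byte) ((argb >> 16) & 0xFF);  // R
  }
  mat.put(0, 0, data8);
  return mat;
}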
I'm trying to convert my image into a Mat that is 8-bit with 3 channels, and later back into an image, in OpenCV. But my code doesn't seem to be working.
Does anyone know why this isn't working?
Mat toMat(PImage image) {
  int w = image.width;
  int h = image.height;
  Mat mat = new Mat(h, w, CvType.CV_8UC3);
  byte[] data8 = new byte[w*h*4];
  int[] data32 = new int[w*h];
  arrayCopy(image.pixels, data32);
  ByteBuffer bBuf = ByteBuffer.allocate(w*h*4);
  IntBuffer iBuf = bBuf.asIntBuffer();
  iBuf.put(data32);
  bBuf.get(data8);
  mat.put(0, 0, data8);
  return mat;
}
PImage toPImage(Mat mat) {
  int w = mat.width();
  int h = mat.height();
  PImage image = createImage(h, w, RGB);
  byte[] data8 = new byte[w*h*4];
  int[] data32 = new int[w*h];
  mat.get(0, 0, data8);
  ByteBuffer.wrap(data8).asIntBuffer().get(data32);
  arrayCopy(data32, image.pixels);
  return image;
}
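The same row/column and channel-order points from the answer above apply here as well. For reference, a sketch of the reverse conversion (a CV_8UC3 BGR Mat back into a PImage), assuming the Mat was filled in BGR order as in the earlier sketch:
PImage toPImage(Mat mat) {
  int w = mat.width();   // cols
  int h = mat.height();  // rows
  PImage image = createImage(w, h, RGB);  // width first, then height
  byte[] data8 = new byte[w * h * 3];
  mat.get(0, 0, data8);
  image.loadPixels();
  for (int i = 0; i < w * h; i++) {
    int b = data8[i * 3] & 0xFF;
    int g = data8[i * 3 + 1] & 0xFF;
    int r = data8[i * 3 + 2] & 0xFF;
    image.pixels[i] = 0xFF000000 | (r << 16) | (g << 8) | b;  // opaque ARGB
  }
  image.updatePixels();
  return image;
}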
I am unable to successfully convert a javafx.scene.image.Image to an org.opencv.core.Mat. The resulting matrix produces a black image. I've not used PixelReader before, so I am unsure whether I am using it correctly.
Here is my code:
public static Mat imageToMat(Image image) {
  int width = (int) image.getWidth();
  int height = (int) image.getHeight();
  byte[] buffer = new byte[width * height * 3];
  PixelReader reader = image.getPixelReader();
  WritablePixelFormat format = WritablePixelFormat.getByteBgraInstance();
  reader.getPixels(0, 0, width, height, format, buffer, 0, 0);
  Mat mat = new Mat(height, width, CvType.CV_8UC3);
  mat.put(0, 0, buffer);
  return mat;
}
Any help/solutions would be greatly appreciated! :) Thank you.
This is still somewhat makeshift, but I've found two working solutions. I'll just post my OpenCvUtils class and hope it helps until someone comes up with a better solution:
import java.awt.AlphaComposite;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import java.io.ByteArrayInputStream;
import java.net.URISyntaxException;
import java.nio.file.Paths;
import javafx.embed.swing.SwingFXUtils;
import javafx.scene.image.Image;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfByte;
import org.opencv.imgcodecs.Imgcodecs;
public class OpenCvUtils {

  /**
   * Convert a Mat object (OpenCV) into the corresponding Image for JavaFX.
   *
   * @param frame the {@link Mat} representing the current frame
   * @return the {@link Image} to show
   */
  public static Image mat2Image(Mat frame) {
    // create a temporary buffer
    MatOfByte buffer = new MatOfByte();
    // encode the frame in the buffer, according to the PNG format
    Imgcodecs.imencode(".png", frame, buffer);
    // build and return an Image created from the image encoded in the buffer
    return new Image(new ByteArrayInputStream(buffer.toArray()));
  }

  public static Mat image2Mat(Image image) {
    BufferedImage bImage = SwingFXUtils.fromFXImage(image, null);
    return bufferedImage2Mat(bImage);
  }
  // http://www.codeproject.com/Tips/752511/How-to-Convert-Mat-to-BufferedImage-Vice-Versa
  public static Mat bufferedImage2Mat(BufferedImage in) {
    Mat out;
    byte[] data;
    int r, g, b;
    int height = in.getHeight();
    int width = in.getWidth();

    if (in.getType() == BufferedImage.TYPE_INT_RGB || in.getType() == BufferedImage.TYPE_INT_ARGB) {
      out = new Mat(height, width, CvType.CV_8UC3);
      data = new byte[height * width * (int) out.elemSize()];
      int[] dataBuff = in.getRGB(0, 0, width, height, null, 0, width);
      for (int i = 0; i < dataBuff.length; i++) {
        data[i*3 + 2] = (byte) ((dataBuff[i] >> 16) & 0xFF);
        data[i*3 + 1] = (byte) ((dataBuff[i] >> 8) & 0xFF);
        data[i*3]     = (byte) ((dataBuff[i] >> 0) & 0xFF);
      }
    } else {
      out = new Mat(height, width, CvType.CV_8UC1);
      data = new byte[height * width * (int) out.elemSize()];
      int[] dataBuff = in.getRGB(0, 0, width, height, null, 0, width);
      for (int i = 0; i < dataBuff.length; i++) {
        // mask first so the channel values stay in the 0..255 range
        r = (dataBuff[i] >> 16) & 0xFF;
        g = (dataBuff[i] >> 8) & 0xFF;
        b = (dataBuff[i] >> 0) & 0xFF;
        data[i] = (byte) ((0.21 * r) + (0.71 * g) + (0.07 * b)); // luminosity
      }
    }
    out.put(0, 0, data);
    return out;
  }
  public static String getOpenCvResource(Class<?> clazz, String path) {
    try {
      return Paths.get(clazz.getResource(path).toURI()).toString();
    } catch (URISyntaxException e) {
      throw new RuntimeException(e);
    }
  }

  // Convert image to Mat
  // alternate version http://stackoverflow.com/questions/21740729/converting-bufferedimage-to-mat-opencv-in-java
  public static Mat bufferedImage2Mat_v2(BufferedImage im) {
    im = toBufferedImageOfType(im, BufferedImage.TYPE_3BYTE_BGR);
    // Convert INT to BYTE
    //im = new BufferedImage(im.getWidth(), im.getHeight(), BufferedImage.TYPE_3BYTE_BGR);
    // Convert bufferedimage to byte array
    byte[] pixels = ((DataBufferByte) im.getRaster().getDataBuffer()).getData();
    // Create a Matrix the same size of image
    Mat image = new Mat(im.getHeight(), im.getWidth(), CvType.CV_8UC3);
    // Fill Matrix with image values
    image.put(0, 0, pixels);
    return image;
  }

  private static BufferedImage toBufferedImageOfType(BufferedImage original, int type) {
    if (original == null) {
      throw new IllegalArgumentException("original == null");
    }
    // Don't convert if it already has correct type
    if (original.getType() == type) {
      return original;
    }
    // Create a buffered image
    BufferedImage image = new BufferedImage(original.getWidth(), original.getHeight(), type);
    // Draw the image onto the new buffer
    Graphics2D g = image.createGraphics();
    try {
      g.setComposite(AlphaComposite.Src);
      g.drawImage(original, 0, 0, null);
    } finally {
      g.dispose();
    }
    return image;
  }
}
Thanks to Nikos Paraskevopoulos for suggesting setting the scanlineStride parameter of the PixelReader::getPixels() method, this has solved it. :)
Working code below:
public static Mat imageToMat(Image image) {
  int width = (int) image.getWidth();
  int height = (int) image.getHeight();
  byte[] buffer = new byte[width * height * 4];
  PixelReader reader = image.getPixelReader();
  WritablePixelFormat<ByteBuffer> format = WritablePixelFormat.getByteBgraInstance();
  reader.getPixels(0, 0, width, height, format, buffer, 0, width * 4);
  Mat mat = new Mat(height, width, CvType.CV_8UC4);
  mat.put(0, 0, buffer);
  return mat;
}
You need to convert: Mat > BufferedImage > FXImage
private Image mat2Image(Mat src) {
  BufferedImage image = ImageConverter.toImage(src);
  return SwingFXUtils.toFXImage(image, null);
}
Class:
public class ImageConverter {

  /**
   * Converts/writes a Mat into a BufferedImage.
   *
   * @param src Mat of type CV_8UC3 or CV_8UC1
   * @return BufferedImage of type TYPE_3BYTE_BGR or TYPE_BYTE_GRAY
   */
  public static BufferedImage toImage(Mat src) {
    if (src != null) {
      int cols = src.cols();
      int rows = src.rows();
      int elemSize = (int) src.elemSize();
      byte[] data = new byte[cols * rows * elemSize];
      int type;
      src.get(0, 0, data);
      switch (src.channels()) {
        case 1:
          type = BufferedImage.TYPE_BYTE_GRAY;
          break;
        case 3:
          type = BufferedImage.TYPE_3BYTE_BGR;
          // bgr to rgb
          byte b;
          for (int i = 0; i < data.length; i = i + 3) {
            b = data[i];
            data[i] = data[i+2];
            data[i+2] = b;
          }
          break;
        default:
          return null;
      }
      BufferedImage bimg = new BufferedImage(cols, rows, type);
      bimg.getRaster().setDataElements(0, 0, cols, rows, data);
      return bimg;
    }
    return null;
  }
}
Following the solution above, it may also be necessary to convert the format from four channels (CvType.CV_8UC4) to three channels (CvType.CV_8UC3), depending on what you ultimately need. For example, if I read an xx.jpg image, it is in RGB format.
if (isRGB)
  Imgproc.cvtColor(mat, mat, Imgproc.COLOR_RGBA2RGB);
  // or ...COLOR_BGR2RGB, COLOR_BGRA2RGB, COLOR_BGR2BGRA
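As a concrete example, combining the working imageToMat() above with such a conversion might look like the following sketch; fxImage is a placeholder name, and which COLOR_* constant applies depends on your source's channel order (getByteBgraInstance() fills the buffer in BGRA order, so COLOR_BGRA2BGR matches that case):
Mat bgra = imageToMat(fxImage);                       // CV_8UC4, BGRA per getByteBgraInstance()
Mat bgr = new Mat();
Imgproc.cvtColor(bgra, bgr, Imgproc.COLOR_BGRA2BGR);  // drop alpha, keep BGR channel order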
How do I flip a screenshot image? I can't find this problem addressed anywhere else. Example code:
/*
 * @param fileLoc // location of the file output destination
 * @param format  // "png"
 * @param WIDTH   // Display.width();
 * @param HEIGHT  // Display.height();
 */
private void getScreenImage() {
  int[] pixels = new int[WIDTH * HEIGHT];
  int bindex;
  // allocate space for RGB pixels
  ByteBuffer fb = ByteBuffer.allocateDirect(WIDTH * HEIGHT * 3);//.order(ByteOrder.nativeOrder());
  // grab a copy of the current frame contents as RGB
  glReadPixels(0, 0, WIDTH, HEIGHT, GL_RGB, GL_UNSIGNED_BYTE, fb);
  BufferedImage image = new BufferedImage(WIDTH, HEIGHT, BufferedImage.TYPE_INT_RGB);
  // convert RGB data in ByteBuffer to integer array
  for (int i = 0; i < pixels.length; i++) {
    bindex = i * 3;
    pixels[i] =
        ((fb.get(bindex) << 16)) +
        ((fb.get(bindex+1) << 8)) +
        ((fb.get(bindex+2) << 0));
  }
  try {
    // Create a BufferedImage with the RGB pixels then save as PNG
    image.setRGB(0, 0, WIDTH, HEIGHT, pixels, 0, WIDTH);
    ImageIO.write(image, format, fileLoc);
  }
  catch (Exception e) {
    System.out.println("ScreenShot() exception: " + e);
  }
}
Basically the code works for capturing the screen and storing it as "png".
But it outputs the image horizontally flipped, because glReadPixels() reads from bottom-left to top-right.
So how do I flip the image horizontally before I call ImageIO.write()?
Thanks in advance,
Rose.
Example of flipping an image horizontally using an AffineTransform:
import java.awt.*;
import java.awt.geom.AffineTransform;
import java.awt.image.BufferedImage;
import javax.swing.*;
public class Test001 {

  public static BufferedImage getFlippedImage(BufferedImage bi) {
    BufferedImage flipped = new BufferedImage(
        bi.getWidth(),
        bi.getHeight(),
        bi.getType());
    AffineTransform tran = AffineTransform.getTranslateInstance(bi.getWidth(), 0);
    AffineTransform flip = AffineTransform.getScaleInstance(-1d, 1d);
    tran.concatenate(flip);

    Graphics2D g = flipped.createGraphics();
    g.setTransform(tran);
    g.drawImage(bi, 0, 0, null);
    g.dispose();

    return flipped;
  }

  Test001(BufferedImage bi) {
    JPanel gui = new JPanel(new GridLayout(1, 2, 2, 2));
    gui.add(new JLabel(new ImageIcon(bi)));
    gui.add(new JLabel(new ImageIcon(getFlippedImage(bi))));
    JOptionPane.showMessageDialog(null, gui);
  }

  public static void main(String[] args) throws AWTException {
    final Robot robot = new Robot();
    Runnable r = new Runnable() {
      @Override
      public void run() {
        final BufferedImage bi = robot.createScreenCapture(
            new Rectangle(0, 360, 200, 100));
        new Test001(bi);
      }
    };
    SwingUtilities.invokeLater(r);
  }
}
It's worth noting that it might be faster to simply read the pixels out of the buffer in the order you want them, rather than read them backwards and do a costly transform operation. Additionally, since you know for sure that the BufferedImage is TYPE_INT_RGB it should be safe to write directly into its raster.
ByteBuffer fb = BufferUtils.createByteBuffer(WIDTH * HEIGHT * 3);
BufferedImage image = new BufferedImage(WIDTH, HEIGHT, BufferedImage.TYPE_INT_RGB);
glReadPixels(0, 0, WIDTH, HEIGHT, GL_RGB, GL_UNSIGNED_BYTE, fb);
int[] pixels = ((DataBufferInt) image.getRaster().getDataBuffer()).getData();
for (int i = pixels.length - 1; i >= 0; i--) {
  int x = i % WIDTH, y = i / WIDTH * WIDTH;
  pixels[y + WIDTH - 1 - x] = (fb.get() & 0xff) << 16 | (fb.get() & 0xff) << 8 | fb.get() & 0xff;
}
I'm loading an image using C++ and feeding the pixels to JNI via a ByteBuffer. I know the pixels are being fed just fine, because square images render perfectly, while rectangular ones get distorted. I've also saved the image back successfully in the DLL, and that works; it's only the Java side that fails unless the image is roughly square. I cannot figure out why. What am I doing wrong?
package library;
import java.awt.image.BufferedImage;
import javax.swing.*;
public class Frame extends JFrame {
  public Frame(int Width, int Height, String FrameName, BufferedImage Buffer) {
    setName(FrameName);
    setSize(Width, Height);
    getContentPane().add(new JLabel(new ImageIcon(Buffer)));
    setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE);
    setVisible(true);
  }
}
All the loading:
package library;
import java.awt.image.BufferedImage;
import java.awt.image.WritableRaster;
import java.io.IOException;
import java.nio.ByteBuffer;
class SharedLibrary {
  static { System.loadLibrary("TestDLL"); }

  private static native void GetGLBuffer(ByteBuffer Buffer);

  private ByteBuffer Buffer = null;
  private int ByteSize = 0, Width = 0, Height = 0, BitsPerPixel = 32;

  public SharedLibrary(int ImageWidth, int ImageHeight) throws IOException {
    Width = ImageWidth;
    Height = ImageHeight;
    ByteSize = ((Width * BitsPerPixel + 31) / 32) * 4 * Height; //Compute Image Size in Bytes.
    Buffer = ByteBuffer.allocateDirect(ByteSize); //Allocate Space for the image data.
    GetGLBuffer(Buffer); //Fill the buffer with Image data from the DLL.

    byte[] Bytes = new byte[ByteSize];
    Buffer.get(Bytes);

    BufferedImage Image = new BufferedImage(Width, Height, BufferedImage.TYPE_3BYTE_BGR);
    WritableRaster raster = (WritableRaster) Image.getData();
    raster.setPixels(0, 0, Width, Height, ByteBufferToIntBuffer(Bytes));
    Image.setData(raster);

    Frame F = new Frame(Width, Height, "", Image);
  }

  private int[] ByteBufferToIntBuffer(byte[] Data) {
    int IntBuffer[] = new int[Data.length];
    for (int I = 0; I < Data.length; I++) {
      IntBuffer[I] = (int) Data[I] & 0xFF;
    }
    return IntBuffer;
  }
}
The image gets drawn perfectly when it is almost square, but if I resize it to a rectangle it gets distorted.
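One thing worth double-checking is whether the raster write matches the 4-bytes-per-pixel layout implied by BitsPerPixel = 32, since Raster.setPixels() on a TYPE_3BYTE_BGR image expects 3 samples per pixel. Below is a minimal sketch of an alternative that copies the bytes straight into a 4-byte BufferedImage raster, assuming the DLL fills the buffer with tightly packed BGRA rows (an assumption, since the native side isn't shown):
// Hypothetical helper (requires java.awt.image.DataBufferByte), assuming the DLL
// fills the buffer with tightly packed BGRA rows of exactly Width * Height * 4 bytes.
private BufferedImage toAbgrImage(byte[] bytes, int width, int height) {
  BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_4BYTE_ABGR);
  byte[] raster = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
  for (int i = 0; i < width * height; i++) {
    // native BGRA -> BufferedImage's internal A, B, G, R byte order
    raster[i * 4]     = bytes[i * 4 + 3]; // A
    raster[i * 4 + 1] = bytes[i * 4];     // B
    raster[i * 4 + 2] = bytes[i * 4 + 1]; // G
    raster[i * 4 + 3] = bytes[i * 4 + 2]; // R
  }
  return image;
}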