I am working on Android development. Once I get a byte array from a Google Glass camera frame, I try to scan the array with the ZXing library to detect a 1D barcode (a UPC code).
I have tried this code snippet:
BufferedImage image = ImageIO.read(game);
BufferedImageLuminanceSource bils = new BufferedImageLuminanceSource(image);
HybridBinarizer hb = new HybridBinarizer(bils);
BitMatrix bm = hb.getBlackMatrix();
MultiDetector detector = new MultiDetector(bm);
DetectorResult dResult = detector.detect();
if (dResult == null)
{
    System.out.println("Image does not contain any barcode");
}
else
{
    BitMatrix QRImageData = dResult.getBits();
    Decoder decoder = new Decoder();
    DecoderResult decoderResult = decoder.decode(QRImageData);
    String QRString = decoderResult.getText();
    System.out.println(QRString);
}
It works fine for QR codes: it detects and decodes them well. But it does not detect UPC codes.
I also tried this code snippet:
InputStream barCodeInputStream = new FileInputStream(game);
BufferedImage barCodeBufferedImage = ImageIO.read(barCodeInputStream);
BufferedImage image = ImageIO.read(game);
LuminanceSource source = new BufferedImageLuminanceSource(image);
BinaryBitmap bitmap = new BinaryBitmap(new GlobalHistogramBinarizer(source));
RSSExpandedReader rssExpandedReader = new RSSExpandedReader();
int rowNumber = bitmap.getHeight()/2;
BitArray row = bitmap.getBlackRow(0, null);
Result theResult = rssExpandedReader.decodeRow(rowNumber, row, new Hashtable());
and in both cases I am getting: Exception in thread "main" com.google.zxing.NotFoundException.
Does anyone know how to fix this issue?
getBlackMatrix() -
Converts a 2D array of luminance data to 1 bit. As above, assume this method is expensive and do not call it repeatedly. This method is intended for decoding 2D barcodes and may or may not apply sharpening. Therefore, a row from this matrix may not be identical to one fetched using getBlackRow(), so don't mix and match between them.
getBlackRow()-
Converts one row of luminance data to 1 bit data. May actually do the conversion, or return cached data. Callers should assume this method is expensive and call it as seldom as possible. This method is intended for decoding 1D barcodes and may choose to apply sharpening.
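To illustrate the 1D path that documentation describes, here is a hedged sketch (not from the original post) that lets ZXing's MultiFormatReader choose the decoder; for 1D formats such as UPC it samples rows via getBlackRow() internally. The file variable game and the format list are assumptions, and imports from java.util and the ZXing packages used in the question are omitted:

BufferedImage image = ImageIO.read(game);
LuminanceSource source = new BufferedImageLuminanceSource(image);
BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));

// Declare the formats we expect; TRY_HARDER trades speed for accuracy.
Map<DecodeHintType, Object> hints = new EnumMap<>(DecodeHintType.class);
hints.put(DecodeHintType.TRY_HARDER, Boolean.TRUE);
hints.put(DecodeHintType.POSSIBLE_FORMATS,
        Arrays.asList(BarcodeFormat.UPC_A, BarcodeFormat.EAN_13, BarcodeFormat.QR_CODE));

try {
    Result result = new MultiFormatReader().decode(bitmap, hints);
    System.out.println(result.getBarcodeFormat() + ": " + result.getText());
} catch (NotFoundException e) {
    System.out.println("Image does not contain any barcode");
}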
Related
When I read a PNG image in Java using javax.imageio.ImageIO.read(), the resulting BufferedImage is of TYPE_3BYTE_BGR or TYPE_4BYTE_ABGR depending on transparency.
I'm processing very large images (64+ megapixels) and need them in TYPE_INT_RGB / TYPE_INT_ARGB format. That requires an expensive, memory-hogging repaint of the image onto a new image in the correct format, which is causing OOMs.
It would be much better if I could somehow persuade ImageIO to read the image in the desired format from the get-go - is there any way of doing that? Thanks!
Yes, it is possible to read into a predefined type of BufferedImage, given that the type is supported by and compatible with the reader plugin. Most often the TYPE_#BYTE_* types are compatible with the TYPE_INT_* types, and this is the case for the standard PNGImageReader.
To make it work, you need access to the ImageReader directly, and use the read method that takes an ImageReadParam to control the type of image. It's possible to read into a pre-allocated image by using the ImageReadParam.setDestination(..) method, or to just specify the type of image and let the reader plugin allocate it for you by using ImageReadParam.setDestinationType(..) like I will show below.
Here's a short stand-alone code sample that shows how to read into a specific image type:
public static void main(String[] args) throws IOException {
    File input = new File(args[0]);

    try (ImageInputStream stream = ImageIO.createImageInputStream(input)) {
        // Find a suitable reader
        Iterator<ImageReader> readers = ImageIO.getImageReaders(stream);
        if (!readers.hasNext()) {
            throw new IIOException("No reader for " + input);
        }

        ImageReader reader = readers.next();
        try {
            reader.setInput(stream);

            // Query the reader for types and select the best match
            ImageTypeSpecifier intPackedType = getIntPackedType(reader);
            System.out.println("intPackedType = " + intPackedType);

            // Pass the type to the reader using read param
            ImageReadParam param = reader.getDefaultReadParam();
            param.setDestinationType(intPackedType);

            // Finally read the image
            BufferedImage image = reader.read(0, param);
            System.out.println("image = " + image);
        }
        finally {
            reader.dispose();
        }
    }
}

private static ImageTypeSpecifier getIntPackedType(ImageReader reader) throws IOException {
    Iterator<ImageTypeSpecifier> types = reader.getImageTypes(0);

    while (types.hasNext()) {
        ImageTypeSpecifier spec = types.next();

        switch (spec.getBufferedImageType()) {
            case BufferedImage.TYPE_INT_RGB:
            case BufferedImage.TYPE_INT_ARGB:
                return spec;
            default:
                // continue searching
        }
    }

    return null;
}
Sample output from one of my runs using a PNG as input:
intPackedType = javax.imageio.ImageTypeSpecifier$Packed@707084ba
image = BufferedImage@45ff54e6: type = 2 DirectColorModel: rmask=ff0000 gmask=ff00 bmask=ff amask=ff000000 IntegerInterleavedRaster: width = 100 height = 100 #Bands = 4 xOff = 0 yOff = 0 dataOffset[0] 0
Where type = 2 means BufferedImage.TYPE_INT_ARGB.
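As mentioned above, it's also possible to read into a pre-allocated image with ImageReadParam.setDestination(..). A minimal sketch (assuming the reader has already been set up as in the sample above, and that the reader supports TYPE_INT_RGB for the image in question):

private static BufferedImage readIntoPreallocated(ImageReader reader) throws IOException {
    // Allocate the destination up front, in the exact type we want
    BufferedImage destination = new BufferedImage(
            reader.getWidth(0), reader.getHeight(0), BufferedImage.TYPE_INT_RGB);

    ImageReadParam param = reader.getDefaultReadParam();
    param.setDestination(destination);

    // The returned image is the same instance we passed in
    return reader.read(0, param);
}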
I want to save the depth info from ARCore to storage.
Here is the example from the developer guide.
public int getMillimetersDepth(Image depthImage, int x, int y) {
    // The depth image has a single plane, which stores depth for each
    // pixel as 16-bit unsigned integers.
    Image.Plane plane = depthImage.getPlanes()[0];
    int byteIndex = x * plane.getPixelStride() + y * plane.getRowStride();
    ByteBuffer buffer = plane.getBuffer().order(ByteOrder.nativeOrder());
    short depthSample = buffer.getShort(byteIndex);
    return depthSample;
}
So I want to save this ByteBuffer to a local file, but my output .txt file is not readable. How can I fix this?
Here is what I have:
Image depthImage = frame.acquireDepthImage();
Image.Plane plane = depthImage.getPlanes()[0];
int format = depthImage.getFormat();
ByteBuffer buffer = plane.getBuffer().order(ByteOrder.nativeOrder());
byte[] data = new byte[buffer.remaining()];
buffer.get(data);
File mypath = new File(super.getExternalFilesDir("depDir"), Long.toString(lastPointCloudTimestamp) + ".txt");
FileChannel fc = new FileOutputStream(mypath).getChannel();
fc.write(buffer);
fc.close();
depthImage.close();
I tried to decode them with
String s = new String(data, "UTF-8");
System.out.println(StandardCharsets.UTF_8.decode(buffer).toString());
but the output is still strange like this
.03579:<=>#ABCDFGHJKMNOPRQSUWY]_b
To save the depth data provided by the ARCore session, you need to write the raw bytes into your local file. A Buffer object is a container: it contains a finite sequence of elements of a specific primitive type (here bytes, for a ByteBuffer). So what you need to write to your file is your data variable, which holds the information previously stored in the buffer (per buffer.get(data)).
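A minimal sketch of the corrected write (reusing the frame, lastPointCloudTimestamp and "depDir" directory from the question; exception handling omitted): the key change is writing the data array rather than the already-consumed ByteBuffer.

Image depthImage = frame.acquireDepthImage();
Image.Plane plane = depthImage.getPlanes()[0];
ByteBuffer buffer = plane.getBuffer().order(ByteOrder.nativeOrder());
byte[] data = new byte[buffer.remaining()];
buffer.get(data); // copies the buffer contents and consumes the buffer

File mypath = new File(getExternalFilesDir("depDir"), lastPointCloudTimestamp + ".txt");
try (FileOutputStream fos = new FileOutputStream(mypath)) {
    fos.write(data); // write the copied bytes, not the consumed ByteBuffer
} finally {
    depthImage.close();
}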
This works fine for me: I managed to render the resulting depth map with the following Python code (the idea can easily be adapted to Java):
import numpy as np
import cv2 as cv

depthData = np.fromfile('depthdata.txt', dtype=np.uint16)

H = 120
W = 160

def extractDepth(x):
    # Top 3 bits hold the confidence value, the low 13 bits the depth in millimeters
    depthConfidence = (x >> 13) & 0x7
    if depthConfidence > 6:
        return 0
    return x & 0x1FFF

depthMap = np.array([extractDepth(x) for x in depthData]).reshape(H, W)
depthMap = cv.rotate(depthMap, cv.ROTATE_90_CLOCKWISE)
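For completeness, a hedged Java adaptation of the same decoding step (assuming the file was written on a little-endian device and uses the 160x120 DEPTH16 layout described below; the file name is the same placeholder as above):

byte[] raw = Files.readAllBytes(Paths.get("depthdata.txt"));
ShortBuffer samples = ByteBuffer.wrap(raw).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer();

int W = 160, H = 120;
int[][] depthMap = new int[H][W];
for (int y = 0; y < H; y++) {
    for (int x = 0; x < W; x++) {
        int sample = samples.get(y * W + x) & 0xFFFF;
        int confidence = (sample >> 13) & 0x7;                      // top 3 bits
        depthMap[y][x] = (confidence > 6) ? 0 : (sample & 0x1FFF);  // depth in mm
    }
}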
For further details, read the information concerning the format of the depthmap (DEPTH16) here: https://developer.android.com/reference/android/graphics/ImageFormat#DEPTH16
Also be aware that the depth map resolution is 160x120 pixels and that it is oriented in landscape format.
Make sure as well to surround your file I/O code with a try/catch block in case of an IOException.
I'm trying to hide a message inside a .gif for a steganography project.
I've converted the input GIF to an ArrayList of BufferedImages and applied my steganography algorithm.
But I came across an issue when converting the ArrayList of BufferedImages back to a .gif.
I used this GifSequenceWriter class to convert the BufferedImages array to a new .gif after getting the original delay between frames from the original gif image metadata.
File encoded_img = new File("output.gif");
ImageOutputStream output = new FileImageOutputStream(encoded_img);

GifSequenceWriter writer = new GifSequenceWriter(output, frames.get(0).getType(), delayTimeMS, true);
writer.writeToSequence(frames.get(0));
for (int k = 1; k < frames.size() - 1; k++) {
    writer.writeToSequence(frames.get(k));
}

writer.close();
output.close();
But the resulting .gif looks really bad. I've saved the individual frames with and without the steganography algorithm and they look fine. You can check out an example of the original image, the 10 saved frames and the resulting .gif here.
Is there a better way to create .gifs in java?
Thanks in advance.
There's a problem with the GifSequenceWriter when using palette images (BufferedImage.TYPE_BYTE_INDEXED with an IndexColorModel). It will create metadata based on a default 216-color palette (the web-safe palette), which is clearly different from the colors in your image.
The problematic lines in GifSequenceWriter:
ImageTypeSpecifier imageTypeSpecifier = ImageTypeSpecifier.createFromBufferedImageType(imageType);
imageMetaData = gifWriter.getDefaultImageMetadata(imageTypeSpecifier, imageWriteParam);
Instead, the metadata should be based on the color palette in the index color model of your image. But the good news is, it works fine without it.
You can simply use:
GifSequenceWriter writer = new GifSequenceWriter(output, BufferedImage.TYPE_INT_ARGB, delayTimeMS, true);
...and the writer will automatically create the palette as needed, from your actual image data.
It's also possible to fix the GifSequenceWriter, to accept an ImageTypeSpecifier instead of the int imageType, however, this will only work if all frames use the same palette, I think:
public GifSequenceWriter(
        ImageOutputStream outputStream,
        ImageTypeSpecifier imageTypeSpecifier,
        int timeBetweenFramesMS,
        boolean loopContinuously) throws IIOException, IOException {
    // my method to create a writer
    gifWriter = getWriter();
    imageWriteParam = gifWriter.getDefaultWriteParam();
    imageMetaData = gifWriter.getDefaultImageMetadata(imageTypeSpecifier, imageWriteParam);

    // ... rest of the method unchanged.
Usage:
ColorModel cm = firstImage.getColorModel();
ImageTypeSpecifier imageType = new ImageTypeSpecifier(cm, cm.createCompatibleSampleModel(1, 1));
GifSequenceWriter writer = new GifSequenceWriter(output, imageType, delayTimeMS, true);
I am trying to read 2D Data Matrix barcodes using the ZXing library (GenericMultipleBarcodeReader). I have multiple barcodes on a single image.
The problem is that the detection rate of the ZXing reader is very low: it recognizes 1 barcode from image 1.png and no barcodes from image 2.png, which has 48 barcodes. Is there any way to get 100% detection, or another library that achieves it?
My code to read barcode is:
public static void main(String[] args) throws Exception {
    BufferedImage image = ImageIO.read(new File("1.png"));
    if (image != null) {
        LuminanceSource source = new BufferedImageLuminanceSource(image);
        BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));

        DataMatrixReader dataMatrixReader = new DataMatrixReader();
        Hashtable<DecodeHintType, Object> hints = new Hashtable<DecodeHintType, Object>();
        hints.put(DecodeHintType.TRY_HARDER, Boolean.TRUE);

        GenericMultipleBarcodeReader reader = new GenericMultipleBarcodeReader(dataMatrixReader);
        Result[] results = reader.decodeMultiple(bitmap, hints);
        for (Result result : results) {
            System.out.println(result.toString());
        }
    }
}
And images I used are:
Please help to resolve this issue.
Thanks
It doesn't quite work this way. It will not read barcodes in a grid, as it makes an assumption that it can cut up the image in a certain way that won't be compatible with grids. You will have to write your own method to cut up the image into scannable regions.
It is also the case that the Data Matrix decoder assumes the center of the image is inside the barcode. This is another reason you need to pre-chop the image into squares around the cylinders and then scan. It ought to work fairly well then.
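A hedged sketch of that pre-chopping approach (the grid dimensions tilesX/tilesY are assumptions you would tune to your layout; imports from java.util and the same ZXing packages as the question are omitted):

static List<Result> decodeGrid(BufferedImage image, int tilesX, int tilesY) {
    List<Result> results = new ArrayList<>();
    int tileW = image.getWidth() / tilesX;
    int tileH = image.getHeight() / tilesY;

    Map<DecodeHintType, Object> hints = new EnumMap<>(DecodeHintType.class);
    hints.put(DecodeHintType.TRY_HARDER, Boolean.TRUE);
    DataMatrixReader reader = new DataMatrixReader();

    for (int ty = 0; ty < tilesY; ty++) {
        for (int tx = 0; tx < tilesX; tx++) {
            // Each tile should contain roughly one code, near its center
            BufferedImage tile = image.getSubimage(tx * tileW, ty * tileH, tileW, tileH);
            BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(new BufferedImageLuminanceSource(tile)));
            try {
                results.add(reader.decode(bitmap, hints));
            } catch (ReaderException e) {
                // no readable barcode in this tile; skip it
            }
        }
    }
    return results;
}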
An alternative solution is to consider a barcode engine that can detect multiple barcodes in various orientations on one document. If you're running on Windows, the ClearImage Barcode SDK has a Java API and should be able to handle your needs without pre-processing. You can test if their engine can read your image using their Online Barcode Reader.
Some sample code:
public static void testDataMatrix() {
    try {
        String filename = "1.png";
        CiServer objCi = new CiServer();
        ICiServer Ci = objCi.getICiServer();

        ICiDataMatrix reader = Ci.CreateDataMatrix(); // read DataMatrix Barcode
        reader.getImage().Open(filename, 1);
        int n = reader.Find(0); // find all the barcodes in the doc
        for (int i = 1; i <= n; i++) {
            ICiBarcode Bc = reader.getBarcodes().getItem(i); // getItem is 1-based
            System.out.println("Barcode " + i + " has Text: " + Bc.getText());
        }
    } catch (Exception ex) {
        System.out.println(ex.getMessage());
    }
}
Disclaimer: I've done some work for Inlite in the past.
-Edit-
FYI: I am converting B&W documents scanned in as grayscale or color.
1) The first solution worked, but it reversed black and white (black background, white text). It also took nearly 10 minutes.
2) The JAI solution in the second answer didn't work for me; I tried it before posting here.
Has anyone worked with other libraries free or pay that handle image manipulation well?
-Original-
I am trying to convert a PNG to a bitonal TIFF using Java ImageIO. Has anyone had any luck doing this? I have got it to convert from PNG to TIFF. I am not sure if I need to convert the BufferedImage (PNG) that I read in, or convert the TIFF as I write it. I have searched and searched but nothing seems to work. Does anyone have any suggestions where to look?
Here is the code that converts...
public static void test() throws IOException {
    String fileName = "4848970_1";
    // String fileName = "color";
    String inFileType = ".PNG";
    String outFileType = ".TIFF";

    File fInputFile = new File("I:/HPF/UU/" + fileName + inFileType);
    InputStream fis = new BufferedInputStream(new FileInputStream(fInputFile));
    ImageReaderSpi spi = new PNGImageReaderSpi();
    ImageReader reader = spi.createReaderInstance();
    ImageInputStream iis = ImageIO.createImageInputStream(fis);
    reader.setInput(iis, true);
    BufferedImage bi = reader.read(0);

    int[] xi = bi.getSampleModel().getSampleSize();
    for (int i : xi) {
        System.out.println("bitsize " + i);
    }

    ImageWriterSpi tiffspi = new TIFFImageWriterSpi();
    TIFFImageWriter writer = (TIFFImageWriter) tiffspi.createWriterInstance();
    // TIFFImageWriteParam param = (TIFFImageWriteParam) writer.getDefaultWriteParam();
    TIFFImageWriteParam param = new TIFFImageWriteParam(Locale.US);

    String[] strings = param.getCompressionTypes();
    for (String string : strings) {
        System.out.println(string);
    }

    param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
    param.setCompressionType("LZW");

    File fOutputFile = new File("I:\\HPF\\UU\\" + fileName + outFileType);
    OutputStream fos = new BufferedOutputStream(new FileOutputStream(fOutputFile));
    ImageOutputStream ios = ImageIO.createImageOutputStream(fos);

    writer.setOutput(ios);
    writer.write(null, new IIOImage(bi, null, null), param);

    ios.flush();
    writer.dispose();
    ios.close();
}
I have tried changing the compression type to "CCITT T.6", as that appears to be what I want, but I get the error "Bits per sample must be 1 for T6 compression!" Any suggestions would be appreciated.
Most likely, you want something like this to convert to 1 bit before you save to TIFF with CCITT compression.
To expound a little bit - be aware that converting from other bit depths to 1 bit is non-trivial. You are doing a data reduction operation and there are dozens of domain specific solutions which vary greatly in output quality (blind threshold, adaptive threshold, dithering, local threshold, global threshold and so on). None of them are particularly good at all image types (adaptive threshold is pretty good for documents, but lousy for photographs, for example).
As plinth said, you have to do the conversion; Java won't do it magically for you...
If the PNG image is already black & white (as it seems, looking at your comment), using a threshold is probably the best solution.
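A minimal sketch of that conversion (assuming the default threshold applied when drawing onto a 1-bit image is acceptable for already black-and-white scans); it reuses the bi, param and writer variables from the question's code:

// Convert to a 1-bit image so CCITT T.6 sees 1 bit per sample
BufferedImage binary = new BufferedImage(bi.getWidth(), bi.getHeight(), BufferedImage.TYPE_BYTE_BINARY);
Graphics2D g = binary.createGraphics();
g.drawImage(bi, 0, 0, null);
g.dispose();

param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
param.setCompressionType("CCITT T.6");
writer.write(null, new IIOImage(binary, null, null), param);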
Somebody seems to have the same problem: HELP: how to compress the tiff. A solution is given on the thread (untested!).