I'm decoding a QR code that I'm getting through the Java JSANE API. When I write the image returned by the JSANE API to a file before reading it back with ImageIO.read(), decoding works, but when I use the Image object returned by the JSANE API directly, without writing it to a file first, I get the following error:
Exception in thread "main" com.google.zxing.NotFoundException
The methods I'm using are:
public String decode_QrCode(String filePath, String charset) throws FileNotFoundException, IOException, NotFoundException {
    System.out.println("ERROR HERE"); // debug marker
    BinaryBitmap binaryBitmap = new BinaryBitmap(
            new HybridBinarizer(
                    new BufferedImageLuminanceSource(
                            ImageIO.read(new FileInputStream(filePath)))));
    Result qrCodeResult = new MultiFormatReader().decode(binaryBitmap);
    return qrCodeResult.getText();
}
public String decode_QrCode_buffImg(BufferedImage bufferedImage) throws NotFoundException {
    LuminanceSource source = new BufferedImageLuminanceSource(bufferedImage);
    BinaryBitmap binaryBitmap = new BinaryBitmap(new HybridBinarizer(source));
    Result qrCodeResult = new MultiFormatReader().decode(binaryBitmap);
    return qrCodeResult.getText();
}
I'm getting the image from the JSANE API like this:
Image image = dialog.openDialog();
I'm converting the Image to BufferedImage like this:
public static BufferedImage toBufferedImage(Image image) {
    if (image instanceof BufferedImage)
        return (BufferedImage) image;

    // This code ensures that all the pixels in the image are loaded
    image = new ImageIcon(image).getImage();

    // Determine if the image has transparent pixels
    boolean hasAlpha = hasAlpha(image);

    // Create a buffered image with a format that's compatible with the screen
    BufferedImage bimage = null;
    GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment();
    try {
        // Determine the type of transparency of the new buffered image
        int transparency = Transparency.OPAQUE;
        if (hasAlpha)
            transparency = Transparency.BITMASK;

        // Create the buffered image
        GraphicsDevice gs = ge.getDefaultScreenDevice();
        GraphicsConfiguration gc = gs.getDefaultConfiguration();
        bimage = gc.createCompatibleImage(image.getWidth(null), image.getHeight(null), transparency);
    } catch (HeadlessException e) {
        // No screen
    }

    if (bimage == null) {
        // Create a buffered image using the default color model
        int type = BufferedImage.TYPE_INT_RGB;
        if (hasAlpha) {
            type = BufferedImage.TYPE_INT_ARGB;
        }
        bimage = new BufferedImage(image.getWidth(null), image.getHeight(null), type);
    }

    // Copy the image to the buffered image
    Graphics g = bimage.createGraphics();
    g.drawImage(image, 0, 0, null);
    g.dispose();
    return bimage;
}
This is the main method of the class:
public static void main(String[] args) throws IOException, NotFoundException {
    Frame frame = null;
    JSaneDialog dialog = new JSaneDialog(JSaneDialog.CP_START_SANED_LOCALHOST,
            frame, "JSaneDialog", true, null);
    Image image = dialog.openDialog();
    BufferedImage buffImg = toBufferedImage(image);
    //BufferedImage buff = resize(buffImg, BufferedImage.TYPE_INT_RGB, buffImg.getWidth()/2, buffImg.getHeight()/2, 0.5, 0.5);
    ImageIO.write(buffImg, "png", new File("/home/michaelyamsi/Bureau/Mémoires/test/test.png"));
    QrCode New_Qr = new QrCode();
    System.out.println("QR Code decompressed : " + New_Qr.decode_QrCode("/home/michaelyamsi/Bureau/Mémoires/test/test.png", "UTF-8"));
    System.out.println("QR Code decompressed : " + New_Qr.decode_QrCode_buffImg(buffImg));
}
I have even tried resizing the BufferedImage before using it, but when I do that, even decoding from the resulting file stops working.
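One thing that might be worth trying (my own suggestion, not from the original post): the PNG round-trip through ImageIO effectively normalizes the scanned image to a plain RGB raster, so redrawing the in-memory image onto a TYPE_INT_RGB BufferedImage and enabling ZXing's TRY_HARDER hint may reproduce what the file-based path does. A minimal sketch:
// Hedged sketch: normalize the scanned image to plain RGB before decoding.
// The idea is an assumption, not a confirmed cause of the NotFoundException.
import com.google.zxing.BinaryBitmap;
import com.google.zxing.DecodeHintType;
import com.google.zxing.MultiFormatReader;
import com.google.zxing.NotFoundException;
import com.google.zxing.Result;
import com.google.zxing.client.j2se.BufferedImageLuminanceSource;
import com.google.zxing.common.HybridBinarizer;

import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.util.EnumMap;
import java.util.Map;

public static String decodeNormalized(BufferedImage src) throws NotFoundException {
    // Redraw onto a TYPE_INT_RGB image, as ImageIO's PNG round-trip would
    BufferedImage rgb = new BufferedImage(src.getWidth(), src.getHeight(), BufferedImage.TYPE_INT_RGB);
    Graphics2D g = rgb.createGraphics();
    g.drawImage(src, 0, 0, null);
    g.dispose();

    // TRY_HARDER trades speed for a more thorough search of the image
    Map<DecodeHintType, Object> hints = new EnumMap<>(DecodeHintType.class);
    hints.put(DecodeHintType.TRY_HARDER, Boolean.TRUE);

    BinaryBitmap bitmap = new BinaryBitmap(
            new HybridBinarizer(new BufferedImageLuminanceSource(rgb)));
    Result result = new MultiFormatReader().decode(bitmap, hints);
    return result.getText();
}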
I have an XR app, where the display shows the (rear) camera feed. As such, capturing the screen is pretty much the same as capturing the camera feed...
As such, I take screenshots (Bitmaps) and then try to detect faces within them using Google's ML Kit.
I'm following the official guide to detect faces.
To do this, I first init my face detector:
FaceDetector detector;

public MyFaceDetector() {
    FaceDetectorOptions realTimeOpts =
            new FaceDetectorOptions.Builder()
                    .setContourMode(FaceDetectorOptions.CONTOUR_MODE_ALL)
                    .build();
    detector = FaceDetection.getClient(realTimeOpts);
}
I then have a function which passes in a bitmap. I first convert the bitmap to a byte array. I do this because InputImage.fromBitmap is very slow, and ML Kit actually tells me that I should use a byte array:
ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.JPEG, 85, byteArrayOutputStream);
byte[] byteArray = byteArrayOutputStream.toByteArray();
Next I make a mutable copy of the Bitmap (so that I can draw onto it), and set up a Canvas object, along with a color that will be used when drawing on to the Bitmap:
BitmapFactory.Options options = new BitmapFactory.Options();
options.inMutable = true;
Bitmap bmp = BitmapFactory.decodeByteArray(byteArray, 0, byteArray.length, options);
Canvas canvas = new Canvas(bmp);
Paint p = new Paint();
p.setColor(Color.RED);
After all is set up, I create an InputImage (used by the FaceDetector), using the byte array:
InputImage image = InputImage.fromByteArray(byteArray, bmp.getWidth(), bmp.getHeight(), 0, InputImage.IMAGE_FORMAT_NV21);
Note the image format... There is an InputImage.IMAGE_FORMAT_BITMAP, but using it throws an IllegalArgumentException. Anyway, I next try to process the Bitmap, detect faces, fill each detected face with the color defined earlier, and then save the Bitmap to disk:
Task<List<Face>> result = detector.process(image).addOnSuccessListener(
        new OnSuccessListener<List<Face>>() {
            @Override
            public void onSuccess(List<Face> faces) {
                Log.e("FACE DETECTION APP", "NUMBER OF FACES: " + faces.size());
                Thread processor = new Thread(new Runnable() {
                    @Override
                    public void run() {
                        for (Face face : faces) {
                            Rect destinationRect = face.getBoundingBox();
                            canvas.drawRect(destinationRect, p);
                            canvas.save();
                            Log.e("FACE DETECTION APP", "WE GOT SOME FACES!!!");
                        }
                        File file = new File(someFilePath);
                        try {
                            FileOutputStream fOut = new FileOutputStream(file);
                            bmp.compress(Bitmap.CompressFormat.JPEG, 85, fOut);
                            fOut.flush();
                            fOut.close();
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                });
                processor.start();
            }
        })
        .addOnFailureListener(
                new OnFailureListener() {
                    @Override
                    public void onFailure(@NonNull Exception e) {
                        // Task failed with an exception
                        // ...
                    }
                });
}
While this code runs (i.e. no exceptions) and the bitmap is correctly written to disk, no faces are ever detected (faces.size() is always 0). I've tried rotating the image. I've tried changing the quality of the Bitmap. I've tried with and without the thread to process any detected faces. I've tried everything I can think of.
Anyone have any ideas?
ML Kit's InputImage.fromByteArray only supports the YV12 and NV21 formats. You will need to convert the bitmap to one of these formats for the ML Kit pipeline to process it. Also, if the original image you have is a bitmap, you can probably just use InputImage.fromBitmap to construct an InputImage. It shouldn't be slower than your current approach.
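For illustration, here is a minimal sketch of such a conversion (my own sketch, not part of the original answer). It assumes an even width and height and uses the common BT.601 coefficients:
// Hedged sketch: convert an ARGB Bitmap to an NV21 byte array for ML Kit.
// Assumes bitmap width and height are even.
private static byte[] bitmapToNv21(Bitmap bitmap) {
    int width = bitmap.getWidth();
    int height = bitmap.getHeight();
    int[] argb = new int[width * height];
    bitmap.getPixels(argb, 0, width, 0, 0, width, height);

    byte[] nv21 = new byte[width * height * 3 / 2];
    int yIndex = 0;
    int uvIndex = width * height;
    for (int j = 0; j < height; j++) {
        for (int i = 0; i < width; i++) {
            int pixel = argb[j * width + i];
            int r = (pixel >> 16) & 0xff;
            int g = (pixel >> 8) & 0xff;
            int b = pixel & 0xff;
            // BT.601 RGB -> YUV
            int y = ((66 * r + 129 * g + 25 * b + 128) >> 8) + 16;
            nv21[yIndex++] = (byte) Math.max(0, Math.min(255, y));
            // NV21 stores one interleaved V/U pair per 2x2 pixel block
            if (j % 2 == 0 && i % 2 == 0) {
                int u = ((-38 * r - 74 * g + 112 * b + 128) >> 8) + 128;
                int v = ((112 * r - 94 * g - 18 * b + 128) >> 8) + 128;
                nv21[uvIndex++] = (byte) Math.max(0, Math.min(255, v));
                nv21[uvIndex++] = (byte) Math.max(0, Math.min(255, u));
            }
        }
    }
    return nv21;
}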
I was having the same issue. Use InputImage.fromMediaImage(..., ...):
override fun analyze(image: ImageProxy) {
    val mediaImage: Image = image.image ?: run {
        image.close()
        return
    }
    val inputImage = InputImage.fromMediaImage(mediaImage, image.imageInfo.rotationDegrees)
    // TODO: Your ML Code
}
Check here for more details
https://developers.google.com/ml-kit/vision/image-labeling/android
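For context, a rough sketch of wiring such an analyzer into a CameraX ImageAnalysis use case, in Java (my assumption about the surrounding setup; cameraProvider, lifecycleOwner, and context are presumed to exist):
// Hedged sketch: binding an ImageProxy-based analyzer with CameraX.
ImageAnalysis imageAnalysis =
        new ImageAnalysis.Builder()
                .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
                .build();
imageAnalysis.setAnalyzer(ContextCompat.getMainExecutor(context), imageProxy -> {
    // run the analyze(...) logic from the snippet above on each frame,
    // then release the frame so the camera can deliver the next one
    imageProxy.close();
});
cameraProvider.bindToLifecycle(lifecycleOwner, CameraSelector.DEFAULT_BACK_CAMERA, imageAnalysis);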
Let's say that I want to load a shapefile (.shp), do my stuff on it, and save the map as an image.
In order to save an image I am using:
public void saveImage(final MapContent map, final String file, final int imageWidth) {
    GTRenderer renderer = new StreamingRenderer();
    renderer.setMapContent(map);

    Rectangle imageBounds = null;
    ReferencedEnvelope mapBounds = null;
    try {
        mapBounds = map.getMaxBounds();
        double heightToWidth = mapBounds.getSpan(1) / mapBounds.getSpan(0);
        imageBounds = new Rectangle(0, 0, imageWidth, (int) Math.round(imageWidth * heightToWidth));
    } catch (Exception e) {
        // Failed to access map layers
        throw new RuntimeException(e);
    }

    BufferedImage image = new BufferedImage(imageBounds.width, imageBounds.height, BufferedImage.TYPE_INT_RGB);
    Graphics2D gr = image.createGraphics();
    gr.setPaint(Color.WHITE);
    gr.fill(imageBounds);

    try {
        renderer.paint(gr, imageBounds, mapBounds);
        File fileToSave = new File(file);
        ImageIO.write(image, "png", fileToSave);
    } catch (IOException e) {
        throw new RuntimeException(e);
    }
}
But, let's say I am doing something like this:
...
MapContent map = new MapContent();
map.setTitle("TEST");
map.addLayer(layer);
map.addLayer(shpLayer);
// zoom into the line
MapViewport viewport = new MapViewport(featureCollection.getBounds());
map.setViewport(viewport);
saveImage(map, "/tmp/img.png", 800);
1) The problem is that the zoom level isn't saved in the image file. Is there a way to save it?
2) When I do MapViewport(featureCollection.getBounds()), is there a way to extend the boundaries a little in order to have a better visual representation?
...
The reason you aren't saving the map at the current zoom level is that your saveImage method contains the line:
mapBounds = map.getMaxBounds();
which always uses the full extent of the map. You can change this to:
mapBounds = map.getViewport().getBounds();
You can expand a bounding box by something like:
ReferencedEnvelope bounds = featureCollection.getBounds();
double delta = bounds.getWidth() / 20.0; // 5% on each side
bounds.expandBy(delta);
MapViewport viewport = new MapViewport(bounds);
map.setViewport(viewport);
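Putting both changes together, the calling code might look like this (a sketch on my part; setMatchingAspectRatio is an extra I would add to avoid distortion, not something from the question):
ReferencedEnvelope bounds = featureCollection.getBounds();
bounds.expandBy(bounds.getWidth() / 20.0); // 5% padding on each side
MapViewport viewport = new MapViewport(bounds);
viewport.setMatchingAspectRatio(true); // keep the envelope proportional to the output area
map.setViewport(viewport);
saveImage(map, "/tmp/img.png", 800);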
A quicker (and easier) way to save a map from the GUI is to use a method like this which just saves exactly what is on the screen:
public void drawMapToImage(File outputFile, String outputType, JMapPane mapPane) {
    ImageOutputStream outputImageFile = null;
    FileOutputStream fileOutputStream = null;
    try {
        fileOutputStream = new FileOutputStream(outputFile);
        outputImageFile = ImageIO.createImageOutputStream(fileOutputStream);
        RenderedImage bufferedImage = mapPane.getBaseImage();
        ImageIO.write(bufferedImage, outputType, outputImageFile);
    } catch (IOException ex) {
        ex.printStackTrace();
    } finally {
        try {
            if (outputImageFile != null) {
                outputImageFile.flush();
                outputImageFile.close();
                fileOutputStream.flush();
                fileOutputStream.close();
            }
        } catch (IOException e) {
            // don't care now
        }
    }
}
I'm having some difficulty placing the contents of a Canvas into a Bitmap. When I attempt to do this, the file gets written with a size of around 5.80 KB, but it appears to be completely empty (every pixel is '#000').
The canvas draws a series of interconnected lines formed by handwriting. Below is my onDraw for the View. (I'm aware that it's blocking the UI thread, bad practice, etc.; I just need to get it working.)
Thank you.
@Override
protected void onDraw(Canvas canvas) {
    super.onDraw(canvas);
    if (IsTouchDown) {
        // Calculate the points
        Path currentPath = new Path();
        boolean IsFirst = true;
        for (Point point : currentPoints) {
            if (IsFirst) {
                IsFirst = false;
                currentPath.moveTo(point.x, point.y);
            } else {
                currentPath.lineTo(point.x, point.y);
            }
        }
        // Draw the path of points
        canvas.drawPath(currentPath, pen);

        // Attempt to make the bitmap and write it to a file.
        Bitmap toDisk = null;
        try {
            // TODO: Get the size of the canvas, replace the 640, 480
            toDisk = Bitmap.createBitmap(640, 480, Bitmap.Config.ARGB_8888);
            canvas.setBitmap(toDisk);
            toDisk.compress(Bitmap.CompressFormat.JPEG, 100, new FileOutputStream(new File("arun.jpg")));
        } catch (Exception ex) {
        }
    } else {
        // Clear the points
        currentPoints.clear();
    }
}
I had a similar problem and found a solution. Here is the full code for the task (don't forget the android.permission.WRITE_EXTERNAL_STORAGE permission in the manifest):
public Bitmap saveSignature() {
    Bitmap bitmap = Bitmap.createBitmap(this.getWidth(), this.getHeight(), Bitmap.Config.ARGB_8888);
    Canvas canvas = new Canvas(bitmap);
    this.draw(canvas);

    File file = new File(Environment.getExternalStorageDirectory() + "/sign.png");
    try {
        bitmap.compress(Bitmap.CompressFormat.PNG, 100, new FileOutputStream(file));
    } catch (Exception e) {
        e.printStackTrace();
    }
    return bitmap;
}
First create a blank bitmap, then create a canvas with that blank bitmap:
Bitmap.Config conf = Bitmap.Config.ARGB_8888;
Bitmap bitmap_object = Bitmap.createBitmap(width, height, conf);
Canvas canvas = new Canvas(bitmap_object);
Now draw your lines on the canvas:
Path currentPath = new Path();
boolean IsFirst = true;
for (Point point : currentPoints) {
    if (IsFirst) {
        IsFirst = false;
        currentPath.moveTo(point.x, point.y);
    } else {
        currentPath.lineTo(point.x, point.y);
    }
}
// Draw the path of points
canvas.drawPath(currentPath, pen);
Now access your bitmap via bitmap_object
You'll have to draw after setting the bitmap to the canvas. Also use a new Canvas object like this:
Canvas canvas = new Canvas(toDisk);
canvas.drawPath(currentPath, pen);
toDisk.compress(Bitmap.CompressFormat.PNG, 100, new FileOutputStream(new File("arun.png")));
I recommend using PNG for saving images of paths.
You must call canvas.setBitmap(bitmap) before drawing anything on the Canvas. After calling it, draw on the Canvas, then save the Bitmap you passed in.
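In other words, reusing the names from the question (the output location below is illustrative, not from the original code):
Bitmap toDisk = Bitmap.createBitmap(640, 480, Bitmap.Config.ARGB_8888);
canvas.setBitmap(toDisk);          // 1. attach the bitmap first
canvas.drawPath(currentPath, pen); // 2. then draw onto it
try (FileOutputStream out = new FileOutputStream(new File(context.getFilesDir(), "arun.png"))) {
    toDisk.compress(Bitmap.CompressFormat.PNG, 100, out); // 3. then save
} catch (IOException e) {
    e.printStackTrace();
}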
Maybe
canvas.setBitmap(toDisk);
is not in the correct place. Try this:
toDisk = Bitmap.createBitmap(640,480,Bitmap.Config.ARGB_8888);
toDisk.compress(Bitmap.CompressFormat.JPEG, 100, new FileOutputStream(new File("arun.jpg")));
canvas.setBitmap(toDisk);
I have a BufferedImage and want to rescale it before saving it as a JPG/PNG.
I have the following code:
private BufferedImage rescaleTo(BufferedImage img, int minWidth, int minHeight) {
    BufferedImage buf = toBufferedImage(img.getScaledInstance(minWidth, minHeight, Image.SCALE_DEFAULT));
    BufferedImage ret = new BufferedImage(buf.getWidth(null), buf.getHeight(null), BufferedImage.TYPE_INT_ARGB);
    return ret;
}

public BufferedImage toBufferedImage(Image img) {
    BufferedImage ret = new BufferedImage(img.getWidth(null), img.getHeight(null), BufferedImage.TYPE_INT_RGB);
    Graphics2D g2 = ret.createGraphics();
    g2.drawImage(img, 0, 0, null);
    return ret;
}
public String saveTo(BufferedImage image, String URI) throws UtilityException {
    try {
        if (image == null)
            System.out.println("dododod");
        ImageIO.write(image, _type, new File(URI));
    } catch (IOException e) {
        throw new UtilityException(e.getLocalizedMessage());
    }
    return URI;
}
But as a result I just get a black picture. It must have to do with the rescaling, as when I skip it I can save the expected picture.
As a test, set _type = "png" and also use the file extension .png when you make the call to ImageIO.write(image, _type, new File(URI)). I had issues like the ones you describe, and once I started writing type PNG everything worked fine. Unfortunately, I never went back to debug why I could not write type JPG, GIF, etc.
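For what it's worth, my own reading of the black-picture symptom (not part of the answer above): rescaleTo creates ret but never draws anything into it, so every pixel stays black. A minimal sketch that actually draws the scaled instance:
private BufferedImage rescaleTo(BufferedImage img, int minWidth, int minHeight) {
    Image scaled = img.getScaledInstance(minWidth, minHeight, Image.SCALE_DEFAULT);
    BufferedImage ret = new BufferedImage(minWidth, minHeight, BufferedImage.TYPE_INT_ARGB);
    Graphics2D g2 = ret.createGraphics();
    g2.drawImage(scaled, 0, 0, null); // copy the scaled pixels into the target
    g2.dispose();
    return ret;
}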
I'd like to display an SVG image in JavaFX 2.0, but I don't find such a thing in the API. I guess it's because it's still in beta.
Until the final release, how can I load an SVG? Is there already a library which can handle that, or do I need to parse the file myself and then create the corresponding shapes?
Thanks
Based on the answer to this question, I found a working solution.
1. Include references to the Batik SVG Toolkit jars
2. Implement your own Transcoder
(based on this answer by Devon_C_Miller)
class MyTranscoder extends ImageTranscoder {

    private BufferedImage image = null;

    @Override
    public BufferedImage createImage(int w, int h) {
        image = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
        return image;
    }

    @Override
    public void writeImage(BufferedImage img, TranscoderOutput out) {
    }

    public BufferedImage getImage() {
        return image;
    }
}
3. Get a BufferedImage from your svg
(based on a hint in this answer by John Doppelmann)
String uri = "path_to_svg/some.svg";
MyTranscoder transcoder = new MyTranscoder();
TranscodingHints hints = new TranscodingHints();
hints.put(ImageTranscoder.KEY_WIDTH, 20f); // your image width
hints.put(ImageTranscoder.KEY_HEIGHT, 20f); // your image height
hints.put(ImageTranscoder.KEY_DOM_IMPLEMENTATION, SVGDOMImplementation.getDOMImplementation());
hints.put(ImageTranscoder.KEY_DOCUMENT_ELEMENT_NAMESPACE_URI, SVGConstants.SVG_NAMESPACE_URI);
hints.put(ImageTranscoder.KEY_DOCUMENT_ELEMENT, SVGConstants.SVG_SVG_TAG);
hints.put(ImageTranscoder.KEY_XML_PARSER_VALIDATING, false);
transcoder.setTranscodingHints(hints);
TranscoderInput input = new TranscoderInput(uri); // was url.toExternalForm(), but only uri is defined above
transcoder.transcode(input, null);
BufferedImage bufferedImage = transcoder.getImage();
4. Create an InputStream from BufferedImage
ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
// JPEGCodec/JPEGImageEncoder are Sun-internal classes that were removed from the JDK;
// ImageIO does the same job, and PNG preserves the alpha channel
ImageIO.write(bufferedImage, "png", outputStream);
byte[] bytes = outputStream.toByteArray();
InputStream inputStream = new ByteArrayInputStream(bytes);
5. Add the image to your ImageView
//javafx.scene.image.Image
Image image = new Image(inputStream);
//javafx.scene.image.ImageView
ImageView imageView = new ImageView();
imageView.setImage(image);
this.getChildren().add(imageView);
Hope this will help!
I used @pmoule's answer above, but had better luck replacing his steps 3-5 with this:
MyTranscoder imageTranscoder = new MyTranscoder();
imageTranscoder.addTranscodingHint(PNGTranscoder.KEY_WIDTH, (float) width);
imageTranscoder.addTranscodingHint(PNGTranscoder.KEY_HEIGHT, (float) height);
TranscoderInput input = new TranscoderInput(new FileReader(file));
imageTranscoder.transcode(input, null);
BufferedImage bimage = imageTranscoder.getImage();
WritableImage wimage = SwingFXUtils.toFXImage(bimage, null);
ImageView imageView = new ImageView(wimage);
<panel-vbox-whatever>.getChildren().clear();
<panel-vbox-whatever>.getChildren().add(imageView);
Simpler, and it just worked better for me.
Try http://forums.oracle.com/forums/thread.jspa?threadID=2264379&tstart=0
You do not need Batik; you can try WebView like this:
WebView view = new WebView();
view.setMinSize(500, 400);
view.setPrefSize(500, 400);
final WebEngine eng = view.getEngine();
eng.load("http://127.0.0.1/demo1.svg");
Convert the SVG to FXML and then it becomes trivial. The conversion can be done with XSLT. You can get the stylesheet from here:
http://jayskills.com/blog/2012/06/03/svg-to-fxml-using-netbeans/
Then after conversion to FXML you can load it as a Node with the built-in FXMLLoader:
FXMLLoader.setDefaultClassLoader(this.getClass().getClassLoader());
URL location = KayakMain.class.getResource("path/to/my/resource.fxml");
Parent root = FXMLLoader.load(location);