I have an application with ZXing integrated. I've been looking into storing a photo when a QR code is scanned. Sean Owen recommended the following:
"The app is getting a continuous stream of frames from the camera to analyze. You can store off any of them by intercepting them in the preview callback."
As far as I am aware, the only instance of a preview callback is within the CameraManager.java class (https://code.google.com/p/zxing/source/browse/trunk/android/src/com/google/zxing/client/android/camera/CameraManager.java).
In particular:
public synchronized void requestPreviewFrame(Handler handler, int message) {
    Camera theCamera = camera;
    if (theCamera != null && previewing) {
        previewCallback.setHandler(handler, message);
        theCamera.setOneShotPreviewCallback(previewCallback);
    }
}
Since this runs every frame, I don't have a way of saving any particular frame (preferably as byte data). I would have assumed there to be a point where something is passed back to the CaptureActivity.java class (link given at bottom), but I haven't found anything myself.
Anyone who has used ZXing will know that after a scan, a ghostly image of the scan data is shown on screen. If it is possible to hijack this part of the code and convert and/or save that data as bytes, that may also be useful.
Any help or other ideas would be much appreciated. Requests for any further information will be responded to quickly. Thank you.
Full code available within this folder: https://code.google.com/p/zxing/source/browse/trunk#trunk%2Fandroid%2Fsrc%2Fcom%2Fgoogle%2Fzxing%2Fclient%2Fandroid
Update:
So far, the following sections of code appear to be possible places to save byte data; both are within the DecodeHandler.java class.
private void decode(byte[] data, int width, int height) {
    long start = System.currentTimeMillis();
    Result rawResult = null;
    PlanarYUVLuminanceSource source = activity.getCameraManager().buildLuminanceSource(data, width, height);
    if (source != null) {
        BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));
        // here?
        try {
            rawResult = multiFormatReader.decodeWithState(bitmap);
        } catch (ReaderException re) {
            // continue
        } finally {
            multiFormatReader.reset();
        }
    }
    Handler handler = activity.getHandler();
    if (rawResult != null) {
        // Don't log the barcode contents for security.
        long end = System.currentTimeMillis();
        Log.d(TAG, "Found barcode in " + (end - start) + " ms");
        if (handler != null) {
            Message message = Message.obtain(handler, R.id.decode_succeeded, rawResult);
            Bundle bundle = new Bundle();
            Bitmap grayscaleBitmap = toBitmap(source, source.renderCroppedGreyscaleBitmap());
            // I believe this bitmap is the one shown on screen after a scan has been performed
            bundle.putParcelable(DecodeThread.BARCODE_BITMAP, grayscaleBitmap);
            message.setData(bundle);
            message.sendToTarget();
        }
    } else {
        if (handler != null) {
            Message message = Message.obtain(handler, R.id.decode_failed);
            message.sendToTarget();
        }
    }
}
private static Bitmap toBitmap(LuminanceSource source, int[] pixels) {
    int width = source.getWidth();
    int height = source.getHeight();
    Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    bitmap.setPixels(pixels, 0, width, 0, 0, width, height);
    // Saving the bitmap at this point, or slightly sooner (before grayscaling), could work.
    return bitmap;
}
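For reference, a minimal sketch of what saving at that flagged point could look like; the helper name and output file are my own assumptions, not part of the ZXing source:

private static void saveBitmapToFile(Bitmap bitmap, File file) {
    // Compress to PNG and write it out; try-with-resources closes the stream.
    try (FileOutputStream out = new FileOutputStream(file)) {
        bitmap.compress(Bitmap.CompressFormat.PNG, 100, out);
    } catch (IOException e) {
        Log.w(TAG, "Could not save bitmap", e);
    }
}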
Update: Requested code found within PreviewCallback.java
public void onPreviewFrame(byte[] data, Camera camera) {
    Point cameraResolution = configManager.getCameraResolution();
    Handler thePreviewHandler = previewHandler;
    if (cameraResolution != null && thePreviewHandler != null) {
        Message message = thePreviewHandler.obtainMessage(previewMessage, cameraResolution.x,
                cameraResolution.y, data);
        message.sendToTarget();
        previewHandler = null;
    } else {
        Log.d(TAG, "Got preview callback, but no handler or resolution available");
    }
}
The data from the preview callback is in NV21 format, so if you want to save it you could use code like this:
YuvImage im = new YuvImage(byteArray, ImageFormat.NV21, width, height, null);
Rect r = new Rect(0, 0, width, height);
ByteArrayOutputStream baos = new ByteArrayOutputStream();
im.compressToJpeg(r, 50, baos);
try {
    FileOutputStream output = new FileOutputStream("/sdcard/test_jpg.jpg");
    output.write(baos.toByteArray());
    output.flush();
    output.close();
} catch (FileNotFoundException e) {
    // handle a bad or missing path here
} catch (IOException e) {
    // handle the failed write here
}
The point to save is when ZXing successfully decodes the byte[] and returns the content string.
Related
I have an XR app, where the display shows the camera (rear) feed. As such, capturing the screen is pretty much the same as capturing the camera feed...
So I take screenshots (Bitmaps) and then try to detect faces within them using Google's ML Kit.
I'm following the official guide to detect faces.
To do this, I first init my face detector:
FaceDetector detector;

public MyFaceDetector() {
    FaceDetectorOptions realTimeOpts =
            new FaceDetectorOptions.Builder()
                    .setContourMode(FaceDetectorOptions.CONTOUR_MODE_ALL)
                    .build();
    detector = FaceDetection.getClient(realTimeOpts);
}
I then have a function which passes in a bitmap. I first convert the bitmap to a byte array. I do this because InputImage.fromBitmap is very slow, and ML Kit actually tells me that I should use a byte array:
ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.JPEG, 85, byteArrayOutputStream);
byte[] byteArray = byteArrayOutputStream.toByteArray();
Next I make a mutable copy of the Bitmap (so that I can draw onto it), and set up a Canvas object, along with a color that will be used when drawing on to the Bitmap:
BitmapFactory.Options options = new BitmapFactory.Options();
options.inMutable = true;
Bitmap bmp = BitmapFactory.decodeByteArray(byteArray, 0, byteArray.length, options);
Canvas canvas = new Canvas(bmp);
Paint p = new Paint();
p.setColor(Color.RED);
After all is set up, I create an InputImage (used by the FaceDetector), using the byte array:
InputImage image = InputImage.fromByteArray(byteArray, bmp.getWidth(), bmp.getHeight(),0, InputImage.IMAGE_FORMAT_NV21);
Note the image format... There is an InputImage.IMAGE_FORMAT_BITMAP, but using this throws an IllegalArgumentException. Anyway, I next try to process the Bitmap, detect faces, fill each detected face with the color defined earlier, and then save the Bitmap to disk:
Task<List<Face>> result = detector.process(image).addOnSuccessListener(
        new OnSuccessListener<List<Face>>() {
            @Override
            public void onSuccess(List<Face> faces) {
                Log.e("FACE DETECTION APP", "NUMBER OF FACES: " + faces.size());
                Thread processor = new Thread(new Runnable() {
                    @Override
                    public void run() {
                        for (Face face : faces) {
                            Rect destinationRect = face.getBoundingBox();
                            canvas.drawRect(destinationRect, p);
                            canvas.save();
                            Log.e("FACE DETECTION APP", "WE GOT SOME FACES!!!");
                        }
                        File file = new File(someFilePath);
                        try {
                            FileOutputStream fOut = new FileOutputStream(file);
                            bmp.compress(Bitmap.CompressFormat.JPEG, 85, fOut);
                            fOut.flush();
                            fOut.close();
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                });
                processor.start();
            }
        })
        .addOnFailureListener(
                new OnFailureListener() {
                    @Override
                    public void onFailure(@NonNull Exception e) {
                        // Task failed with an exception
                        // ...
                    }
                });
}
While this code runs (i.e. no exceptions) and the bitmap is correctly written to disk, no faces are ever detected (faces.size() is always 0). I've tried rotating the image. I've tried changing the quality of the Bitmap. I've tried with and without the thread to process any detected faces. I've tried everything I can think of.
Anyone have any ideas?
ML Kit's InputImage.fromByteArray only supports the YV12 and NV21 formats. You will need to convert the bitmap to one of these formats for the ML Kit pipeline to process it. Also, if the original image you have is a bitmap, you can probably just use InputImage.fromBitmap to construct an InputImage. It shouldn't be slower than your current approach.
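If you do stay with the byte-array route, here is a sketch of converting an ARGB_8888 Bitmap into an NV21 byte array before calling InputImage.fromByteArray. This is my own conversion using the common BT.601 coefficients, not something from the ML Kit docs, and it assumes even width and height:

private static byte[] bitmapToNv21(Bitmap bitmap) {
    int width = bitmap.getWidth();
    int height = bitmap.getHeight();
    int[] argb = new int[width * height];
    bitmap.getPixels(argb, 0, width, 0, 0, width, height);
    byte[] nv21 = new byte[width * height * 3 / 2];
    int yIndex = 0;
    int uvIndex = width * height;
    for (int j = 0; j < height; j++) {
        for (int i = 0; i < width; i++) {
            int c = argb[j * width + i];
            int r = (c >> 16) & 0xFF;
            int g = (c >> 8) & 0xFF;
            int b = c & 0xFF;
            // Luma for every pixel.
            int y = ((66 * r + 129 * g + 25 * b + 128) >> 8) + 16;
            nv21[yIndex++] = (byte) Math.max(0, Math.min(255, y));
            // NV21 interleaves one V/U pair per 2x2 pixel block.
            if (j % 2 == 0 && i % 2 == 0) {
                int v = ((112 * r - 94 * g - 18 * b + 128) >> 8) + 128;
                int u = ((-38 * r - 74 * g + 112 * b + 128) >> 8) + 128;
                nv21[uvIndex++] = (byte) Math.max(0, Math.min(255, v));
                nv21[uvIndex++] = (byte) Math.max(0, Math.min(255, u));
            }
        }
    }
    return nv21;
}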
I was having the same issue. Use InputImage.fromMediaImage(..., ...):
override fun analyze(image: ImageProxy) {
    val mediaImage: Image = image.image.takeIf { it != null } ?: run {
        image.close()
        return
    }
    val inputImage = InputImage.fromMediaImage(mediaImage, image.imageInfo.rotationDegrees)
    // TODO: Your ML Code
}
Check here for more details
https://developers.google.com/ml-kit/vision/image-labeling/android
I am trying to retrieve a frame from a video that is playing back using LibVLC in Android. For reference, this is how I am starting LibVLC; ffmpegSv is a TextureView.
public void startMediaPlayer() {
    ArrayList<String> options = new ArrayList<>();
    options.add("--no-drop-late-frames");
    options.add("--no-skip-frames");
    options.add("-vvv");
    options.add("--no-osd");
    options.add("--rtsp-tcp");
    options.add("--no-snapshot-preview");
    options.add("--no-video-title");
    options.add("--no-spu");
    videoVlc = new LibVLC(getActivity(), options);
    TextureView surfaceView = (TextureView) getActivity().findViewById(R.id.streamView);
    newVideoMediaPlayer = new org.videolan.libvlc.MediaPlayer(videoVlc);
    final IVLCVout vOut = newVideoMediaPlayer.getVLCVout();
    vOut.setVideoSurface(ffmpegSv.getSurfaceTexture());
    vOut.setWindowSize(ffmpegSv.getWidth(), ffmpegSv.getHeight());
    vOut.attachViews();
    Media videoMedia = new Media(videoVlc, Uri.parse("rtsp://1.1.1.1/abc.mov"));
    newVideoMediaPlayer.setMedia(videoMedia);
    newVideoMediaPlayer.play();
}
And this is how I am attempting to get the bitmap from it. I should note that this method worked correctly when using the Android MediaPlayer.
@Override
public void onSurfaceTextureUpdated(SurfaceTexture surface) {
    if (mStream != null) {
        if (idx++ % 10 == 0) {
            (new Runnable() {
                @Override
                public void run() {
                    FileOutputStream out = null;
                    Bitmap b = ffmpegSv.getBitmap(ffmpegSv.getWidth(), ffmpegSv.getHeight());
                    Bitmap bm = Bitmap.createScaledBitmap(b2, 640, 480, true);
                    ByteArrayOutputStream bos = new ByteArrayOutputStream();
                    bm.compress(Bitmap.CompressFormat.JPEG, 50, bos);
                    byte[] arr = bos.toByteArray();
                    mStream.onJpegFrame(arr, 0L);
                    b.recycle();
                    bm.recycle();
                }
            }).run();
            idx = 0;
        }
    }
}
However, the image that is being produced has a sliver of the original image from the TextureView around the edge almost like a border, but the rest of the image is obscured by a black box.
The only thing I can think of is that VLC uses some sort of overlay for subtitles etc. that loses its transparency when pulled out with getBitmap(). However, I am not 100% sure this is the case. Is there a way to check whether this is so, or to disable any overlays that VLC could be adding?
EDIT: I have added a sample image to demonstrate the problem:
You can just make out the bottom, right and top of the background image and a clear rectangle over the top of it.
Bitmap b = ffmpegSv.getBitmap(ffmpegSv.getWidth(), ffmpegSv.getHeight());
Bitmap bm = Bitmap.createScaledBitmap(b2, 640, 480, true);
Aren't you scaling something else here?
What is b2?
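If b2 was simply meant to be b, the corrected lines would presumably read:

Bitmap b = ffmpegSv.getBitmap(ffmpegSv.getWidth(), ffmpegSv.getHeight());
Bitmap bm = Bitmap.createScaledBitmap(b, 640, 480, true);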
Let's say that I want to load a .shp file, do my stuff on it, and save the map as an image.
In order to save an image I am using:
public void saveImage(final MapContent map, final String file, final int imageWidth) {
    GTRenderer renderer = new StreamingRenderer();
    renderer.setMapContent(map);
    Rectangle imageBounds = null;
    ReferencedEnvelope mapBounds = null;
    try {
        mapBounds = map.getMaxBounds();
        double heightToWidth = mapBounds.getSpan(1) / mapBounds.getSpan(0);
        imageBounds = new Rectangle(0, 0, imageWidth, (int) Math.round(imageWidth * heightToWidth));
    } catch (Exception e) {
        // Failed to access map layers
        throw new RuntimeException(e);
    }
    BufferedImage image = new BufferedImage(imageBounds.width, imageBounds.height, BufferedImage.TYPE_INT_RGB);
    Graphics2D gr = image.createGraphics();
    gr.setPaint(Color.WHITE);
    gr.fill(imageBounds);
    try {
        renderer.paint(gr, imageBounds, mapBounds);
        File fileToSave = new File(file);
        ImageIO.write(image, "png", fileToSave);
    } catch (IOException e) {
        throw new RuntimeException(e);
    }
}
But, let's say I am doing something like this:
...
MapContent map = new MapContent();
map.setTitle("TEST");
map.addLayer(layer);
map.addLayer(shpLayer);
// zoom into the line
MapViewport viewport = new MapViewport(featureCollection.getBounds());
map.setViewport(viewport);
saveImage(map, "/tmp/img.png", 800);
1) The problem is that the zoom level isn't saved in the image file. Is there a way to save it?
2) When I am doing MapViewport(featureCollection.getBounds()), is there a way to extend the boundaries a little in order to get a better visual representation?
...
The reason that you aren't saving the map at the current zoom level is that in your saveImage method you have the line:
mapBounds = map.getMaxBounds();
which always uses the full extent of the map. You can change this to:
mapBounds = map.getViewport().getBounds();
You can expand a bounding box by something like:
ReferencedEnvelope bounds = featureCollection.getBounds();
double delta = bounds.getWidth()/20.0; //5% on each side
bounds.expandBy(delta );
MapViewport viewport = new MapViewport(bounds);
map.setViewport(viewport );
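Putting both fixes together against the question's own snippet (a sketch; it relies on saveImage reading map.getViewport().getBounds() as changed above):

ReferencedEnvelope bounds = featureCollection.getBounds();
bounds.expandBy(bounds.getWidth() / 20.0); // grow by 5% on each side
map.setViewport(new MapViewport(bounds));
saveImage(map, "/tmp/img.png", 800); // now renders the zoomed viewport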
A quicker (and easier) way to save a map from the GUI is to use a method like this which just saves exactly what is on the screen:
public void drawMapToImage(File outputFile, String outputType, JMapPane mapPane) {
    ImageOutputStream outputImageFile = null;
    FileOutputStream fileOutputStream = null;
    try {
        fileOutputStream = new FileOutputStream(outputFile);
        outputImageFile = ImageIO.createImageOutputStream(fileOutputStream);
        RenderedImage bufferedImage = mapPane.getBaseImage();
        ImageIO.write(bufferedImage, outputType, outputImageFile);
    } catch (IOException ex) {
        ex.printStackTrace();
    } finally {
        try {
            if (outputImageFile != null) {
                outputImageFile.flush();
                outputImageFile.close();
                fileOutputStream.flush();
                fileOutputStream.close();
            }
        } catch (IOException e) {
            // don't care now
        }
    }
}
For the last few weeks I have been attempting to alter ZXing to take a photo immediately upon scan. Thanks to help I have received, I am at a point where I can consistently save an image from the onPreviewFrame method within PreviewCallback.java.
The code I use within the onPreviewFrame method follows, along with a short rundown of how my app works.
public void onPreviewFrame(byte[] data, Camera camera) {
    Point cameraResolution = configManager.getCameraResolution();
    Handler thePreviewHandler = previewHandler;
    android.hardware.Camera.Parameters parameters = camera.getParameters();
    android.hardware.Camera.Size size = parameters.getPreviewSize();
    int height = size.height;
    int width = size.width;
    System.out.println("HEIGHT IS" + height);
    System.out.println("WIDTH IS" + width);
    if (cameraResolution != null && thePreviewHandler != null) {
        YuvImage im = new YuvImage(data, ImageFormat.NV21, width, height, null);
        Rect r = new Rect(0, 0, width, height);
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        im.compressToJpeg(r, 50, baos);
        try {
            FileOutputStream output = new FileOutputStream("/sdcard/test_jpg.jpg");
            output.write(baos.toByteArray());
            output.flush();
            output.close();
            System.out.println("Attempting to save file");
            System.out.println(data);
        } catch (FileNotFoundException e) {
            System.out.println("Saving to file failed");
        } catch (IOException e) {
            System.out.println("Saving to file failed");
        }
        Message message = thePreviewHandler.obtainMessage(previewMessage, cameraResolution.x,
                cameraResolution.y, data);
        message.sendToTarget();
        previewHandler = null;
    } else {
        Log.d(TAG, "Got preview callback, but no handler or resolution available");
    }
}
My application centers around its own GUI and functionality, but can engage ZXing via intent (ZXing is built into the app's build path; yes, this is bad, as it can interfere if ZXing is already installed). Once ZXing has scanned a QR code, the information encoded on it is returned to my app and stored, and then after a short delay ZXing is automatically re-initiated.
My current code saves an image every frame while ZXing is running; the functionality I would like is for only the frame at the moment of a successful scan to be saved. ZXing does stop saving images in the short window where my app takes over, but it is re-initialized so quickly that I may not have time to manipulate the data. A possible workaround is to quickly rename the saved file so that ZXing doesn't start overwriting it and manipulation can be performed in the background. Nevertheless, saving an image every frame is a waste of resources and less than preferable.
How do I only save an image upon scan?
Thanks in advance.
Updated to show found instances of multiFormatReader as requested:
private final CaptureActivity activity;
private final MultiFormatReader multiFormatReader;
private boolean running = true;

DecodeHandler(CaptureActivity activity, Map<DecodeHintType,Object> hints) {
    multiFormatReader = new MultiFormatReader();
    multiFormatReader.setHints(hints);
    this.activity = activity;
}

@Override
public void handleMessage(Message message) {
    if (!running) {
        return;
    }
    if (message.what == R.id.decode) {
        decode((byte[]) message.obj, message.arg1, message.arg2);
    } else if (message.what == R.id.quit) {
        running = false;
        Looper.myLooper().quit();
    }
}
private void decode(byte[] data, int width, int height) {
    long start = System.currentTimeMillis();
    Result rawResult = null;
    PlanarYUVLuminanceSource source = activity.getCameraManager().buildLuminanceSource(data, width, height);
    if (source != null) {
        BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));
        // here?
        try {
            rawResult = multiFormatReader.decodeWithState(bitmap);
        } catch (ReaderException re) {
            // continue
        } finally {
            multiFormatReader.reset();
        }
    }
ZXing analyzes every received frame until it finds the correct information. The point to save the image is when ZXing returns a non-null result string. In addition, you can save each file under a different name, such as a timestamp plus ".jpg", so that the previous file isn't overwritten.
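A sketch of that suggestion applied inside DecodeHandler.decode(), right after a successful decode (data, width and height are the method's own parameters; the save location and file naming are my assumptions):

if (rawResult != null) {
    // This frame decoded successfully, so save just this one.
    YuvImage im = new YuvImage(data, ImageFormat.NV21, width, height, null);
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    im.compressToJpeg(new Rect(0, 0, width, height), 50, baos);
    // A timestamped name keeps a re-initiated scan from overwriting the last file.
    File file = new File(Environment.getExternalStorageDirectory(),
            "scan_" + System.currentTimeMillis() + ".jpg");
    try (FileOutputStream output = new FileOutputStream(file)) {
        output.write(baos.toByteArray());
    } catch (IOException e) {
        Log.w(TAG, "Saving scan frame failed", e);
    }
    // ... then hand rawResult to the activity handler as before.
}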
I'm having some difficulty with regards to placing the contents of a Canvas into a Bitmap. When I attempt to do this, the file gets written with a file size of around 5.80KB but it appears to be completely empty (every pixel is '#000').
The canvas draws a series of interconnected lines that are formed by handwriting. Below is my onDraw for the View. (I'm aware that it blocks the UI thread, is bad practice, etc.; however, I just need to get it working.)
Thank you.
@Override
protected void onDraw(Canvas canvas) {
    // TODO Auto-generated method stub
    super.onDraw(canvas);
    if (IsTouchDown) {
        // Calculate the points
        Path currentPath = new Path();
        boolean IsFirst = true;
        for (Point point : currentPoints) {
            if (IsFirst) {
                IsFirst = false;
                currentPath.moveTo(point.x, point.y);
            } else {
                currentPath.lineTo(point.x, point.y);
            }
        }
        // Draw the path of points
        canvas.drawPath(currentPath, pen);
        // Attempt to make the bitmap and write it to a file.
        Bitmap toDisk = null;
        try {
            // TODO: Get the size of the canvas, replace the 640, 480
            toDisk = Bitmap.createBitmap(640, 480, Bitmap.Config.ARGB_8888);
            canvas.setBitmap(toDisk);
            toDisk.compress(Bitmap.CompressFormat.JPEG, 100, new FileOutputStream(new File("arun.jpg")));
        } catch (Exception ex) {
        }
    } else {
        // Clear the points
        currentPoints.clear();
    }
}
I had a similar problem and found a solution. Here is the full code for the task (don't forget the android.permission.WRITE_EXTERNAL_STORAGE permission in your manifest):
public Bitmap saveSignature() {
    Bitmap bitmap = Bitmap.createBitmap(this.getWidth(), this.getHeight(), Bitmap.Config.ARGB_8888);
    Canvas canvas = new Canvas(bitmap);
    this.draw(canvas);
    File file = new File(Environment.getExternalStorageDirectory() + "/sign.png");
    try {
        bitmap.compress(Bitmap.CompressFormat.PNG, 100, new FileOutputStream(file));
    } catch (Exception e) {
        e.printStackTrace();
    }
    return bitmap;
}
First create a blank bitmap, then create a canvas with that blank bitmap:
Bitmap.Config conf = Bitmap.Config.ARGB_8888;
Bitmap bitmap_object = Bitmap.createBitmap(width, height, conf);
Canvas canvas = new Canvas(bitmap_object);
Now draw your lines on the canvas:
Path currentPath = new Path();
boolean IsFirst = true;
for (Point point : currentPoints) {
    if (IsFirst) {
        IsFirst = false;
        currentPath.moveTo(point.x, point.y);
    } else {
        currentPath.lineTo(point.x, point.y);
    }
}
// Draw the path of points
canvas.drawPath(currentPath, pen);
Now access your bitmap via bitmap_object.
You'll have to draw after setting the bitmap to the canvas. Also use a new Canvas object like this:
Canvas canvas = new Canvas(toDisk);
canvas.drawPath(currentPath, pen);
toDisk.compress(Bitmap.CompressFormat.PNG, 100, new FileOutputStream(new File("arun.png")));
I recommend using PNG for saving images of paths.
You must call canvas.setBitmap(bitmap) before drawing anything on the Canvas. After calling canvas.setBitmap(bitmap), draw on the Canvas and then save the Bitmap you passed to it.
Maybe
canvas.setBitmap(toDisk);
is not in the correct place.
Try this:
toDisk = Bitmap.createBitmap(640,480,Bitmap.Config.ARGB_8888);
toDisk.compress(Bitmap.CompressFormat.JPEG, 100, new FileOutputStream(new File("arun.jpg")));
canvas.setBitmap(toDisk);