Google Maps snapshot getting removed from Canvas - Java

I'm trying to create a screenshot of my Activity with a Google map in it. My code stopped working after updating my Google APIs from 7.xx to 8.1.0, but I can't find any relevant changes in their changelogs.
This is what happens: I take a screenshot of my activity with view.getDrawingCache(). I then draw the snapshot of the map at the correct position on my canvas. This all works fine. But when I draw my "backBitmap" over the snapshot, the snapshot is removed from the Canvas again.
Why is my map snapshot removed from the canvas as soon as I call this line?
canvas.drawBitmap(backBitmap, 0, 0, null);
Entire function:
GoogleMap.SnapshotReadyCallback callback = new GoogleMap.SnapshotReadyCallback() {
    @Override
    public void onSnapshotReady(Bitmap snapshot) {
        try {
            view.setDrawingCacheEnabled(true);
            Bitmap backBitmap = view.getDrawingCache();
            Bitmap bmOverlay = Bitmap.createBitmap(backBitmap.getWidth(),
                    backBitmap.getHeight() - DpConverter.dpToPx(50), backBitmap.getConfig());
            Canvas canvas = new Canvas(bmOverlay);
            // Draw the map snapshot at the map view's on-screen position...
            canvas.drawBitmap(snapshot, 0, mapView.getTop() + getStatusBarHeight(view), null);
            // ...then draw the activity screenshot over it (this is where the snapshot disappears).
            canvas.drawBitmap(backBitmap, 0, 0, null);
            FileOutputStream out = new FileOutputStream(Environment.getExternalStorageDirectory().getAbsolutePath()
                    + "/Pictures/AppScreenshots/" + name);
            bmOverlay.compress(Bitmap.CompressFormat.PNG, 90, out);
            out.flush();
            out.close();
            view.setDrawingCacheEnabled(false);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
};
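No accepted fix is shown here, but a likely explanation: because the map is rendered on a separate GL surface, view.getDrawingCache() contains an opaque (typically black) region where the map sits, so drawing backBitmap last simply paints over the snapshot. A minimal sketch of the usual workaround, under that assumption, is to reverse the draw order:
// Sketch only: assumes the drawing cache is opaque where the map renders.
// Draw the cached activity UI first, then the map snapshot on top of it.
Canvas canvas = new Canvas(bmOverlay);
canvas.drawBitmap(backBitmap, 0, 0, null);
canvas.drawBitmap(snapshot, 0, mapView.getTop() + getStatusBarHeight(view), null);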

Related

Android MLKit face detection not detecting faces when using Bitmap

I have an XR app where the display shows the rear camera feed, so capturing the screen is pretty much the same as capturing the camera feed.
I therefore take screenshots (Bitmaps) and then try to detect faces within them using Google's ML Kit.
I'm following the official guide to detect faces.
To do this, I first init my face detector:
FaceDetector detector;

public MyFaceDetector() {
    FaceDetectorOptions realTimeOpts =
            new FaceDetectorOptions.Builder()
                    .setContourMode(FaceDetectorOptions.CONTOUR_MODE_ALL)
                    .build();
    detector = FaceDetection.getClient(realTimeOpts);
}
I then have a function that takes a Bitmap. I first convert the Bitmap to a byte array, because InputImage.fromBitmap is very slow and ML Kit itself tells me I should use a byte array:
ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.JPEG, 85, byteArrayOutputStream);
byte[] byteArray = byteArrayOutputStream.toByteArray();
Next I make a mutable copy of the Bitmap (so that I can draw onto it), and set up a Canvas object, along with a color that will be used when drawing on to the Bitmap:
BitmapFactory.Options options = new BitmapFactory.Options();
options.inMutable = true;
Bitmap bmp = BitmapFactory.decodeByteArray(byteArray, 0, byteArray.length, options);
Canvas canvas = new Canvas(bmp);
Paint p = new Paint();
p.setColor(Color.RED);
After all is set up, I create an InputImage (used by the FaceDetector), using the byte array:
InputImage image = InputImage.fromByteArray(byteArray, bmp.getWidth(), bmp.getHeight(), 0, InputImage.IMAGE_FORMAT_NV21);
Note the image format: there is an InputImage.IMAGE_FORMAT_BITMAP, but using it throws an IllegalArgumentException. Anyway, I next try to process the Bitmap, detect faces, fill each detected face's bounding box with the color defined earlier, and then save the Bitmap to disk:
Task<List<Face>> result = detector.process(image)
        .addOnSuccessListener(new OnSuccessListener<List<Face>>() {
            @Override
            public void onSuccess(List<Face> faces) {
                Log.e("FACE DETECTION APP", "NUMBER OF FACES: " + faces.size());
                Thread processor = new Thread(new Runnable() {
                    @Override
                    public void run() {
                        for (Face face : faces) {
                            Rect destinationRect = face.getBoundingBox();
                            canvas.drawRect(destinationRect, p);
                            canvas.save();
                            Log.e("FACE DETECTION APP", "WE GOT SOME FACES!!!");
                        }
                        File file = new File(someFilePath);
                        try {
                            FileOutputStream fOut = new FileOutputStream(file);
                            bmp.compress(Bitmap.CompressFormat.JPEG, 85, fOut);
                            fOut.flush();
                            fOut.close();
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                });
                processor.start();
            }
        })
        .addOnFailureListener(new OnFailureListener() {
            @Override
            public void onFailure(@NonNull Exception e) {
                // Task failed with an exception
                // ...
            }
        });
}
While this code runs (i.e. no exceptions) and the bitmap is correctly written to disk, no faces are ever detected (faces.size() is always 0). I've tried rotating the image. I've tried changing the quality of the Bitmap. I've tried with and without the thread to process any detected faces. I've tried everything I can think of.
Anyone have any ideas?
ML Kit's InputImage.fromByteArray only supports the YV12 and NV21 formats. You will need to convert the bitmap to one of these formats for the ML Kit pipeline to process it. Also, if the original image you have is a Bitmap, you can probably just use InputImage.fromBitmap to construct an InputImage; it shouldn't be slower than your current approach.
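Worth spelling out: the byte array in the question holds JPEG-compressed data (from Bitmap.compress), not NV21 pixels, which is presumably why the detector finds nothing. Below is a minimal sketch of an ARGB-Bitmap-to-NV21 conversion; the helper name and the BT.601 integer coefficients are my own, not from ML Kit, and it assumes an even width and height:
// Sketch: convert an ARGB Bitmap to an NV21 byte array that
// InputImage.fromByteArray(..., InputImage.IMAGE_FORMAT_NV21) can accept.
private static byte[] bitmapToNv21(Bitmap bitmap) {
    int width = bitmap.getWidth();
    int height = bitmap.getHeight();
    int[] argb = new int[width * height];
    bitmap.getPixels(argb, 0, width, 0, 0, width, height);

    byte[] nv21 = new byte[width * height * 3 / 2];
    int yIndex = 0;
    int uvIndex = width * height; // interleaved V/U plane starts after the Y plane
    for (int j = 0; j < height; j++) {
        for (int i = 0; i < width; i++) {
            int p = argb[j * width + i];
            int r = (p >> 16) & 0xff, g = (p >> 8) & 0xff, b = p & 0xff;
            // Standard BT.601 RGB -> YUV integer approximation.
            int y = ((66 * r + 129 * g + 25 * b + 128) >> 8) + 16;
            nv21[yIndex++] = (byte) Math.max(0, Math.min(255, y));
            // NV21 stores one V/U pair per 2x2 pixel block, V first.
            if (j % 2 == 0 && i % 2 == 0) {
                int u = ((-38 * r - 74 * g + 112 * b + 128) >> 8) + 128;
                int v = ((112 * r - 94 * g - 18 * b + 128) >> 8) + 128;
                nv21[uvIndex++] = (byte) Math.max(0, Math.min(255, v));
                nv21[uvIndex++] = (byte) Math.max(0, Math.min(255, u));
            }
        }
    }
    return nv21;
}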
I was having the same issue; use InputImage.fromMediaImage(..., ...):
override fun analyze(image: ImageProxy) {
    val mediaImage: Image = image.image.takeIf { it != null } ?: run {
        image.close()
        return
    }
    val inputImage = InputImage.fromMediaImage(mediaImage, image.imageInfo.rotationDegrees)
    // TODO: Your ML Code
}
Check here for more details
https://developers.google.com/ml-kit/vision/image-labeling/android

Android LibVLC getBitmap from a TextureView

I am trying to retrieve a frame from a video that is playing back using LibVLC on Android. For reference, this is how I am starting LibVLC (ffmpegSv is a TextureView):
public void startMediaPlayer() {
    ArrayList<String> options = new ArrayList<>();
    options.add("--no-drop-late-frames");
    options.add("--no-skip-frames");
    options.add("-vvv");
    options.add("--no-osd");
    options.add("--rtsp-tcp");
    options.add("--no-snapshot-preview");
    options.add("--no-video-title");
    options.add("--no-spu");
    videoVlc = new LibVLC(getActivity(), options);
    TextureView surfaceView = (TextureView) getActivity().findViewById(R.id.streamView);
    newVideoMediaPlayer = new org.videolan.libvlc.MediaPlayer(videoVlc);
    final IVLCVout vOut = newVideoMediaPlayer.getVLCVout();
    vOut.setVideoSurface(ffmpegSv.getSurfaceTexture());
    vOut.setWindowSize(ffmpegSv.getWidth(), ffmpegSv.getHeight());
    vOut.attachViews();
    Media videoMedia = new Media(videoVlc, Uri.parse("rtsp://1.1.1.1/abc.mov"));
    newVideoMediaPlayer.setMedia(videoMedia);
    newVideoMediaPlayer.play();
}
And this is how I am attempting to get the bitmap from it. I should note that this method worked correctly when using the Android MediaPlayer.
@Override
public void onSurfaceTextureUpdated(SurfaceTexture surface) {
    if (mStream != null) {
        if (idx++ % 10 == 0) {
            (new Runnable() {
                @Override
                public void run() {
                    FileOutputStream out = null;
                    Bitmap b = ffmpegSv.getBitmap(ffmpegSv.getWidth(), ffmpegSv.getHeight());
                    Bitmap bm = Bitmap.createScaledBitmap(b2, 640, 480, true);
                    ByteArrayOutputStream bos = new ByteArrayOutputStream();
                    bm.compress(Bitmap.CompressFormat.JPEG, 50, bos);
                    byte[] arr = bos.toByteArray();
                    mStream.onJpegFrame(arr, 0L);
                    b.recycle();
                    bm.recycle();
                }
            }).run();
            idx = 0;
        }
    }
}
However, the image that is produced has a sliver of the original image from the TextureView around the edge, almost like a border, but the rest of the image is obscured by a black box.
The only thing I can think of is that VLC uses some sort of overlay for subtitles etc. that, when pulled out with getBitmap(), loses its transparency. However, I am not 100% sure this is the case. Is there a way to check whether this is happening, or to disable any overlays VLC could be adding?
EDIT: I have added a sample image to demonstrate the problem. You can just make out the bottom, right, and top of the background image, with a clear rectangle over the top of it.
Bitmap b = ffmpegSv.getBitmap(ffmpegSv.getWidth(), ffmpegSv.getHeight());
Bitmap bm = Bitmap.createScaledBitmap(b2, 640, 480, true);
Aren't you scaling something else here?
What is b2?
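If b2 is just a typo for the bitmap captured on the previous line, the intended code was presumably:
// Assuming b2 was meant to be b, the freshly captured TextureView bitmap:
Bitmap b = ffmpegSv.getBitmap(ffmpegSv.getWidth(), ffmpegSv.getHeight());
Bitmap bm = Bitmap.createScaledBitmap(b, 640, 480, true);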

Taking a photo upon QR code scan

I have an application that has ZXing integrated. I've been looking at storing a photo when a QR code is scanned. Sean Owen recommended the following:
"The app is getting a continuous stream of frames from the camera to analyze. You can store off any of them by intercepting them in the preview callback."
As far as I am aware, the only instance of a preview callback is within the CameraManager.java class (https://code.google.com/p/zxing/source/browse/trunk/android/src/com/google/zxing/client/android/camera/CameraManager.java).
In particular:
public synchronized void requestPreviewFrame(Handler handler, int message) {
    Camera theCamera = camera;
    if (theCamera != null && previewing) {
        previewCallback.setHandler(handler, message);
        theCamera.setOneShotPreviewCallback(previewCallback);
    }
}
Since this runs every frame, I don't have a way of saving (preferably as byte data) any particular frame. I would have assumed that at some point something is passed back to the CaptureActivity.java class (link given at bottom), but I haven't found anything myself.
Anyone who has used ZXing will know that after a scan, a ghostly image of the scan data is shown on screen; if it is possible to hijack this part of the code and convert and/or save that data as bytes, that may also be useful.
Any help or other ideas would be much appreciated. Requests for further information will be answered quickly. Thank you.
Full code available within this folder: https://code.google.com/p/zxing/source/browse/trunk#trunk%2Fandroid%2Fsrc%2Fcom%2Fgoogle%2Fzxing%2Fclient%2Fandroid
Update:
So far, the following sections of code appear to be possible places to save the byte data; both are within the DecodeHandler.java class.
private void decode(byte[] data, int width, int height) {
    long start = System.currentTimeMillis();
    Result rawResult = null;
    PlanarYUVLuminanceSource source = activity.getCameraManager().buildLuminanceSource(data, width, height);
    if (source != null) {
        BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));
        // here?
        try {
            rawResult = multiFormatReader.decodeWithState(bitmap);
        } catch (ReaderException re) {
            // continue
        } finally {
            multiFormatReader.reset();
        }
    }
    Handler handler = activity.getHandler();
    if (rawResult != null) {
        // Don't log the barcode contents for security.
        long end = System.currentTimeMillis();
        Log.d(TAG, "Found barcode in " + (end - start) + " ms");
        if (handler != null) {
            Message message = Message.obtain(handler, R.id.decode_succeeded, rawResult);
            Bundle bundle = new Bundle();
            Bitmap grayscaleBitmap = toBitmap(source, source.renderCroppedGreyscaleBitmap());
            // I believe this bitmap is the one shown on screen after a scan has been performed
            bundle.putParcelable(DecodeThread.BARCODE_BITMAP, grayscaleBitmap);
            message.setData(bundle);
            message.sendToTarget();
        }
    } else {
        if (handler != null) {
            Message message = Message.obtain(handler, R.id.decode_failed);
            message.sendToTarget();
        }
    }
}
private static Bitmap toBitmap(LuminanceSource source, int[] pixels) {
    int width = source.getWidth();
    int height = source.getHeight();
    Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    bitmap.setPixels(pixels, 0, width, 0, 0, width, height);
    // Saving the bitmap at this point, or slightly sooner (before grey-scaling), could work.
    return bitmap;
}
Update: the requested code, found within PreviewCallback.java:
public void onPreviewFrame(byte[] data, Camera camera) {
    Point cameraResolution = configManager.getCameraResolution();
    Handler thePreviewHandler = previewHandler;
    if (cameraResolution != null && thePreviewHandler != null) {
        Message message = thePreviewHandler.obtainMessage(previewMessage, cameraResolution.x,
                cameraResolution.y, data);
        message.sendToTarget();
        previewHandler = null;
    } else {
        Log.d(TAG, "Got preview callback, but no handler or resolution available");
    }
}
The data from the preview callback is in NV21 format, so if you want to save it, you can use code like this:
YuvImage im = new YuvImage(byteArray, ImageFormat.NV21, width, height, null);
Rect r = new Rect(0, 0, width, height);
ByteArrayOutputStream baos = new ByteArrayOutputStream();
im.compressToJpeg(r, 50, baos);
try {
    FileOutputStream output = new FileOutputStream("/sdcard/test_jpg.jpg");
    output.write(baos.toByteArray());
    output.flush();
    output.close();
} catch (FileNotFoundException e) {
    e.printStackTrace();
} catch (IOException e) {
    e.printStackTrace();
}
The point to save is when ZXing successfully decodes the byte[] and returns the content String.
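Tying the two pieces together: inside DecodeHandler.decode() shown above, once rawResult is non-null you can hand the same data/width/height to a small helper like the following (the helper name and the output path are illustrative, not from the answer):
import android.graphics.ImageFormat;
import android.graphics.Rect;
import android.graphics.YuvImage;
import java.io.ByteArrayOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;

// Hypothetical helper wrapping the YuvImage snippet above; call it from
// decode() after a successful scan: saveFrame(data, width, height, "/sdcard/scan.jpg");
static void saveFrame(byte[] nv21, int width, int height, String path) {
    YuvImage im = new YuvImage(nv21, ImageFormat.NV21, width, height, null);
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    im.compressToJpeg(new Rect(0, 0, width, height), 50, baos);
    try (FileOutputStream output = new FileOutputStream(path)) {
        output.write(baos.toByteArray());
        output.flush();
    } catch (IOException e) {
        e.printStackTrace();
    }
}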

Saving canvas to bitmap on Android

I'm having some difficulty placing the contents of a Canvas into a Bitmap. When I attempt this, the file gets written with a size of around 5.80 KB, but it appears to be completely empty (every pixel is '#000').
The canvas draws a series of interconnected lines formed by handwriting. Below is my onDraw for the View. (I'm aware that it blocks the UI thread, is bad practice, etc.; I just need to get it working.)
Thank you.
@Override
protected void onDraw(Canvas canvas) {
    super.onDraw(canvas);
    if (IsTouchDown) {
        // Calculate the points
        Path currentPath = new Path();
        boolean IsFirst = true;
        for (Point point : currentPoints) {
            if (IsFirst) {
                IsFirst = false;
                currentPath.moveTo(point.x, point.y);
            } else {
                currentPath.lineTo(point.x, point.y);
            }
        }
        // Draw the path of points
        canvas.drawPath(currentPath, pen);
        // Attempt to make the bitmap and write it to a file.
        Bitmap toDisk = null;
        try {
            // TODO: Get the size of the canvas, replace the 640, 480
            toDisk = Bitmap.createBitmap(640, 480, Bitmap.Config.ARGB_8888);
            canvas.setBitmap(toDisk);
            toDisk.compress(Bitmap.CompressFormat.JPEG, 100, new FileOutputStream(new File("arun.jpg")));
        } catch (Exception ex) {
        }
    } else {
        // Clear the points
        currentPoints.clear();
    }
}
I had a similar problem and found a solution. Here is the full code for the task (don't forget the android.permission.WRITE_EXTERNAL_STORAGE permission in your manifest). The key point is that this.draw(canvas) re-runs the View's drawing against the bitmap-backed canvas:
public Bitmap saveSignature() {
    Bitmap bitmap = Bitmap.createBitmap(this.getWidth(), this.getHeight(), Bitmap.Config.ARGB_8888);
    Canvas canvas = new Canvas(bitmap);
    this.draw(canvas);
    File file = new File(Environment.getExternalStorageDirectory() + "/sign.png");
    try {
        bitmap.compress(Bitmap.CompressFormat.PNG, 100, new FileOutputStream(file));
    } catch (Exception e) {
        e.printStackTrace();
    }
    return bitmap;
}
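A hypothetical usage from an Activity, assuming the method lives in a custom drawing View exposed in the layout (SignatureView and R.id.signature_view are illustrative names, not from the answer):
// Illustrative only: fetch the custom View and snapshot its current drawing.
SignatureView sv = (SignatureView) findViewById(R.id.signature_view);
Bitmap saved = sv.saveSignature();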
First create a blank bitmap, then create a canvas with that blank bitmap:
Bitmap.Config conf = Bitmap.Config.ARGB_8888;
Bitmap bitmap_object = Bitmap.createBitmap(width, height, conf);
Canvas canvas = new Canvas(bitmap_object);
Now draw your lines on the canvas:
Path currentPath = new Path();
boolean IsFirst = true;
for (Point point : currentPoints) {
    if (IsFirst) {
        IsFirst = false;
        currentPath.moveTo(point.x, point.y);
    } else {
        currentPath.lineTo(point.x, point.y);
    }
}
// Draw the path of points
canvas.drawPath(currentPath, pen);
Now access your bitmap via bitmap_object.
You'll have to draw after setting the bitmap to the canvas. Also use a new Canvas object like this:
Canvas canvas = new Canvas(toDisk);
canvas.drawPath(currentPath, pen);
toDisk.compress(Bitmap.CompressFormat.PNG, 100, new FileOutputStream(new File("arun.png")));
I recommend using PNG for saving images of paths.
You must call canvas.setBitmap(bitmap); before drawing anything on the Canvas. After calling canvas.setBitmap(bitmap);, draw on the Canvas and then save the Bitmap you passed to it.
Maybe
canvas.setBitmap(toDisk);
is not in the correct place. Try this:
toDisk = Bitmap.createBitmap(640,480,Bitmap.Config.ARGB_8888);
toDisk.compress(Bitmap.CompressFormat.JPEG, 100, new FileOutputStream(new File("arun.jpg")));
canvas.setBitmap(toDisk);

Make an Image as a background in Java Android for edit

I created an image with the following code:
Bitmap bmp = Bitmap.createBitmap(512, 512, Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(bmp);
try {
    int tile1ResID = getResources().getIdentifier("drawable/m" + String.valueOf(tileOfPoint.x)
            + "_" + String.valueOf(tileOfPoint.y), "drawable", "com.example.sabtt1");
    canvas.drawBitmap(Bitmap.createScaledBitmap(
            BitmapFactory.decodeResource(getResources(), tile1ResID), 256, 256, true), 0, 0, null);
    canvas.save();
} catch (Exception e) {
}
canvas.save();
ImageView imgMap1 = (ImageView) findViewById(R.id.imgMap1);
imgMap1.setImageDrawable(new BitmapDrawable(Bitmap.createBitmap(bmp, 0, 0, 500, 500)));
Now I want to make it the background, so that the user can add an image or draw on it.
How can I set it as the background?
Is it possible to add an image and draw by finger at the same time?
I use this code for drawing:
@Override
public void onClick(View arg0) {
    try {
        Bitmap gestureImg = gesture.getGesture().toBitmap(100, 100, 8, Color.BLACK);
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        gestureImg.compress(Bitmap.CompressFormat.PNG, 100, bos);
        byte[] bArray = bos.toByteArray();
        Intent intent = new Intent(Activity1.this, Activity2.class);
        intent.putExtra("draw", bArray);
        startActivity(intent);
    } catch (Exception e) {
        e.printStackTrace();
        // Toast duration must be a LENGTH_* constant, not milliseconds.
        Toast.makeText(Activity1.this, "No draw on the string", Toast.LENGTH_LONG).show();
    }
}
and an OnDragListener to add and move images.
I know that I should use the following code for the background:
LinearLayout ll = new LinearLayout(this);
ll.setBackgroundResource(R.drawable.nn);
this.setContentView(ll);
But when I use this code, I can't see the other images.
Thanks in advance.
You can just draw the other bitmaps onto the main bitmap:
// Load the background as a mutable bitmap so it can be drawn on
// (R.drawable.nn is the background resource mentioned in the question):
Bitmap bmp = BitmapFactory.decodeResource(getResources(), R.drawable.nn)
        .copy(Bitmap.Config.ARGB_8888, true);
Canvas canvas = new Canvas(bmp);
// Get the bitmap from the gesture (or load one), as in the question's onClick:
Bitmap gestureImg = gesture.getGesture().toBitmap(100, 100, 8, Color.BLACK);
canvas.drawBitmap(gestureImg, 0, 0, null);
then load the final bitmap:
ImageView imgMap1 = (ImageView) findViewById(R.id.imgMap1);
imgMap1.setImageDrawable(new BitmapDrawable(Bitmap.createBitmap(bmp, 0, 0, 500, 500)));
Or you can store the main bitmap and all the other gesture/loaded bitmaps in separate objects, and when you want to update imgMap1, compose the combined bitmap from them.
Also, to use the new bitmap as the background:
ll.setBackgroundDrawable( new BitmapDrawable( bmp ) );
