Taking a screenshot in a background service - Java

I am trying to make a bubble app that takes screenshots of other apps.
I found this project link with a media projection sample, but the images are not being saved to the device.
Is there a way I can save this image to the device, or is there another way to take a screenshot from my app in a background service?

I have checked the link, and I think there is a way to save the screenshot.
From the ImageTransmogrifier class:
@Override
public void onImageAvailable(ImageReader reader) {
    final Image image = imageReader.acquireLatestImage();

    if (image != null) {
        Image.Plane[] planes = image.getPlanes();
        ByteBuffer buffer = planes[0].getBuffer();
        int pixelStride = planes[0].getPixelStride();
        int rowStride = planes[0].getRowStride();
        int rowPadding = rowStride - pixelStride * width;
        int bitmapWidth = width + rowPadding / pixelStride;

        if (latestBitmap == null ||
                latestBitmap.getWidth() != bitmapWidth ||
                latestBitmap.getHeight() != height) {
            if (latestBitmap != null) {
                latestBitmap.recycle();
            }

            latestBitmap = Bitmap.createBitmap(bitmapWidth,
                    height, Bitmap.Config.ARGB_8888);
        }

        latestBitmap.copyPixelsFromBuffer(buffer);
        image.close();

        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        Bitmap cropped = Bitmap.createBitmap(latestBitmap, 0, 0,
                width, height);

        cropped.compress(Bitmap.CompressFormat.PNG, 100, baos);

        byte[] newPng = baos.toByteArray();
        svc.processImage(newPng);
    }
}
As you can notice, there is a latestBitmap object there; from that point you can save the bitmap wherever you want.
E.g. you can refer to this link https://www.simplifiedcoding.net/android-save-bitmap-to-gallery/ to save it to your Gallery :)
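For instance, here is a minimal sketch of writing the PNG bytes that reach svc.processImage(newPng) into app-specific external storage. The helper name, directory, and file name are placeholder assumptions, not part of the original project:

// Hypothetical helper; call it with the newPng byte array, e.g. saveScreenshot(this, newPng) from the service
private void saveScreenshot(Context context, byte[] pngBytes) throws IOException {
    File dir = context.getExternalFilesDir(Environment.DIRECTORY_PICTURES);
    File out = new File(dir, "screenshot_" + System.currentTimeMillis() + ".png");
    try (FileOutputStream fos = new FileOutputStream(out)) {
        fos.write(pngBytes);  // the bytes are already PNG-encoded, so just write them out
    }
}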

Related

Barcode generation and drawing image on PDF is very time consuming

I'm troubleshooting an issue where drawing an image onto a PDF document using the PDFBox library seems to take an excessive amount of time (compared to PHP, where it's almost instant). The code in question first creates a bar/QR code using the Google com.google.zxing library and then adds it to the PDF document. Here are the methods in question:
Create the Barcode (one of the examples):
private static byte[] generateBarCode(String barcodeText, int width, int height, Writer writer, BarcodeFormat barcodeFormat) throws IOException, WriterException {
    if (Objects.isNull(barcodeText) || width <= 0 || height <= 0) {
        throw new IllegalArgumentException(String.format("generateBarCode: wrong input values %s %s %s %s", barcodeText, width, height, barcodeFormat.name()));
    }
    Map<EncodeHintType, Object> hintMap = new EnumMap<>(EncodeHintType.class);
    hintMap.put(EncodeHintType.MARGIN, 0);
    BitMatrix bitMatrix = writer.encode(barcodeText, barcodeFormat, width, height, hintMap);
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    MatrixToImageWriter.writeToStream(bitMatrix, "png", bos);
    bos.close();
    return bos.toByteArray();
}
This is then passed to the method below as the byte[] image.
Draw the image on the PDF Document:
public static void drawImage(PDPageContentStream contentStream, int x, int y, byte[] image, PDDocument document, Integer width, Integer height) throws IOException {
    ByteArrayInputStream bais = new ByteArrayInputStream(image);
    BufferedImage bim = ImageIO.read(bais);
    PDImageXObject pdImage = LosslessFactory.createFromImage(document, bim);
    if (width != null && height != null) { // if we set the image width/height
        contentStream.drawImage(pdImage, x, y, width, height);
        return;
    }
    contentStream.drawImage(pdImage, x, y);
}
At times we need to create a PDF with hundreds of pages, and each page/item has a different barcode that needs to be created and then drawn, along with a bunch of text and other non-image items. This can take upwards of 10 minutes for a 1000-page document, for example. However, if I comment out the barcode generation and just draw an existing image from my classpath, it completes in ~60 seconds (which I still think is too long). If I comment out both, the 1000-page PDF is rendered almost instantly from the template plus all my text.
Is there a better way of doing this that I am overlooking?
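One thing that stands out (this is a sketch, not an answer from the original thread) is the PNG round trip: the BitMatrix is encoded to PNG bytes and then immediately decoded back into a BufferedImage for PDFBox. ZXing can hand back the BufferedImage directly via MatrixToImageWriter.toBufferedImage, so the two steps could be merged roughly like this:

private static PDImageXObject generateBarCodeImage(PDDocument document, String barcodeText,
        int width, int height, Writer writer, BarcodeFormat barcodeFormat) throws WriterException, IOException {
    Map<EncodeHintType, Object> hintMap = new EnumMap<>(EncodeHintType.class);
    hintMap.put(EncodeHintType.MARGIN, 0);
    BitMatrix bitMatrix = writer.encode(barcodeText, barcodeFormat, width, height, hintMap);
    // Convert the matrix straight to a BufferedImage; no PNG encode/decode round trip
    BufferedImage image = MatrixToImageWriter.toBufferedImage(bitMatrix);
    return LosslessFactory.createFromImage(document, image);
}

The resulting PDImageXObject can then be drawn with contentStream.drawImage as before; whether this removes most of the measured time would have to be profiled.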

Out of memory when showing a sticker in an ImageView

A list of images and stickers (WebP format) must be shown in a RecyclerView.
To show a sticker in an ImageView, this repository (https://github.com/EverythingMe/webp-android) is used; it was one of the suggested solutions in this post (WebP for Android).
The sticker file is read from external storage and converted to a byte array; using the library from the repository, the byte array is converted to a bitmap, and finally the bitmap is shown in the ImageView. The code below converts a sticker file to a bitmap:
private void ShowStickerOnImageView(String stickerPath) {
    File file = new File(stickerPath);
    int size = (int) file.length();
    byte[] bytes = new byte[size];
    BufferedInputStream buf = new BufferedInputStream(new FileInputStream(file));
    buf.read(bytes, 0, bytes.length);
    buf.close();

    Bitmap bitmap = null;
    boolean NATIVE_WEB_P_SUPPORT = Build.VERSION.SDK_INT >= Build.VERSION_CODES.JELLY_BEAN_MR2;
    if (!NATIVE_WEB_P_SUPPORT) {
        bitmap = WebPDecoder.getInstance().decodeWebP(bytes);
    } else {
        bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
    }
    holder.imageView.setImageBitmap(bitmap);
}
.....
public Bitmap decodeWebP(byte[] encoded, int w, int h) {
    int[] width = new int[]{w};
    int[] height = new int[]{h};
    byte[] decoded = decodeRGBAnative(encoded, encoded.length, width, height);
    if (decoded.length == 0) return null;

    int[] pixels = new int[decoded.length / 4];
    ByteBuffer.wrap(decoded).asIntBuffer().get(pixels);
    return Bitmap.createBitmap(pixels, width[0], height[0], Bitmap.Config.ARGB_8888);
}
When NATIVE_WEB_P_SUPPORT is false, the decodeWebP method is called. This method works fine most of the time, but sometimes an 'out of memory' error occurs in it, usually on these lines:
int[] pixels = new int[decoded.length / 4];
ByteBuffer.wrap(decoded).asIntBuffer().get(pixels);
return Bitmap.createBitmap(pixels, width[0], height[0], Bitmap.Config.ARGB_8888);
I found that the byte array of the sticker file is large. Can I decrease the sticker file size programmatically? I want to find a solution that decreases the byte array size.
You are creating a Bitmap at its native size, but applying it to an ImageView. Scale the Bitmap down to the size of the view:
Bitmap yourThumbnail = Bitmap.createScaledBitmap(
        theOriginalBitmap,
        desiredWidth,
        desiredHeight,
        false
);
Do note that:
public static Bitmap createBitmap(int colors[], int width, int height, Config config) {
    return createBitmap(null, colors, 0, width, width, height, config);
}
Will call
public static Bitmap createBitmap(DisplayMetrics display, int colors[],
        int offset, int stride, int width, int height, Config config)
And that will lead to:
Bitmap bm = nativeCreate(colors, offset, stride, width, height,
        config.nativeInt, false);
Basically, you cannot create a huge Bitmap in memory for no reason. If this is for phones, assume roughly a 20 MB heap budget for the application.
An 800*600*4 image yields 1,920,000 bytes. Lower the image quality, for example by using RGB_565 (half the bytes per pixel compared with ARGB_8888), or pre-scale your source bitmap.
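Putting that together, a minimal sketch of downscaling right after decoding (the 256-pixel fallback size is an arbitrary assumption; ideally use the measured ImageView dimensions):

Bitmap full = WebPDecoder.getInstance().decodeWebP(bytes);
// Use the view's real size once it has been laid out; the fallback value is a placeholder
int targetW = holder.imageView.getWidth() > 0 ? holder.imageView.getWidth() : 256;
int targetH = holder.imageView.getHeight() > 0 ? holder.imageView.getHeight() : 256;
Bitmap scaled = Bitmap.createScaledBitmap(full, targetW, targetH, true);
if (scaled != full) {
    full.recycle();  // release the large native-size bitmap immediately
}
holder.imageView.setImageBitmap(scaled);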

Android Camera Preview - take only a part of the screen data

I want to take only a part of the screen data from a preview video callback, to reduce processing time. The problem is I only know how to take the whole screen with onPreviewFrame:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    myData = data;
    // +get camera resolution x, y
}
And then with this data I get the image:
private Bitmap getBitmapFromYUV(byte[] data, int width, int height) {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    YuvImage yuvImage = new YuvImage(data, ImageFormat.NV21, width, height, null);
    yuvImage.compressToJpeg(new Rect(0, 0, width, height), 100, out);
    byte[] imageBytes = out.toByteArray();
    Bitmap image = BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length);
    return image;
}
And then I take the part of the image I want:
cutImage = Bitmap.createBitmap(image, xOffset, yOffset, customWidth, customHeight);
The problem is that I need to take lots of images to apply some image processing to them, and that's why I want to reduce the time it takes to get them. Instead of taking the whole screen and then cropping it, I want to get the cropped image immediately. Is there a way to get only part of the screen data?
OK, I finally found something. I still record all the data from the camera, but when using compressToJpeg I crop the picture with a custom Rect. Maybe there is something better to do before this, but it is still a good improvement. Here are my changes:
yuvImage.compressToJpeg(new Rect(offsetX, offsetY, sizeCaptureX + offsetX, sizeCaptureY + offsetY ), 100, out);
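For reference, the whole conversion method with the crop applied might look like the sketch below (the offset and size variables are assumed to be defined elsewhere, as in the snippet above):

private Bitmap getCroppedBitmapFromYUV(byte[] data, int width, int height,
        int offsetX, int offsetY, int sizeCaptureX, int sizeCaptureY) {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    YuvImage yuvImage = new YuvImage(data, ImageFormat.NV21, width, height, null);
    // Compress only the region of interest instead of the full frame
    yuvImage.compressToJpeg(new Rect(offsetX, offsetY, sizeCaptureX + offsetX, sizeCaptureY + offsetY), 100, out);
    byte[] imageBytes = out.toByteArray();
    return BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length);
}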

Android: Really bad image quality when saving bitmap to sdcard

I am making an OCR app for Android that will take a screenshot of some text, recognise it, and search a keyword on Google. If you haven't already realized, I'm trying to make a "Google Now on Tap" clone.
To make the OCR work better, I first rotate the image and then filter it: first by getting rid of the status bar and the navigation bar, then converting it to grayscale, then sharpening.
But the image quality after filtering is extremely pixelated, and this greatly affects OCR accuracy.
Here are the images, before and after (just of an IFTTT email I got)
As you can see, the before image is much higher quality than the filtered and rotated one.
Here is my code for rotating, filtering and saving the image:
First taking the screenshot, then saving it.
public void getScreenshot() {
    try {
        Process sh = Runtime.getRuntime().exec("su", null, null);
        OutputStream os = sh.getOutputStream();
        os.write(("/system/bin/screencap -p " + _path).getBytes("ASCII"));
        os.flush();
        os.close();
        sh.waitFor();

        onPhotoTaken();
        Toast.makeText(this, "Screenshot taken", Toast.LENGTH_SHORT).show();
    } catch (IOException e) {
        System.out.println("IOException");
    } catch (InterruptedException e) {
        System.out.println("InterruptedException");
    }
}
Then, rotate the image:
protected void onPhotoTaken() {
    _taken = true;

    BitmapFactory.Options options = new BitmapFactory.Options();
    options.inSampleSize = 4;

    Bitmap bitmap = BitmapFactory.decodeFile(_path, options);

    try {
        ExifInterface exif = new ExifInterface(_path);
        int exifOrientation = exif.getAttributeInt(
                ExifInterface.TAG_ORIENTATION,
                ExifInterface.ORIENTATION_NORMAL);

        Log.v(TAG, "Orient: " + exifOrientation);

        int rotate = 0;
        switch (exifOrientation) {
            case ExifInterface.ORIENTATION_ROTATE_90:
                rotate = 90;
                break;
            case ExifInterface.ORIENTATION_ROTATE_180:
                rotate = 180;
                break;
            case ExifInterface.ORIENTATION_ROTATE_270:
                rotate = 270;
                break;
        }

        Log.v(TAG, "Rotation: " + rotate);

        if (rotate != 0) {
            // Getting width & height of the given image.
            int w = bitmap.getWidth();
            int h = bitmap.getHeight();

            // Setting pre rotate
            Matrix mtx = new Matrix();
            mtx.preRotate(rotate);

            // Rotating Bitmap
            bitmap = Bitmap.createBitmap(bitmap, 0, 0, w, h, mtx, false);
        }

        // Convert to ARGB_8888, required by tess
        bitmap = bitmap.copy(Bitmap.Config.ARGB_8888, true);
    } catch (IOException e) {
        Log.e(TAG, "Couldn't correct orientation: " + e.toString());
    }

    // _image.setImageBitmap( bitmap );
    setImageFilters(bitmap);
}
Then, filter the image:
public void setImageFilters(Bitmap bmpOriginal) {
    // Start by cropping image
    Bitmap croppedBitmap = ThumbnailUtils.extractThumbnail(bmpOriginal, 1080, 1420);

    // Then convert to grayscale
    int width, height;
    height = 1420;
    width = 1080;
    Bitmap bmpGrayscale = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    Canvas c = new Canvas(bmpGrayscale);
    Paint paint = new Paint();
    ColorMatrix cm = new ColorMatrix();
    cm.setSaturation(0);
    ColorMatrixColorFilter f = new ColorMatrixColorFilter(cm);
    paint.setColorFilter(f);
    c.drawBitmap(croppedBitmap, 0, 0, paint);

    // Finally, sharpen the image
    double weight = 11;
    double[][] sharpConfig = new double[][] {
            { 0 , -2 , 0 },
            { -2, weight, -2 },
            { 0 , -2 , 0 }
    };
    ConvolutionMatrix convMatrix = new ConvolutionMatrix(3);
    convMatrix.applyConfig(sharpConfig);
    convMatrix.Factor = weight - 8;
    Bitmap filteredBitmap = ConvolutionMatrix.computeConvolution3x3(bmpGrayscale, convMatrix);

    // Start Optical Character Recognition
    startOCR(filteredBitmap);

    // Save filtered image
    saveFiltered(filteredBitmap);
}
Then, saving the filtered and rotated image:
public void saveFiltered(Bitmap filteredBmp) {
    try {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        filteredBmp.compress(Bitmap.CompressFormat.JPEG, 20, bytes);

        // You can create a new file name "test.jpg" in sdcard folder.
        File f = new File("/sdcard/SimpleAndroidOCR/ocrgray.jpg");
        f.createNewFile();

        // Write the bytes in file
        FileOutputStream fo = new FileOutputStream(f);
        fo.write(bytes.toByteArray());

        // Remember to close the FileOutputStream
        fo.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Thanks heaps to anyone taking the time to help.
It was actually in my onPhotoTaken method. After taking and saving the screenshot in getScreenshot, I am reading the file from the location it was saved to, then filtering it. I changed this line in the onPhotoTaken method:
options.inSampleSize = 4 to options.inSampleSize = 1
It does look like the JPEG compression is messing the image up. Try using a format better suited for images with sharp edges, such as text. I would recommend PNG or even GIF. You could also store the uncompressed BMP.
JPEG compression works by exploiting the fact that in most pictures (nature, people, objects) sharp edges are not that visible to the human eye. This makes it really bad for storing sharp-edged content, such as text.
Also, your image filter is effectively removing the anti-aliasing of the image, which further decreases the perceived image quality. That might be what you want, though, since it might make OCR easier.
I also missed the sampling size, because the images you uploaded appear at the same size here on the site. From the Android documentation:
If set to a value > 1, requests the decoder to subsample the original image, returning a smaller image to save memory. The sample size is the number of pixels in either dimension that correspond to a single pixel in the decoded bitmap. For example, inSampleSize == 4 returns an image that is 1/4 the width/height of the original, and 1/16 the number of pixels. Any value <= 1 is treated the same as 1. Note: the decoder uses a final value based on powers of 2; any other value will be rounded down to the nearest power of 2.
Setting options.inSampleSize = 4; to 1 instead will increase the quality.
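Combining both suggestions, a sketch of the save method using lossless PNG instead of quality-20 JPEG (the output path is kept from the original post; only the format changes):

public void saveFiltered(Bitmap filteredBmp) {
    try {
        File f = new File("/sdcard/SimpleAndroidOCR/ocrgray.png");
        f.createNewFile();
        FileOutputStream fo = new FileOutputStream(f);
        // PNG is lossless, so the quality argument (100 here) is ignored
        filteredBmp.compress(Bitmap.CompressFormat.PNG, 100, fo);
        fo.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}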

Canvas bitmap to byte array and reading it back trouble

I have a Canvas I draw on. I'm trying to take the bitmap out, convert it to a byte array, and save it serialized into a file; then later open, deserialize, and apply the bitmap back to the canvas. In the code below everything seems to work well, except that when applying the bitmap to the canvas nothing appears. Can someone please show me where I'm going wrong?
public byte[] getCanvasData() {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    mBitmap.compress(CompressFormat.PNG, 0, bos);
    byte[] bitmapdata = bos.toByteArray();
    return bitmapdata;
}

public void setCanvasData(byte[] canvasData, int w, int h) {
    mBitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
    mBitmap.eraseColor(0x00000000);
    mCanvas = new Canvas(mBitmap);
    mCanvas.drawBitmap(BitmapFactory.decodeByteArray(canvasData, 0, canvasData.length).copy(Bitmap.Config.ARGB_8888, true), 0, 0, null);
}
ADDED SOME EXTRA CODE TO POSSIBLY HELP A LITTLE
public void readInSerialisable() throws IOException {
    FileInputStream fileIn = new FileInputStream("/sdcard/theBKup.ser");
    ObjectInputStream in = new ObjectInputStream(fileIn);
    try {
        BookData book = (BookData) in.readObject();
        pages.clear();
        canvasContainer.removeAllViews();
        for (int i = 0; i < book.getBook().size(); i++) {
            Log.d("CREATION", "LOADING PAGE " + i);
            pages.add(new Canvas2(context, book.getPageAt(i), canvasContainer.getWidth(), canvasContainer.getHeight()));
        }
        canvasContainer.addView(pages.get(page), new AbsoluteLayout.LayoutParams(AbsoluteLayout.LayoutParams.FILL_PARENT, AbsoluteLayout.LayoutParams.FILL_PARENT, 0, 0));
        updatePagination();
        Log.d("CREATION", "Updated Pagination");
    } catch (Exception exc) {
        System.out.println("didnt work");
        exc.printStackTrace();
    }
}
BookData - a Serializable class containing all my data; simple getters/setters in there.
onDraw Method
@Override
protected void onDraw(Canvas canvas) {
    Log.d("DRAWING", "WE ARE DRAWING");
    canvas.drawColor(0x00AAAAAA); // MAKE CANVAS TRANSPARENT
    canvas.drawBitmap(mBitmap, 0, 0, mBitmapPaint);
    canvas.drawPath(mPath, mPaint);
}
I would do the following two tests.
1. Log some of the byte stream to make sure that it was loaded correctly. Something like Log.v(TAG, "" + canvasData[0] + canvasData[1]);, or put a break point there, or anything else that confirms the data is correct.
2. Draw a bitmap that you know is valid, using the same code, and see if it appears correctly.
I'm not sure exactly what's going on, but I strongly suspect one of the following:
The byte stream is not being read in correctly.
The bitmap is not being updated on the screen, or is using a trivially small size.
In the event that your byte stream data does contain something, you will want to take a look at the Canvas documentation. Specifically, look at the following bit:
In order to see a Canvas, it has to be put onto a View. Once it is on a View, onDraw() must be called for it to become visible. I would make sure that you are in fact triggering onDraw(), and that the Canvas is associated with the View correctly. If you are already using onDraw(), please post the bits of code associated with it.
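For instance, a minimal sketch of forcing a redraw after restoring the bitmap, assuming setCanvasData lives in your custom View subclass (the invalidate() call is the addition here, not part of the original code):

public void setCanvasData(byte[] canvasData, int w, int h) {
    mBitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
    mCanvas = new Canvas(mBitmap);
    Bitmap restored = BitmapFactory.decodeByteArray(canvasData, 0, canvasData.length);
    mCanvas.drawBitmap(restored.copy(Bitmap.Config.ARGB_8888, true), 0, 0, null);
    // Schedule onDraw(); without this the restored bitmap is never pushed to the screen
    invalidate();
}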
