I am trying to take a screenshot of all the content of a ScrollView (the off-screen parts as well). The problem is that when it gets to the line Bitmap b = u.getDrawingCache(); a NullPointerException is thrown. I can't seem to figure out what is causing this. This is the code I have:
if (v.getId() == R.id.btnSnap)
{
    try
    {
        View u = findViewById(R.id.svContent);
        u.setDrawingCacheEnabled(true);

        ScrollView z = (ScrollView) findViewById(R.id.svContent);
        int totalHeight = z.getChildAt(0).getHeight();
        int totalWidth = z.getChildAt(0).getWidth();

        u.layout(0, 0, totalWidth, totalHeight);
        u.buildDrawingCache(true);
        Bitmap b = u.getDrawingCache();

        // Just so that I can get a quick preview of
        // what the picture looks like
        ImageView img = (ImageView) findViewById(R.id.ivTest);
        img.setImageBitmap(b);

        // Save the picture
        File fileName = new File(context.getExternalFilesDir(null) + "/ArcFlash/SafetyCheckList1.png");
        FileOutputStream out = new FileOutputStream(fileName);
        b.compress(Bitmap.CompressFormat.PNG, 90, out);
        out.close();
    }
    catch (Exception ex)
    {
        handler.Log(ex.getMessage(), "onClick(SafetyCheckList.java)", context);
    }
}
EDIT: The problem seems to be at u.layout(0, 0, totalWidth, totalHeight);
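For what it's worth, getDrawingCache() commonly returns null when the cached bitmap would exceed the maximum drawing cache size, which is easy to hit with a tall ScrollView child. Below is a sketch of an alternative that skips the drawing cache entirely and draws the child straight into a bitmap sized to the full content (the white background fill is my assumption; the file path is the one from the question):

ScrollView scroll = (ScrollView) findViewById(R.id.svContent);
View content = scroll.getChildAt(0);

Bitmap b = Bitmap.createBitmap(content.getWidth(), content.getHeight(),
        Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(b);
canvas.drawColor(Color.WHITE);   // otherwise the background is transparent
content.draw(canvas);            // renders the whole child, including off-screen parts

File fileName = new File(context.getExternalFilesDir(null) + "/ArcFlash/SafetyCheckList1.png");
FileOutputStream out = new FileOutputStream(fileName);
b.compress(Bitmap.CompressFormat.PNG, 90, out);
out.close();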
Related
Here is some code to take a screenshot, but the problem is that it captures the application only, not the whole screen (back and home buttons and the notification bar).
Is there any way I can take a screenshot of the whole screen, not only the application?
private void takeScreenshot() {
Date now = new Date();
CharSequence timestamp = android.text.format.DateFormat.format("yyyy-MM-dd_hh:mm:ss", now);
try {
// image name and path: the formatted timestamp is used as the file name on the sd card
String mPath = Environment.getExternalStorageDirectory().toString() + "/" + timestamp + ".jpg";
// create bitmap screen capture
View v1 = getWindow().getDecorView().getRootView();
v1.setDrawingCacheEnabled(true);
Bitmap bitmap = Bitmap.createBitmap(v1.getDrawingCache());
v1.setDrawingCacheEnabled(false);
File imageFile = new File(mPath);
FileOutputStream outputStream = new FileOutputStream(imageFile);
int quality = 100;
bitmap.compress(Bitmap.CompressFormat.JPEG, quality, outputStream);
outputStream.flush();
outputStream.close();
// openScreenshot(imageFile);
} catch (Throwable e) {
// Several errors may come out of file handling or DOM
e.printStackTrace();
}
}
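The decor view can only ever contain the app's own windows, so no amount of drawing-cache work will capture the status bar or the navigation buttons; they are drawn by other processes. On API 21+ the supported route is the MediaProjection API, which asks the user for permission and then mirrors the entire screen. A condensed sketch (the request code, handler usage, and the bitmap copy are my assumptions, not a drop-in implementation):

private static final int REQUEST_SCREENSHOT = 1001;   // arbitrary
private MediaProjectionManager projectionManager;

private void requestFullScreenshot() {
    projectionManager = (MediaProjectionManager)
            getSystemService(Context.MEDIA_PROJECTION_SERVICE);
    startActivityForResult(projectionManager.createScreenCaptureIntent(), REQUEST_SCREENSHOT);
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode != REQUEST_SCREENSHOT || resultCode != RESULT_OK) return;
    final MediaProjection projection = projectionManager.getMediaProjection(resultCode, data);
    DisplayMetrics dm = getResources().getDisplayMetrics();
    final int width = dm.widthPixels;
    final int height = dm.heightPixels;
    final ImageReader reader = ImageReader.newInstance(width, height, PixelFormat.RGBA_8888, 2);
    projection.createVirtualDisplay("screenshot", width, height, dm.densityDpi,
            DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR,
            reader.getSurface(), null, null);
    reader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
        @Override
        public void onImageAvailable(ImageReader r) {
            Image image = r.acquireLatestImage();
            if (image == null) return;
            Image.Plane plane = image.getPlanes()[0];
            int rowPadding = plane.getRowStride() - plane.getPixelStride() * width;
            Bitmap padded = Bitmap.createBitmap(width + rowPadding / plane.getPixelStride(),
                    height, Bitmap.Config.ARGB_8888);
            padded.copyPixelsFromBuffer(plane.getBuffer());
            image.close();
            projection.stop();
            // crop the row-stride padding off the right edge, then save it like the code above
            Bitmap screenshot = Bitmap.createBitmap(padded, 0, 0, width, height);
        }
    }, new Handler(Looper.getMainLooper()));
}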
I am adding text to an image using this code in Android:
public Bitmap drawTextToBitmap(Context gContext, Bitmap image, String gText) {
    Resources resources = gContext.getResources();
    float scale = resources.getDisplayMetrics().density;
    android.graphics.Bitmap.Config bitmapConfig = image.getConfig();
    // set default bitmap config if none
    if (bitmapConfig == null) {
        bitmapConfig = android.graphics.Bitmap.Config.ARGB_8888;
    }
    // resource bitmaps are immutable,
    // so we need to convert it to a mutable one
    Bitmap bitmap = null;
    try {
        bitmap = image.copy(bitmapConfig, true);
        image.recycle();
        Canvas canvas = new Canvas(bitmap);
        // new anti-aliased Paint
        Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
        // text color
        paint.setColor(Color.WHITE);
        // text size in pixels
        paint.setTextSize((int) (50 * scale));
        // text shadow
        paint.setShadowLayer(1f, 0f, 1f, Color.BLACK);
        // draw text near the bottom-right corner of the Canvas
        Rect bounds = new Rect();
        paint.getTextBounds(gText, 0, gText.length(), bounds);
        int padding = bounds.height() / 2;
        int x = bitmap.getWidth() - (bounds.width() + padding);
        int y = (bitmap.getHeight() - (bounds.height() + padding));
        canvas.drawText(gText, x, y, paint);
    } catch (Throwable e) {
        AppLog.e("DrawingBitmap", "error while adding timestamp", e);
    }
    return bitmap;
}
Then I create a new File with the transformed bitmap
storeImage(newBitmap, newFileName);
private File storeImage(Bitmap image, String nameFile) {
File pictureFile = new File(getExternalCacheDir(), nameFile);
try {
FileOutputStream fos = new FileOutputStream(pictureFile);
image.compress(Bitmap.CompressFormat.JPEG, 100, fos);
fos.close();
} catch (FileNotFoundException e) {
AppLog.e("error creating bitmap", "File not found: " + e.getMessage());
} catch (IOException e) {
AppLog.e("error creating bitmap", "Error accessing file: " + e.getMessage());
}
return pictureFile;
}
I send the file to my server, where I receive an input stream, create a File, scale it, and create a new File with the scaled image:
ImageWriter.write(metadata, new IIOImage(image, null, metadata), param);
I get an IIOException:
javax.imageio.IIOException: Missing Huffman code table entry
at com.sun.imageio.plugins.jpeg.JPEGImageWriter.writeImage(Native Method)
at com.sun.imageio.plugins.jpeg.JPEGImageWriter.writeOnThread(JPEGImageWriter.java:1067)
at com.sun.imageio.plugins.jpeg.JPEGImageWriter.write(JPEGImageWriter.java:363)
at com.twelvemonkeys.imageio.plugins.jpeg.JPEGImageWriter.write(JPEGImageWriter.java:162)
If I don't call drawTextToBitmap() on Android, I don't get that error.
If someone can help me... thanks.
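For reference, a stripped-down sketch of the kind of server-side scale-and-rewrite that hits this exception (sourceFile, targetFile, and the half-size scaling are placeholders of mine; readMetaData() is the method shown in the EDIT below). The usual explanation is that the copied JPEG metadata references Huffman tables that the writer will not emit on its own:

BufferedImage original = ImageIO.read(sourceFile);

// scale to half size
int w = original.getWidth() / 2, h = original.getHeight() / 2;
BufferedImage scaled = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
Graphics2D g = scaled.createGraphics();
g.drawImage(original, 0, 0, w, h, null);
g.dispose();

IIOMetadata metadata = readMetaData(sourceFile);

ImageWriter writer = ImageIO.getImageWritersByFormatName("jpeg").next();
ImageWriteParam param = writer.getDefaultWriteParam();
ImageOutputStream out = ImageIO.createImageOutputStream(targetFile);
writer.setOutput(out);
// this is the call that fails with "Missing Huffman code table entry"
writer.write(metadata, new IIOImage(scaled, null, metadata), param);
writer.dispose();
out.close();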
EDIT: here is how I get the metadata from my file:
private static IIOMetadata readMetaData(File source) {
try {
ImageInputStream stream = ImageIO.createImageInputStream(source);
Iterator<ImageReader> readers = ImageIO.getImageReaders(stream);
IIOMetadata metadata = null;
if (readers.hasNext()) {
ImageReader reader = readers.next();
reader.setInput(stream);
metadata = reader.getImageMetadata(0);
}
return metadata;
}catch (Exception e){
_logger.error(e);
e.printStackTrace();
}
return null;
}
Edit 2:
Using jpegParams.setOptimizeHuffmanTables(true); works, but it resets all the metadata, and I want to keep it (GPS location, etc.).
ImageWriteParam param = writer.getDefaultWriteParam();
param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
param.setCompressionQuality(quality);
if (param instanceof JPEGImageWriteParam) {
((JPEGImageWriteParam) param).setOptimizeHuffmanTables(true);
}
IIOImage image = new IIOImage(reader.read(0), null, reader.getImageMetadata(0));
writer.write(null, image, param);
The code above keeps the image metadata and gets rid of the "missing Huffman code table" error.
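Filled out a bit, here is the same fix with the reader/writer plumbing that the fragment above leaves implicit (sourceFile, targetFile, and the 0.9f quality are placeholders of mine):

ImageInputStream in = ImageIO.createImageInputStream(sourceFile);
ImageOutputStream out = ImageIO.createImageOutputStream(targetFile);

ImageReader reader = ImageIO.getImageReaders(in).next();
reader.setInput(in);
ImageWriter writer = ImageIO.getImageWritersByFormatName("jpeg").next();
writer.setOutput(out);

ImageWriteParam param = writer.getDefaultWriteParam();
param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
param.setCompressionQuality(0.9f);   // quality as a float in [0, 1]
if (param instanceof JPEGImageWriteParam) {
    // recompute the Huffman tables instead of relying on the copied metadata
    ((JPEGImageWriteParam) param).setOptimizeHuffmanTables(true);
}

// pass the original image metadata (EXIF, GPS, ...) through untouched
IIOImage image = new IIOImage(reader.read(0), null, reader.getImageMetadata(0));
writer.write(null, image, param);

writer.dispose();
reader.dispose();
out.close();
in.close();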
My requirement is: after opening the application there will be a button which opens the camera application and takes a picture of two barcodes; that bitmap will then be given to my application, and the barcode decoder will decode the two barcodes and display the result.
Is it possible?
If yes, then how?
Sure it is possible.
There are a lot of barcode decoding libraries, for example https://github.com/zxing/zxing/
Integrate the library, follow its tutorial, and pass the bitmap to it. Then display the result after decoding.
You can try MultiFormatReader from the ZXing library.
Sample code:
try
{
    InputStream inputStream = activity.getContentResolver().openInputStream(uri);
    Bitmap bitmap = BitmapFactory.decodeStream(inputStream);
    if (bitmap == null)
    {
        Log.e(TAG, "uri is not a bitmap," + uri.toString());
        return null;
    }
    int width = bitmap.getWidth(), height = bitmap.getHeight();
    int[] pixels = new int[width * height];
    bitmap.getPixels(pixels, 0, width, 0, 0, width, height);
    bitmap.recycle();
    bitmap = null;
    RGBLuminanceSource source = new RGBLuminanceSource(width, height, pixels);
    BinaryBitmap bBitmap = new BinaryBitmap(new HybridBinarizer(source));
    MultiFormatReader reader = new MultiFormatReader();
    try
    {
        Result result = reader.decode(bBitmap);
        return result;
    }
    catch (NotFoundException e)
    {
        Log.e(TAG, "decode exception", e);
        return null;
    }
}
catch (FileNotFoundException e)
{
    Log.e(TAG, "can not open file" + uri.toString(), e);
    return null;
}
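To wire this to the original requirement end to end: launch the stock camera app with an intent, then feed the returned bitmap into the decoder. Since there are two barcodes in one picture, ZXing's GenericMultipleBarcodeReader can return several results from a single image. A sketch (the request code and the use of the small "data" thumbnail are my simplifications; for reliable decoding you would pass MediaStore.EXTRA_OUTPUT and load the full-size file instead):

private static final int REQUEST_TAKE_PHOTO = 42;

private void takeBarcodePhoto() {
    Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
    startActivityForResult(intent, REQUEST_TAKE_PHOTO);
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == REQUEST_TAKE_PHOTO && resultCode == RESULT_OK && data != null) {
        Bitmap bitmap = (Bitmap) data.getExtras().get("data");   // small thumbnail
        int width = bitmap.getWidth(), height = bitmap.getHeight();
        int[] pixels = new int[width * height];
        bitmap.getPixels(pixels, 0, width, 0, 0, width, height);
        BinaryBitmap bBitmap = new BinaryBitmap(
                new HybridBinarizer(new RGBLuminanceSource(width, height, pixels)));
        try {
            // decodeMultiple() can find both barcodes in the one photo
            Result[] results = new GenericMultipleBarcodeReader(new MultiFormatReader())
                    .decodeMultiple(bBitmap);
            for (Result r : results) {
                Log.d(TAG, "Barcode: " + r.getText());
            }
        } catch (NotFoundException e) {
            Log.e(TAG, "no barcode found", e);
        }
    }
}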
I am making an OCR app for Android, that will take a screenshot of some text, recognise it and search a key word on Google. If you haven't already realized, I'm trying to make a "Google Now on Tap" clone.
To make the OCR work better, I first rotate the image and then filter it: cropping out the status bar and the navigation bar, converting to grayscale, then sharpening.
But the image quality after filtering is extremely pixelated, and this greatly affects OCR accuracy.
Here are the images, before and after (just of an IFTTT email I got)
As you can see, the before image is much higher quality than the filtered and rotated one.
Here is my code for rotating, filtering and saving the image:
First, taking the screenshot and saving it:
public void getScreenshot()
{
try
{
Process sh = Runtime.getRuntime().exec("su", null, null);
OutputStream os = sh.getOutputStream();
os.write(("/system/bin/screencap -p " + _path).getBytes("ASCII"));
os.flush();
os.close();
sh.waitFor();
onPhotoTaken();
Toast.makeText(this, "Screenshot taken", Toast.LENGTH_SHORT).show();
}
catch (IOException e)
{
System.out.println("IOException");
}
catch (InterruptedException e)
{
System.out.println("InterruptedException");
}
}
Then, rotate the image:
protected void onPhotoTaken() {
    _taken = true;
    BitmapFactory.Options options = new BitmapFactory.Options();
    options.inSampleSize = 4;
    Bitmap bitmap = BitmapFactory.decodeFile(_path, options);
    try {
        ExifInterface exif = new ExifInterface(_path);
        int exifOrientation = exif.getAttributeInt(
                ExifInterface.TAG_ORIENTATION,
                ExifInterface.ORIENTATION_NORMAL);
        Log.v(TAG, "Orient: " + exifOrientation);
        int rotate = 0;
        switch (exifOrientation) {
            case ExifInterface.ORIENTATION_ROTATE_90:
                rotate = 90;
                break;
            case ExifInterface.ORIENTATION_ROTATE_180:
                rotate = 180;
                break;
            case ExifInterface.ORIENTATION_ROTATE_270:
                rotate = 270;
                break;
        }
        Log.v(TAG, "Rotation: " + rotate);
        if (rotate != 0) {
            // Getting width & height of the given image.
            int w = bitmap.getWidth();
            int h = bitmap.getHeight();
            // Setting pre rotate
            Matrix mtx = new Matrix();
            mtx.preRotate(rotate);
            // Rotating Bitmap
            bitmap = Bitmap.createBitmap(bitmap, 0, 0, w, h, mtx, false);
        }
        // Convert to ARGB_8888, required by tess
        bitmap = bitmap.copy(Bitmap.Config.ARGB_8888, true);
    } catch (IOException e) {
        Log.e(TAG, "Couldn't correct orientation: " + e.toString());
    }
    // _image.setImageBitmap( bitmap );
    setImageFilters(bitmap);
}
Then, filter the image:
public void setImageFilters(Bitmap bmpOriginal)
{
//Start by cropping image
Bitmap croppedBitmap = ThumbnailUtils.extractThumbnail(bmpOriginal, 1080, 1420);
//Then convert to grayscale
int width, height;
height = 1420;
width = 1080;
Bitmap bmpGrayscale = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
Canvas c = new Canvas(bmpGrayscale);
Paint paint = new Paint();
ColorMatrix cm = new ColorMatrix();
cm.setSaturation(0);
ColorMatrixColorFilter f = new ColorMatrixColorFilter(cm);
paint.setColorFilter(f);
c.drawBitmap(croppedBitmap, 0, 0, paint);
//Finally, sharpen the image
double weight = 11;
double[][] sharpConfig = new double[][]
{
{ 0 , -2 , 0 },
{ -2, weight, -2 },
{ 0 , -2 , 0 }
};
ConvolutionMatrix convMatrix = new ConvolutionMatrix(3);
convMatrix.applyConfig(sharpConfig);
convMatrix.Factor = weight - 8;
Bitmap filteredBitmap = ConvolutionMatrix.computeConvolution3x3(bmpGrayscale, convMatrix);
//Start Optical Character Recognition
startOCR(filteredBitmap);
//Save filtered image
saveFiltered(filteredBitmap);
}
Then, saving the filtered and rotated image:
public void saveFiltered(Bitmap filteredBmp) {
try {
ByteArrayOutputStream bytes = new ByteArrayOutputStream();
filteredBmp.compress(Bitmap.CompressFormat.JPEG, 20, bytes);
//You can create a new file name "test.jpg" in sdcard folder.
File f = new File("/sdcard/SimpleAndroidOCR/ocrgray.jpg");
f.createNewFile();
//Write the bytes in file
FileOutputStream fo = new FileOutputStream(f);
fo.write(bytes.toByteArray());
//Remember to close the FileOutputStream
fo.close();
} catch (Exception e) {
e.printStackTrace();
}
}
Thanks heaps to anyone taking the time to help.
It was actually in my onPhotoTaken method. After taking and saving the screenshot in getScreenshot(), I read the file back from where it was saved and then filter it. I changed this line in the onPhotoTaken method from
options.inSampleSize = 4 to options.inSampleSize = 1
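In other words, the decode step in onPhotoTaken() becomes (everything else unchanged):

BitmapFactory.Options options = new BitmapFactory.Options();
options.inSampleSize = 1;   // decode at full resolution instead of 1/4 width and height
Bitmap bitmap = BitmapFactory.decodeFile(_path, options);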
It does look like the JPEG compression is messing the image up. Try using a format better suited to images with sharp edges, such as text. I would recommend PNG or even GIF. You could also store the uncompressed BMP.
Jpeg compression works by exploiting the fact that in most pictures (nature, people, objects), sharp edges are not that visible to the human eye. This makes it really bad for storing sharp edged content, such as text.
Also, your image filter is effectively removing the anti-aliasing of the image, which further decreases the perceived image quality. That might be what you want to do, however, since it might make OCR easier.
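As a concrete example, the question's saveFiltered() only needs two small changes to store a lossless PNG instead of a quality-20 JPEG (same path, different extension; a sketch, not tested against the rest of the app):

public void saveFiltered(Bitmap filteredBmp) {
    try {
        File f = new File("/sdcard/SimpleAndroidOCR/ocrgray.png");
        f.createNewFile();
        FileOutputStream fo = new FileOutputStream(f);
        // PNG is lossless, so the quality argument is ignored
        filteredBmp.compress(Bitmap.CompressFormat.PNG, 100, fo);
        fo.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}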
I also missed the sampling size due to the images you uploaded being the same size here on the site. From the Android documentation:
If set to a value > 1, requests the decoder to subsample the original
image, returning a smaller image to save memory. The sample size is
the number of pixels in either dimension that correspond to a single
pixel in the decoded bitmap. For example, inSampleSize == 4 returns an
image that is 1/4 the width/height of the original, and 1/16 the
number of pixels. Any value <= 1 is treated the same as 1. Note: the
decoder uses a final value based on powers of 2, any other value will
be rounded down to the nearest power of 2.
Setting options.inSampleSize = 4; to 1 instead will increase the quality.
I have been augmenting the QR scanning library ZXing to save a photo instantly upon scan. I was advised to do so within the onPreviewFrame method in PreviewCallback.java, like this:
public void onPreviewFrame(byte[] data, Camera camera) {
    Point cameraResolution = configManager.getCameraResolution();
    Handler thePreviewHandler = previewHandler;
    YuvImage im = new YuvImage(data, ImageFormat.NV21, 1200, 800, null);
    Rect r = new Rect(0, 0, 1200, 800);
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    im.compressToJpeg(r, 50, baos);
    try {
        FileOutputStream output = new FileOutputStream("/sdcard/test_jpg.jpg");
        output.write(baos.toByteArray());
        output.flush();
        output.close();
        System.out.println("Attempting to save file");
        System.out.println(data);
    } catch (FileNotFoundException e) {
        System.out.println("Saving to file failed");
    } catch (IOException e) {
        System.out.println("Saving to file failed");
    }
    if (cameraResolution != null && thePreviewHandler != null) {
        Message message = thePreviewHandler.obtainMessage(previewMessage, cameraResolution.x,
                cameraResolution.y, data);
        message.sendToTarget();
        previewHandler = null;
    } else {
        Log.d(TAG, "Got preview callback, but no handler or resolution available");
    }
}
The result of running this code is a corrupt image at the set file path. I believe this is because the code runs on every frame. Is there a way to limit this to every second or so, if that would allow the full image to save, or is there a way to make the image save only once a scan has completed?
I have a less favourable working alternative, in that I can successfully save the black and white image that is shown upon scan; colour is the preferable option of course.
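On the first point, limiting how often the frame is written, a minimal sketch: keep a timestamp (or a boolean you set from the decode-success handler) and skip the save unless enough time has passed. The field name is mine:

private long lastSavedMs = 0;

public void onPreviewFrame(byte[] data, Camera camera) {
    long now = System.currentTimeMillis();
    if (now - lastSavedMs > 1000) {      // write at most one frame per second
        lastSavedMs = now;
        // ... compress and write the frame as in the code above ...
    }
    // always forward the frame to the ZXing preview handler as before
}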
Update: Code changed to (in theory) accommodate camera resolution on any device. Image is still corrupt.
public void onPreviewFrame(byte[] data, Camera camera) {
    Point cameraResolution = configManager.getCameraResolution();
    Handler thePreviewHandler = previewHandler;
    android.hardware.Camera.Parameters parameters = camera.getParameters();
    android.hardware.Camera.Size size = parameters.getPictureSize();
    int height = size.height;
    int width = size.width;
    YuvImage im = new YuvImage(data, ImageFormat.NV21, width, height, null);
    Rect r = new Rect(0, 0, width, height);
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    im.compressToJpeg(r, 50, baos);
    try {
        FileOutputStream output = new FileOutputStream("/sdcard/test_jpg.jpg");
        output.write(baos.toByteArray());
        output.flush();
        output.close();
        System.out.println("Attempting to save file");
        System.out.println(data);
    } catch (FileNotFoundException e) {
        System.out.println("Saving to file failed");
    } catch (IOException e) {
        System.out.println("Saving to file failed");
    }
    if (cameraResolution != null && thePreviewHandler != null) {
        Message message = thePreviewHandler.obtainMessage(previewMessage, cameraResolution.x,
                cameraResolution.y, data);
        message.sendToTarget();
        previewHandler = null;
    } else {
        Log.d(TAG, "Got preview callback, but no handler or resolution available");
    }
}
I have checked that the width and height variables are correctly set to the Nexus 7's camera resolution of 1280x960. I am confident the issue comes from attempting to save the image every frame, as "Attempting to save file" appears rapidly in logcat, several times a second. It may also be worth noting that the corrupt image saved is square(ish).
Thanks in advance.
1200 x 800? Are you sure this is your preview size? Check your parameters. It's probably 1280 x 720.
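That matches the symptoms: the preview callback delivers frames at the camera's preview size, not the picture size, so getPictureSize() in the updated code hands YuvImage the wrong dimensions and the decoded JPEG comes out skewed or corrupt. A sketch of the two changes that usually sort this out, plus a one-shot guard so only a single frame is written (the boolean field is mine):

private volatile boolean frameSaved = false;

public void onPreviewFrame(byte[] data, Camera camera) {
    Camera.Size previewSize = camera.getParameters().getPreviewSize();   // not getPictureSize()
    int width = previewSize.width;
    int height = previewSize.height;

    if (!frameSaved) {
        frameSaved = true;   // write only the first frame
        YuvImage im = new YuvImage(data, ImageFormat.NV21, width, height, null);
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        im.compressToJpeg(new Rect(0, 0, width, height), 50, baos);
        try {
            FileOutputStream output = new FileOutputStream("/sdcard/test_jpg.jpg");
            output.write(baos.toByteArray());
            output.close();
        } catch (IOException e) {
            Log.e(TAG, "Saving preview frame failed", e);
        }
    }
    // ... hand the frame to the ZXing preview handler exactly as before ...
}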