I have a camera that I use to take pictures, and my problem is in the onPictureTaken callback:
public Camera.PictureCallback jpeghandler = new Camera.PictureCallback() {
    public void onPictureTaken(byte[] data, Camera camera) {
        Options options = new BitmapFactory.Options();
        options.inScaled = false;
        options.inDither = false;
        options.inSampleSize = 2;
        options.inPreferredConfig = Bitmap.Config.ARGB_8888;
        photo = BitmapFactory.decodeByteArray(data, 0, data.length, options);

        Matrix matrix = new Matrix();
        matrix.postRotate(90);
        Bitmap rotate_bitmap = Bitmap.createBitmap(photo, 0, 0, photo.getWidth(), photo.getHeight(), matrix, true);
        photo.recycle();
        photo = Bitmap.createBitmap(rotate_bitmap, 0, rotate_bitmap.getHeight() / 2 - rotate_bitmap.getWidth() / 2,
                rotate_bitmap.getWidth(), rotate_bitmap.getWidth());

        Intent resultIntent = new Intent();
        String image_url = MediaStore.Images.Media.insertImage(getContentResolver(), photo, "photo", null);
        resultIntent.putExtra("Photo_Taken", image_url);
        setResult(Activity.RESULT_OK, resultIntent);

        photo.recycle();
        rotate_bitmap.recycle();
        finish();
    }
};
Another activity opens this one; I take a picture, this activity closes, and the location of the image is passed back to the first activity. The problem is that when I reopen this activity to take another picture, I get an Out Of Memory error on this line:
Bitmap rotate_bitmap = Bitmap.createBitmap(photo , 0, 0, photo.getWidth(), photo.getHeight(), matrix, true);
What is wrong and what is happening? The photo size is always under 25 MB, yet on the second photo I get this: Out of memory on a 13128976-byte allocation.
Thank you!
Your heap gets full: (image data) + (other data) > (heap size). I believe you have two instances of your photo stored on the heap: when the second activity is launched, data (including the photo) from the first activity has not yet been cleared from memory.
If you're using Eclipse, I strongly recommend installing the Memory Analyzer (MAT).
Check my question for more info.
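Not a complete fix, but a rough sketch of how the callback in the question could decode at a lower resolution and keep only one intermediate bitmap alive at a time ("data" is the byte[] from onPictureTaken; the inSampleSize of 4 is just an example value, not something from the original post):

// Sketch only (not a drop-in replacement): decode smaller and do the rotate +
// centered square crop in ONE createBitmap call, so only a single intermediate
// bitmap lives on the heap at a time.
BitmapFactory.Options options = new BitmapFactory.Options();
options.inSampleSize = 4; // 1/4 of the width and height, roughly 1/16 of the memory
Bitmap decoded = BitmapFactory.decodeByteArray(data, 0, data.length, options);

Matrix matrix = new Matrix();
matrix.postRotate(90);
int side = Math.min(decoded.getWidth(), decoded.getHeight());
int x = (decoded.getWidth() - side) / 2;
int y = (decoded.getHeight() - side) / 2;
// Centered square crop and rotation in a single step.
Bitmap squarePhoto = Bitmap.createBitmap(decoded, x, y, side, side, matrix, true);
if (squarePhoto != decoded) {
    decoded.recycle(); // free the full decode as soon as it is no longer needed
}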
Related
I have a problem with loading a large image.
I have to make a map/background with a size of 3556 x 2000 pixels.
I tried this:
https://developer.android.com/topic/performance/graphics/load-bitmap.html
But it does not seem to work properly for me (exception: out of memory).
This is my background:
scr.hu/8p0mdz - Screenshooter
I marked with a black square the area that is visible to the user on the phone. Of course, the user can zoom the visible area in or out.
I can't use libgdx; I want to use only Android libraries. I have no idea how I should start. I'm not asking for code (though I will gladly accept it), but I want to find out what I should start with.
Other images (buildings) will be drawn on this background. When the game starts, resources need to be loaded into memory. In libgdx I could use AssetManager. In my case, if I use
https://developer.android.com/reference/android/content/res/AssetManager.html
would that be enough?
I hope you understand my problem.
This is not a trivial problem, so I think there is no easy solution. But there are a few things that could help:
You can control the amount of memory allocated for your bitmap:
BitmapFactory.Options options = new BitmapFactory.Options();
// options.inPreferredConfig = Bitmap.Config.ARGB_8888; // 4 bytes per pixel
options.inPreferredConfig = Bitmap.Config.RGB_565;      // 2 bytes per pixel
// RGB_565 requires far less memory than ARGB_8888
There is the BitmapRegionDecoder class, which loads just a part of a bitmap:
BitmapRegionDecoder decoder = BitmapRegionDecoder.newInstance("path", false);
BitmapFactory.Options options = new BitmapFactory.Options();
options.inPreferredConfig = Bitmap.Config.RGB_565;
Bitmap partOfBitmap = decoder.decodeRegion(new Rect(0, 0, 100, 100), options);
Using BitmapRegionDecoder you can build a custom View that handles user scroll events, loads only the visible image regions into memory, and destroys the ones that are no longer visible.
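As a rough sketch of that idea only (not production code: there is no zoom or tile cache, decoding happens on the UI thread, and the asset name "junglemap.png" is an assumption):

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.BitmapRegionDecoder;
import android.graphics.Canvas;
import android.graphics.Rect;
import android.view.MotionEvent;
import android.view.View;
import java.io.IOException;
import java.io.InputStream;

// A View that keeps only the currently visible region of a huge bitmap in memory.
public class RegionView extends View {
    private BitmapRegionDecoder decoder;
    private final BitmapFactory.Options options = new BitmapFactory.Options();
    private Bitmap visibleRegion;
    private int offsetX, offsetY;
    private float lastX, lastY;

    public RegionView(Context context) {
        super(context);
        options.inPreferredConfig = Bitmap.Config.RGB_565;
        try {
            InputStream is = context.getAssets().open("junglemap.png");
            decoder = BitmapRegionDecoder.newInstance(is, false);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Override
    protected void onDraw(Canvas canvas) {
        if (decoder == null) return;
        // Clamp the window to the image bounds and decode just that rectangle.
        int w = Math.min(getWidth(), decoder.getWidth());
        int h = Math.min(getHeight(), decoder.getHeight());
        offsetX = Math.max(0, Math.min(offsetX, decoder.getWidth() - w));
        offsetY = Math.max(0, Math.min(offsetY, decoder.getHeight() - h));
        if (visibleRegion != null) visibleRegion.recycle();
        visibleRegion = decoder.decodeRegion(
                new Rect(offsetX, offsetY, offsetX + w, offsetY + h), options);
        canvas.drawBitmap(visibleRegion, 0, 0, null);
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        if (event.getAction() == MotionEvent.ACTION_DOWN) {
            lastX = event.getX();
            lastY = event.getY();
        } else if (event.getAction() == MotionEvent.ACTION_MOVE) {
            offsetX += (int) (lastX - event.getX());
            offsetY += (int) (lastY - event.getY());
            lastX = event.getX();
            lastY = event.getY();
            invalidate(); // redraw with the new visible window
        }
        return true;
    }
}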
I tried to solve this as suggested above:
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_planet);

    int WIDTH = 0;
    int HEIGHT = 0;
    backgroundJungle = (ImageView) findViewById(R.id.backgroundJungle);
    InputStream is = null;
    try {
        // Decode the whole drawable, then re-encode it into a stream for BitmapRegionDecoder.
        Drawable drawable = getResources().getDrawable(R.drawable.junglemap);
        BitmapDrawable bitmapDrawable = (BitmapDrawable) drawable;
        Bitmap bitmap = bitmapDrawable.getBitmap();
        WIDTH = bitmap.getWidth();
        HEIGHT = bitmap.getHeight();
        ByteArrayOutputStream stream = new ByteArrayOutputStream();
        bitmap.compress(Bitmap.CompressFormat.PNG, 100, stream);
        is = new ByteArrayInputStream(stream.toByteArray());
    } catch (Exception ex) {
        ManagerException(ex);
    }
    try {
        if (is != null) {
            BitmapRegionDecoder decoder = BitmapRegionDecoder.newInstance(is, false);
            BitmapFactory.Options options = new BitmapFactory.Options();
            options.inPreferredConfig = Bitmap.Config.RGB_565;
            Bitmap partOfBitmap = decoder.decodeRegion(new Rect(0, 0, 2500, 2500), options);
            backgroundJungle.setImageBitmap(partOfBitmap);
            Toast.makeText(getApplicationContext(), WIDTH + " " + HEIGHT, Toast.LENGTH_SHORT).show();
        }
    } catch (IOException ex) {
        ManagerException(ex);
    }
}
But the decoded bitmap is 9335 x 5250, even though my image is 3556 x 2000.
I have been stuck on this problem for a couple of days. I want to make an Android app that takes a picture and extracts HOG features of that image for future processing. The problem is that the code below always returns the HOG descriptors with zero values.
@Override
public void onPictureTaken(byte[] data, Camera camera) {
    Log.i(TAG, "Saving a bitmap to file");
    // The camera preview was automatically stopped. Start it again.
    mCamera.startPreview();
    mCamera.setPreviewCallback(this);
    this.disableView();

    Bitmap bitmapPicture = BitmapFactory.decodeByteArray(data, 0, data.length);
    myImage = new Mat(bitmapPicture.getWidth(), bitmapPicture.getHeight(), CvType.CV_8UC1);
    Utils.bitmapToMat(bitmapPicture, myImage);
    Bitmap bm = Bitmap.createBitmap(myImage.cols(), myImage.rows(), Bitmap.Config.ARGB_8888);
    Utils.matToBitmap(myImage.clone(), bm);

    // find the imageview and draw it!
    ImageView iv = (ImageView) getRootView().findViewById(R.id.imageView);
    this.setVisibility(SurfaceView.GONE);
    iv.setVisibility(ImageView.VISIBLE);

    Mat forHOGim = new Mat();
    org.opencv.core.Size sz = new org.opencv.core.Size(64, 128);
    Imgproc.resize(myImage, myImage, sz);
    Imgproc.cvtColor(myImage, forHOGim, Imgproc.COLOR_RGB2GRAY);
    //forHOGim = myImage.clone();

    MatOfFloat descriptors = new MatOfFloat(); // an empty vector of descriptors
    org.opencv.core.Size winStride = new org.opencv.core.Size(64 / 2, 128 / 2); // 50% overlap in the sliding window
    org.opencv.core.Size padding = new org.opencv.core.Size(0, 0); // no padding around the image
    MatOfPoint locations = new MatOfPoint(); // an empty vector of locations, so perform full search
    //HOGDescriptor hog = new HOGDescriptor();
    HOGDescriptor hog = new HOGDescriptor(sz, new org.opencv.core.Size(16, 16), new org.opencv.core.Size(8, 8),
            new org.opencv.core.Size(8, 8), 9);
    Log.i(TAG, "Constructed");

    hog.compute(forHOGim, descriptors, new org.opencv.core.Size(16, 16), padding, locations);
    Log.i(TAG, "Computed");
    Log.i(TAG, String.valueOf(hog.getDescriptorSize()) + " " + descriptors.size());
    Log.i(TAG, String.valueOf(descriptors.get(12, 0)[0]));

    double dd = 0.0;
    for (int i = 0; i < 3780; i++) {
        if (descriptors.get(i, 0)[0] != dd) Log.i(TAG, "NOT ZERO");
    }

    Bitmap bm2 = Bitmap.createBitmap(forHOGim.cols(), forHOGim.rows(), Bitmap.Config.ARGB_8888);
    Utils.matToBitmap(forHOGim, bm2);
    iv.setImageBitmap(bm2);
}
So in logcat I never get the NOT ZERO message. Whatever changes I make to this code, I always get zeros in the descriptors MatOfFloat. And the strange part is that if I uncomment HOGDescriptor hog = new HOGDescriptor(); and use that one instead of the one I am using now, my application crashes.
The rest of the code runs fine, the picture is always taken and displayed on my image view as I expect.
Any help will be appreciated.
Thanks in advance.
The problem was inside the library. When I ran the same code with OpenCV 2.4.13 for Linux instead of Android, it worked as expected. So I hope they will fix the problems with HOGDescriptor in OpenCV4Android.
I am making an OCR app for Android, that will take a screenshot of some text, recognise it and search a key word on Google. If you haven't already realized, I'm trying to make a "Google Now on Tap" clone.
To make the OCR work better, I first rotate the image and then filter it: I crop out the status bar and the navigation bar, convert the image to grayscale, and then sharpen it.
But the image after filtering is extremely pixelated, and this greatly affects OCR accuracy.
Here are the images, before and after (just of an IFTTT email I got)
As you can see, the before image is much higher quality than the filtered and rotated one.
Here is my code for rotating, filtering and saving the image:
First, take the screenshot and then save it:
public void getScreenshot()
{
    try
    {
        Process sh = Runtime.getRuntime().exec("su", null, null);
        OutputStream os = sh.getOutputStream();
        os.write(("/system/bin/screencap -p " + _path).getBytes("ASCII"));
        os.flush();
        os.close();
        sh.waitFor();

        onPhotoTaken();
        Toast.makeText(this, "Screenshot taken", Toast.LENGTH_SHORT).show();
    }
    catch (IOException e)
    {
        System.out.println("IOException");
    }
    catch (InterruptedException e)
    {
        System.out.println("InterruptedException");
    }
}
Then, rotate the image:
protected void onPhotoTaken() {
    _taken = true;

    BitmapFactory.Options options = new BitmapFactory.Options();
    options.inSampleSize = 4;
    Bitmap bitmap = BitmapFactory.decodeFile(_path, options);

    try {
        ExifInterface exif = new ExifInterface(_path);
        int exifOrientation = exif.getAttributeInt(
                ExifInterface.TAG_ORIENTATION,
                ExifInterface.ORIENTATION_NORMAL);
        Log.v(TAG, "Orient: " + exifOrientation);

        int rotate = 0;
        switch (exifOrientation) {
            case ExifInterface.ORIENTATION_ROTATE_90:
                rotate = 90;
                break;
            case ExifInterface.ORIENTATION_ROTATE_180:
                rotate = 180;
                break;
            case ExifInterface.ORIENTATION_ROTATE_270:
                rotate = 270;
                break;
        }
        Log.v(TAG, "Rotation: " + rotate);

        if (rotate != 0) {
            // Getting width & height of the given image.
            int w = bitmap.getWidth();
            int h = bitmap.getHeight();

            // Setting pre rotate
            Matrix mtx = new Matrix();
            mtx.preRotate(rotate);

            // Rotating Bitmap
            bitmap = Bitmap.createBitmap(bitmap, 0, 0, w, h, mtx, false);
        }

        // Convert to ARGB_8888, required by tess
        bitmap = bitmap.copy(Bitmap.Config.ARGB_8888, true);
    } catch (IOException e) {
        Log.e(TAG, "Couldn't correct orientation: " + e.toString());
    }

    // _image.setImageBitmap( bitmap );
    setImageFilters(bitmap);
}
Then, filter the image:
public void setImageFilters(Bitmap bmpOriginal)
{
    // Start by cropping image
    Bitmap croppedBitmap = ThumbnailUtils.extractThumbnail(bmpOriginal, 1080, 1420);

    // Then convert to grayscale
    int width, height;
    height = 1420;
    width = 1080;
    Bitmap bmpGrayscale = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    Canvas c = new Canvas(bmpGrayscale);
    Paint paint = new Paint();
    ColorMatrix cm = new ColorMatrix();
    cm.setSaturation(0);
    ColorMatrixColorFilter f = new ColorMatrixColorFilter(cm);
    paint.setColorFilter(f);
    c.drawBitmap(croppedBitmap, 0, 0, paint);

    // Finally, sharpen the image
    double weight = 11;
    double[][] sharpConfig = new double[][]
    {
        { 0 , -2 , 0 },
        { -2, weight, -2 },
        { 0 , -2 , 0 }
    };
    ConvolutionMatrix convMatrix = new ConvolutionMatrix(3);
    convMatrix.applyConfig(sharpConfig);
    convMatrix.Factor = weight - 8;
    Bitmap filteredBitmap = ConvolutionMatrix.computeConvolution3x3(bmpGrayscale, convMatrix);

    // Start Optical Character Recognition
    startOCR(filteredBitmap);

    // Save filtered image
    saveFiltered(filteredBitmap);
}
Then, saving the filtered and rotated image:
public void saveFiltered(Bitmap filteredBmp) {
    try {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        filteredBmp.compress(Bitmap.CompressFormat.JPEG, 20, bytes);

        // You can create a new file named "test.jpg" in the sdcard folder.
        File f = new File("/sdcard/SimpleAndroidOCR/ocrgray.jpg");
        f.createNewFile();

        // Write the bytes to the file
        FileOutputStream fo = new FileOutputStream(f);
        fo.write(bytes.toByteArray());

        // Remember to close the FileOutputStream
        fo.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Thanks heaps to anyone taking the time to help.
The problem was actually in my onPhotoTaken method. After taking and saving the screenshot in getScreenshot(), I read the file back from the location it was saved to and then filter it. I changed this line in the onPhotoTaken method:
options.inSampleSize = 4
to
options.inSampleSize = 1
It does look like the JPEG compression is messing the image up. Try using a format better suited to images with sharp edges, such as text. I would recommend PNG or even GIF. You could also store an uncompressed BMP.
JPEG compression works by exploiting the fact that in most pictures (nature, people, objects), sharp edges are not that visible to the human eye. This makes it really bad for storing sharp-edged content such as text.
Also, your image filter is effectively removing the anti-aliasing of the image, which further decreases the perceived image quality. That might be what you want, however, since it might make OCR easier.
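For example, a minimal sketch of the saveFiltered step writing a lossless PNG instead of the quality-20 JPEG (this would go inside the same try/catch as above; the path mirrors the question's hard-coded one, with a .png extension):

// PNG is lossless, so sharp text edges survive; the quality argument (100) is
// ignored by the PNG encoder.
File f = new File("/sdcard/SimpleAndroidOCR/ocrgray.png");
f.createNewFile();
FileOutputStream fo = new FileOutputStream(f);
filteredBmp.compress(Bitmap.CompressFormat.PNG, 100, fo);
fo.close();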
I also missed the sampling size due to the images you uploaded being the same size here on the site. From the Android documentation:
If set to a value > 1, requests the decoder to subsample the original image, returning a smaller image to save memory. The sample size is the number of pixels in either dimension that correspond to a single pixel in the decoded bitmap. For example, inSampleSize == 4 returns an image that is 1/4 the width/height of the original, and 1/16 the number of pixels. Any value <= 1 is treated the same as 1. Note: the decoder uses a final value based on powers of 2; any other value will be rounded down to the nearest power of 2.
Changing options.inSampleSize from 4 to 1 will increase the quality.
I have quite the annoying problem. I'm building an app where one can share photos. On the SurfaceView where you take the actual photo, the resolution is great. However, when I retrieve that image and display it in a ListView using Picasso, the resolution goes to crap. The pixelation is real. Is there anything that I'm doing horrendously wrong to cause this? The first code snippet below is where I actually save the photo, and the one below that is my getItemView() method in my adapter for the listview. Thanks in advance.
Note that the "photo" variable you see in my code is a Parse subclass I've created to make it easier working with data associated with each photo. I think you can safely ignore it.
EDIT:
SurfaceView of Camera:
Note that I attempt to set the camera parameters to the highest quality allowed. Unfortunately, when I log size.width and size.height, I only get around 176x144. Is there a way to get a higher resolution from the supported camera sizes themselves?
camera.setDisplayOrientation(90);
Parameters parameters = camera.getParameters();
parameters.set("jpeg-quality", 70);
parameters.setPictureFormat(ImageFormat.JPEG);
List<Camera.Size> sizes = parameters.getSupportedPictureSizes();
Size size = sizes.get(Integer.valueOf((sizes.size()-1)));
parameters.setPictureSize(size.width, size.height);
camera.setParameters(parameters);
camera.setDisplayOrientation(90);
List<Size> sizes2 = parameters.getSupportedPreviewSizes();
Size size2 = sizes.get(0);
parameters.setPreviewSize(size2.width, size2.height);
camera.setPreviewDisplay(holder);
camera.startPreview();
Saving the photo:
// Freeze camera
camera.stopPreview();
// Resize photo
Bitmap mealImage = BitmapFactory.decodeByteArray(data, 0, data.length);
Bitmap mealImageScaled = Bitmap.createScaledBitmap(mealImage, 640, 640, false);
// Override Android default landscape orientation and save portrait
Matrix matrix = new Matrix();
matrix.postRotate(90);
Bitmap rotatedScaledMealImage = Bitmap.createBitmap(mealImageScaled, 0, 0,
        mealImageScaled.getWidth(), mealImageScaled.getHeight(), matrix, true);
ByteArrayOutputStream bos = new ByteArrayOutputStream();
rotatedScaledMealImage.compress(Bitmap.CompressFormat.JPEG, 100, bos);
byte[] scaledData = bos.toByteArray();
// Save the scaled image to Parse with the date and time as its file name.
DateTime currentTime = new DateTime();
DateTimeFormatter fmt = DateTimeFormat.forPattern("HH MM SS");
photoFile = new ParseFile(currentTime.toString(fmt), scaledData);
photo.setPhotoFile(photoFile);
Displaying it:
final ParseImageView photoView = holder.photoView;
ParseFile photoFile = photo.getParseFile("photo");
Picasso.with(getContext())
        .load(photoFile.getUrl())
        .into(photoView, new Callback() {
            @Override
            public void onError() {
            }

            @Override
            public void onSuccess() {
            }
        });
The problem is not with Picasso.
It is because of this line of code:
parameters.set("jpeg-quality", 70);
and this:
List<Size> sizes2 = parameters.getSupportedPreviewSizes();
Size size2 = sizes.get(0);
When you set up the camera, you already turned the quality down to 70% (based on the Android documentation, the range of jpeg-quality is 0-100).
You also need to check whether the camera size is correct, because you are making an assumption with that code.
You can try this code to get the best preview size for your preferred width and height:
private Camera.Size getBestPreviewSize(int width, int height, Camera.Parameters parameters) {
    // Returns the largest supported preview size (by pixel count).
    Camera.Size bestSize = null;
    List<Camera.Size> sizeList = parameters.getSupportedPreviewSizes();
    bestSize = sizeList.get(0);

    for (int i = 1; i < sizeList.size(); i++) {
        if ((sizeList.get(i).width * sizeList.get(i).height) >
                (bestSize.width * bestSize.height)) {
            bestSize = sizeList.get(i);
        }
    }
    return bestSize;
}
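If it helps, here is a rough sketch of how that helper might be wired into the setup code from the question; surfaceWidth and surfaceHeight stand for your SurfaceView's measured size and are assumptions, not values from the original post:

// Sketch: keep the JPEG quality at 100, pick the largest supported picture size
// explicitly, and choose the preview size from the PREVIEW list via the helper above.
Camera.Parameters parameters = camera.getParameters();
parameters.set("jpeg-quality", 100);
parameters.setPictureFormat(ImageFormat.JPEG);

List<Camera.Size> pictureSizes = parameters.getSupportedPictureSizes();
Camera.Size bestPicture = pictureSizes.get(0);
for (Camera.Size s : pictureSizes) {
    if (s.width * s.height > bestPicture.width * bestPicture.height) {
        bestPicture = s; // largest picture size by pixel count
    }
}
parameters.setPictureSize(bestPicture.width, bestPicture.height);

// surfaceWidth/surfaceHeight: the SurfaceView's measured dimensions (assumed names).
Camera.Size bestPreview = getBestPreviewSize(surfaceWidth, surfaceHeight, parameters);
parameters.setPreviewSize(bestPreview.width, bestPreview.height);
camera.setParameters(parameters);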
I hope this answer helps you. If you have another question about it, you can ask me in the comments :)
Possible Duplicate:
OutOfMemoryError: bitmap size exceeds VM budget :- Android
I'm in the process of writing a program which uses images from the gallery and then displays them in an activity (one image per activity). However, I've been bumping into this error over and over again for three days straight without making any progress toward eliminating it:
07-25 11:43:36.197: ERROR/AndroidRuntime(346): java.lang.OutOfMemoryError: bitmap size exceeds VM budget
My code flow is as follows:
When the user presses a button, an intent is fired leading to the gallery:
Intent galleryIntent = new Intent(Intent.ACTION_GET_CONTENT);
galleryIntent.setType("image/*");
startActivityForResult(galleryIntent, 0);
Once the user has selected an image, it is presented in an ImageView:
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    android:orientation="vertical">

    <ImageView
        android:background="#ffffffff"
        android:id="@+id/image"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent"
        android:layout_gravity="center"
        android:maxWidth="250dip"
        android:maxHeight="250dip"
        android:adjustViewBounds="true"/>

</LinearLayout>
In the onActivityResult method I have:
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (resultCode == RESULT_OK) {
        switch (requestCode) {
            case 0: // Gallery
                String realPath = getRealPathFromURI(data.getData());
                File imgFile = new File(realPath);
                Bitmap myBitmap;
                try {
                    myBitmap = decodeFile(imgFile);
                    Bitmap rotatedBitmap = resolveOrientation(myBitmap);
                    img.setImageBitmap(rotatedBitmap);
                    OPTIONS_TYPE = 1;
                } catch (IOException e) {
                    e.printStackTrace();
                }
                insertImageInDB(realPath);
                break;
            case 1: // Camera
The decodeFile method is from here and the resolveOrientation method just wraps the bitmap into a matrix and turns it 90 degrees clockwise.
I really hope someone can help me resolve this matter.
It is because your bitmap is large, so reduce the image size manually or programmatically:
BitmapFactory.Options options = new BitmapFactory.Options();
options.inSampleSize = 8;
Bitmap preview_bitmap = BitmapFactory.decodeFile(mPathName, options);
Your GC does not run in time. Try decoding a subsampled version of your Bitmap:
BitmapFactory.Options buffer = new BitmapFactory.Options();
buffer.inSampleSize = 4;
Bitmap bmp = BitmapFactory.decodeFile(path, buffer);
There are many questions on Stack Overflow about "bitmap size exceeds VM budget", so first search for your issue, and only ask a question here if you can't find a solution.
The problem is that your bitmap is larger than the VM can handle. For example, from your code I can see that you are trying to put an image captured with the camera into an ImageView. Camera images are normally very large, which will obviously raise this error.
So, as others have suggested, you have to shrink your image, either by subsampling it or by converting it to a smaller resolution.
For example, if your ImageView is 100x100 in width and height, you can create a scaled bitmap so that the ImageView gets filled exactly. You can do it like this:
Bitmap newImage = Bitmap.createScaledBitmap(bm, 350, 300,true);
Or you can subsample it using the approach user hotveryspicy suggested.
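Not from the original answers, but a small sketch tying the two suggestions together: read only the image bounds first, pick a power-of-two inSampleSize, then scale to the view. realPath and img are the variable names from the question, and the 100x100 target is the example size mentioned above.

// Read only the dimensions first (no pixel data is allocated).
BitmapFactory.Options bounds = new BitmapFactory.Options();
bounds.inJustDecodeBounds = true;
BitmapFactory.decodeFile(realPath, bounds);

int targetW = 100, targetH = 100;
int sampleSize = 1;
while (bounds.outWidth / (sampleSize * 2) >= targetW
        && bounds.outHeight / (sampleSize * 2) >= targetH) {
    sampleSize *= 2; // the decoder rounds to powers of two anyway
}

// Decode roughly at the target size, then scale exactly to the ImageView.
BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inSampleSize = sampleSize;
Bitmap sampled = BitmapFactory.decodeFile(realPath, opts);
img.setImageBitmap(Bitmap.createScaledBitmap(sampled, targetW, targetH, true));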