I have quite the annoying problem. I'm building an app where one can share photos. On the SurfaceView where you take the actual photo, the resolution is great. However, when I retrieve that image and display it in a ListView using Picasso, the resolution goes to crap. The pixelation is real. Is there anything that I'm doing horrendously wrong to cause this? The first code snippet below is where I actually save the photo, and the one below that is my getItemView() method in my adapter for the listview. Thanks in advance.
Note that the "photo" variable you see in my code is a Parse subclass I've created to make it easier working with data associated with each photo. I think you can safely ignore it.
EDIT:
SurfaceView of Camera:
Note that I attempt to set the camera parameters to the highest quality allowed. Unfortunately, when I log size.width and size.height, I only get around 176x144. Is there a way to get a higher resolution from the supported camera sizes themselves?
camera.setDisplayOrientation(90);
Parameters parameters = camera.getParameters();
parameters.set("jpeg-quality", 70);
parameters.setPictureFormat(ImageFormat.JPEG);
List<Camera.Size> sizes = parameters.getSupportedPictureSizes();
Size size = sizes.get(Integer.valueOf((sizes.size()-1)));
parameters.setPictureSize(size.width, size.height);
camera.setParameters(parameters);
camera.setDisplayOrientation(90);
List<Size> sizes2 = parameters.getSupportedPreviewSizes();
Size size2 = sizes.get(0);
parameters.setPreviewSize(size2.width, size2.height);
camera.setPreviewDisplay(holder);
camera.startPreview();
Saving the photo:
// Freeze camera
camera.stopPreview();
// Resize photo
Bitmap mealImage = BitmapFactory.decodeByteArray(data, 0, data.length);
Bitmap mealImageScaled = Bitmap.createScaledBitmap(mealImage, 640, 640, false);
// Override Android default landscape orientation and save portrait
Matrix matrix = new Matrix();
matrix.postRotate(90);
Bitmap rotatedScaledMealImage = Bitmap.createBitmap(mealImageScaled, 0,
0, mealImageScaled.getWidth(), mealImageScaled.getHeight(),
matrix, true);
ByteArrayOutputStream bos = new ByteArrayOutputStream();
rotatedScaledMealImage.compress(Bitmap.CompressFormat.JPEG, 100, bos);
byte[] scaledData = bos.toByteArray();
// Save the scaled image to Parse with the date and time as its file name.
DateTime currentTime = new DateTime();
DateTimeFormatter fmt = DateTimeFormat.forPattern("HH MM SS");
photoFile = new ParseFile(currentTime.toString(fmt), scaledData);
photo.setPhotoFile(photoFile);
Displaying it:
final ParseImageView photoView = holder.photoView;
ParseFile photoFile = photo.getParseFile("photo");
Picasso.with(getContext())
.load(photoFile.getUrl())
.into(photoView, new Callback() {
@Override
public void onError() {
}
@Override
public void onSuccess() {
}
});
The problem is not with Picasso.
It is because of this line of code:
parameters.set("jpeg-quality", 70);
and this:
List<Size> sizes2 = parameters.getSupportedPreviewSizes();
Size size2 = sizes.get(0);
When you set up the camera, you already turned the quality down to 70% (according to the Android documentation, the valid range of jpeg-quality is 0-100).
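If full quality is wanted, that line can simply be raised; Camera.Parameters also has a typed setter for it (a minimal sketch):

// Keep JPEG output at maximum quality (valid range is 0-100).
parameters.setJpegQuality(100);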
You also need to check whether the camera size is correct, because that code makes an assumption: it reads index 0 of sizes (the supported picture sizes) instead of sizes2 (the supported preview sizes), and the supported-size lists are not guaranteed to be ordered the way you expect.
You can try this code to get the best preview size for your preferred width and height:
private Camera.Size getBestPreviewSize(int width, int height, Camera.Parameters parameters){
    Camera.Size bestSize = null;
    List<Camera.Size> sizeList = parameters.getSupportedPreviewSizes();
    bestSize = sizeList.get(0);
    // Walk the whole list and keep the size with the largest pixel area.
    for(int i = 1; i < sizeList.size(); i++){
        if((sizeList.get(i).width * sizeList.get(i).height) >
                (bestSize.width * bestSize.height)){
            bestSize = sizeList.get(i);
        }
    }
    return bestSize;
}
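A sketch of how it might plug into the setup code from the question, assuming the same camera and parameters variables (surfaceWidth and surfaceHeight are hypothetical placeholders for your preview dimensions):

// Pick the largest supported preview size instead of assuming index 0.
Camera.Size best = getBestPreviewSize(surfaceWidth, surfaceHeight, parameters);
parameters.setPreviewSize(best.width, best.height);
camera.setParameters(parameters);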
I hope this answer helps you. If you have another question about it, feel free to ask in the comments :)
Related
So I'm using the legacy Camera API (as far as I can tell) to get onPreviewFrame callbacks and then run a few machine learning models I have. I have confirmed that the machine learning models work when given a bitmap decoded from a picture taken via the onPictureTaken callback. Right now in the samples below, I am simply testing with ML Kit's barcode scanner as a base case, but my custom models seemed to work fine with the onPictureTaken callback as well.
From what I've gathered, using onPreviewFrame isn't necessarily the best way to do this, but for the sake of a quick sample (and learning experience) I decided to just go this route. Based on everything I've tried from others' solutions online, I can't seem to get anything to work properly. The code below returns null:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
// Log.d("onPreviewFrame bytes.length", String.valueOf(bytes.length));
// final Bitmap bmp = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
// Log.d("onPreviewFrame bmp.getHeight()", String.valueOf(bmp.getHeight()));
Camera.Parameters parameters = camera.getParameters();
int width = parameters.getPreviewSize().width;
int height = parameters.getPreviewSize().height;
Log.d("onPreviewFrame - width", String.valueOf(width));
Log.d("onPreviewFrame - height", String.valueOf(height));
Log.d("onPreviewFrame - parameters.getPreviewFormat()", String.valueOf(parameters.getPreviewFormat()));
YuvImage yuv = new YuvImage(data, parameters.getPreviewFormat(), width, height, null);
ByteArrayOutputStream out = new ByteArrayOutputStream();
yuv.compressToJpeg(new Rect(0, 0, width, height), 100, out);
//
// byte[] bytes = out.toByteArray();
// final Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
byte[] bytes = yuv.getYuvData();
final Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
extractBarcode(FirebaseVisionImage.fromBitmap(bitmap), bitmap);
}
Here's something else I tried:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
// Log.d("onPreviewFrame bytes.length", String.valueOf(bytes.length));
// final Bitmap bmp = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
// Log.d("onPreviewFrame bmp.getHeight()", String.valueOf(bmp.getHeight()));
Camera.Parameters parameters = camera.getParameters();
int width = parameters.getPreviewSize().width;
int height = parameters.getPreviewSize().height;
Log.d("onPreviewFrame - width", String.valueOf(width));
Log.d("onPreviewFrame - height", String.valueOf(height));
YuvImage yuv = new YuvImage(data, parameters.getPreviewFormat(), width, height, null);
ByteArrayOutputStream out = new ByteArrayOutputStream();
yuv.compressToJpeg(new Rect(0, 0, width, height), 100, out);
byte[] bytes = out.toByteArray();
final Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
extractBarcode(FirebaseVisionImage.fromBitmap(bitmap), bitmap);
}
Unfortunately I got this error:
ML Kit has detected that you seem to pass camera frames to the detector as a Bitmap object. This is inefficient. Please use YUV_420_888 format for camera2 API or NV21 format for (legacy) camera API and directly pass down the byte array to ML Kit.
with parameters.getPreviewFormat() returning 17, which is NV21. I also tried simply changing that to ImageFormat.YUV_420_888, but that resulted in the illegal argument exception below:
only support ImageFormat.NV21 and ImageFormat.YUY2 for now
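For reference, the warning is asking for the NV21 preview bytes to be handed to ML Kit directly instead of via a Bitmap. A minimal sketch of that path, assuming the legacy camera's default NV21 preview format (FirebaseVisionImageMetadata and fromByteArray are part of the firebase-ml-vision API):

// Describe the raw NV21 preview buffer for ML Kit.
FirebaseVisionImageMetadata metadata = new FirebaseVisionImageMetadata.Builder()
        .setFormat(FirebaseVisionImageMetadata.IMAGE_FORMAT_NV21)
        .setWidth(width)
        .setHeight(height)
        .setRotation(FirebaseVisionImageMetadata.ROTATION_0)
        .build();
// Pass the preview bytes straight in, with no Bitmap round-trip.
FirebaseVisionImage image = FirebaseVisionImage.fromByteArray(data, metadata);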
Instead of using the Camera API, try using CameraX. It's easy to use, and you can execute your code whenever a frame is received from the camera. While trying to integrate an ML model with the camera, I faced a similar error and then turned to CameraX.
Basically, we'll create an ImageAnalysis.Analyzer class through which we get the Image objects (frames). Using an extension function, we convert this Image object to a YuvImage.
You can follow this codelab to use CameraX to analyze frames. You will create a class that implements the ImageAnalysis.Analyzer interface.
class FrameAnalyser() : ImageAnalysis.Analyzer {
override fun analyze(image: ImageProxy?, rotationDegrees: Int) {
val yuvImage = image?.image?.toYuv() // The extension function
}
}
Create the extension function that converts the Image into a YuvImage:
private fun Image.toYuv(): YuvImage {
    val yBuffer = planes[0].buffer
    val uBuffer = planes[1].buffer
    val vBuffer = planes[2].buffer
    val ySize = yBuffer.remaining()
    val uSize = uBuffer.remaining()
    val vSize = vBuffer.remaining()
    // NV21 layout: the Y plane first, then interleaved VU. Copying the V buffer
    // followed by the U buffer relies on the chroma planes being interleaved
    // (pixel stride 2), which is the common case for camera Images.
    val nv21 = ByteArray(ySize + uSize + vSize)
    yBuffer.get(nv21, 0, ySize)
    vBuffer.get(nv21, ySize, vSize)
    uBuffer.get(nv21, ySize + vSize, uSize)
    return YuvImage(nv21, ImageFormat.NV21, this.width, this.height, null)
}
You can change the YUV Image format as required. Refer to these docs.
Instead of directly passing FirebaseVisionImage:
extractBarcode(FirebaseVisionImage.fromBitmap(bitmap), bitmap);
you can do it like this:
var bitmap = toARGBBitmap(ocrBitmap)
extractBarcode(FirebaseVisionImage.fromBitmap(bitmap), bitmap);
private fun toARGBBitmap(img: Bitmap): Bitmap {
return img.copy(Bitmap.Config.ARGB_8888, true)
}
You can try this:)
I have a problem with loading a large image.
I have to make a map/background with a size of 3556 x 2000 pixels.
I tried this:
https://developer.android.com/topic/performance/graphics/load-bitmap.html
But it looks like it does not work properly for me (exception: out of memory).
This is my background:
scr.hu/8p0mdz - Screenshooter
I marked with a black square the area that is visible to the user on the phone. Of course, the user can zoom the visible area in or out.
I can't use libgdx. I want to use only Android libraries. I have no idea how I should start my work. I'm not asking for code (though I will gladly accept it), but I want to find out what I should start with.
Other images (buildings) will be drawn on this background. When the game starts, the resources need to be loaded into memory. In libgdx I can use AssetManager. In my case, if I use
https://developer.android.com/reference/android/content/res/AssetManager.html
will it be enough?
I hope you understand my problem.
This is not a trivial problem, so I think there is no easy solution. But there are a few things that could help:
You can control the amount of memory allocated for your bitmap:
BitmapFactory.Options options = new BitmapFactory.Options();
// options.inPreferredConfig = Bitmap.Config.ARGB_8888; // the default, 4 bytes per pixel
options.inPreferredConfig = Bitmap.Config.RGB_565; // 2 bytes per pixel
// RGB_565 requires far less memory than ARGB_8888: for the 3556 x 2000 map,
// about 3556 * 2000 * 2 ≈ 14.2 MB instead of ≈ 28.4 MB.
There is the BitmapRegionDecoder class, which loads just a part of a bitmap:
BitmapRegionDecoder decoder = BitmapRegionDecoder.newInstance("path", false); // declares IOException
BitmapFactory.Options options = new BitmapFactory.Options();
options.inPreferredConfig = Bitmap.Config.RGB_565;
// Decode only the 100 x 100 region in the top-left corner.
Bitmap partOfBitmap = decoder.decodeRegion(new Rect(0, 0, 100, 100), options);
Using BitmapRegionDecoder you can develop a custom View that handles user scroll events, loads only the visible image regions into memory, and discards the regions that are no longer visible, as in the sketch below.
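A minimal sketch of such a view, assuming a BitmapRegionDecoder created elsewhere and ignoring zoom and bitmap reuse for brevity (RegionView and moveBy are hypothetical names):

public class RegionView extends View {
    private final BitmapRegionDecoder decoder;
    private final BitmapFactory.Options options = new BitmapFactory.Options();
    private int offsetX, offsetY;

    public RegionView(Context context, BitmapRegionDecoder decoder) {
        super(context);
        this.decoder = decoder;
        options.inPreferredConfig = Bitmap.Config.RGB_565;
    }

    // Move the visible window, clamped to the image bounds.
    public void moveBy(int dx, int dy) {
        offsetX = Math.max(0, Math.min(decoder.getWidth() - getWidth(), offsetX + dx));
        offsetY = Math.max(0, Math.min(decoder.getHeight() - getHeight(), offsetY + dy));
        invalidate();
    }

    @Override
    protected void onDraw(Canvas canvas) {
        // Decode only the region that is currently on screen. A real
        // implementation would cache this instead of decoding per draw.
        Rect visible = new Rect(offsetX, offsetY, offsetX + getWidth(), offsetY + getHeight());
        canvas.drawBitmap(decoder.decodeRegion(visible, options), 0, 0, null);
    }
}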
I tried to solve it as described above:
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_planet);
int WIDTH = 0;
int HEIGHT = 0;
backgroundJungle = (ImageView)findViewById(R.id.backgroundJungle);
InputStream is = null;
try {
Drawable drawable = getResources().getDrawable(R.drawable.junglemap);
BitmapDrawable bitmapDrawable = (BitmapDrawable) drawable;
Bitmap bitmap = bitmapDrawable.getBitmap();
WIDTH = bitmap.getWidth();
HEIGHT = bitmap.getHeight();
ByteArrayOutputStream stream = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.PNG, 100, stream);
is = new ByteArrayInputStream(stream.toByteArray());
} catch (Exception ex) {
ManagerException(ex);
}
try {
if (is != null) {
BitmapRegionDecoder decoder = BitmapRegionDecoder.newInstance(is, false);
BitmapFactory.Options options = new BitmapFactory.Options();
options.inPreferredConfig = Bitmap.Config.RGB_565;
Bitmap partOfBitmap = decoder.decodeRegion(new Rect(0,0,2500,2500),options);
backgroundJungle.setImageBitmap(partOfBitmap);
Toast.makeText(getApplicationContext(), WIDTH + " " + HEIGHT, Toast.LENGTH_SHORT).show();
}
} catch (IOException ex) {
ManagerException(ex);
}
}
But the decoded bitmap has a size of 9335 x 5250, while my image is 3556 x 2000.
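A likely cause, offered as a guess: resources under res/drawable are treated as mdpi and scaled up by the device's density factor when decoded (3556 x 2.625 ≈ 9335, which matches a 420 dpi device). Moving the image to drawable-nodpi, or decoding with scaling disabled, avoids the blow-up:

// Decode the resource at its raw pixel size, skipping density scaling.
BitmapFactory.Options options = new BitmapFactory.Options();
options.inScaled = false;
Bitmap raw = BitmapFactory.decodeResource(getResources(), R.drawable.junglemap, options);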
I have been stuck on this problem for a couple of days. I want to make an Android app that takes a picture and extracts the HOG features of that image for future processing. The problem is that the code below always returns HOG descriptors with zero values.
@Override
public void onPictureTaken(byte[] data, Camera camera) {
Log.i(TAG, "Saving a bitmap to file");
// The camera preview was automatically stopped. Start it again.
mCamera.startPreview();
mCamera.setPreviewCallback(this);
this.disableView();
Bitmap bitmapPicture = BitmapFactory.decodeByteArray(data, 0, data.length);
myImage = new Mat(bitmapPicture.getWidth(), bitmapPicture.getHeight(), CvType.CV_8UC1);
Utils.bitmapToMat(bitmapPicture, myImage);
Bitmap bm = Bitmap.createBitmap(myImage.cols(), myImage.rows(),Bitmap.Config.ARGB_8888);
Utils.matToBitmap(myImage.clone(), bm);
// find the imageview and draw it!
ImageView iv = (ImageView) getRootView().findViewById(R.id.imageView);
this.setVisibility(SurfaceView.GONE);
iv.setVisibility(ImageView.VISIBLE);
Mat forHOGim = new Mat();
org.opencv.core.Size sz = new org.opencv.core.Size(64,128);
Imgproc.resize( myImage, myImage, sz );
Imgproc.cvtColor(myImage,forHOGim,Imgproc.COLOR_RGB2GRAY);
//forHOGim = myImage.clone();
MatOfFloat descriptors = new MatOfFloat(); //an empty vector of descriptors
org.opencv.core.Size winStride = new org.opencv.core.Size(64/2,128/2); //50% overlap in the sliding window
org.opencv.core.Size padding = new org.opencv.core.Size(0,0); //no padding around the image
MatOfPoint locations = new MatOfPoint(); // an empty vector of locations, so perform a full search
//HOGDescriptor hog = new HOGDescriptor();
HOGDescriptor hog = new HOGDescriptor(sz,new org.opencv.core.Size(16,16),new org.opencv.core.Size(8,8),new org.opencv.core.Size(8,8),9);
Log.i(TAG,"Constructed");
hog.compute(forHOGim , descriptors, new org.opencv.core.Size(16,16), padding, locations);
Log.i(TAG,"Computed");
Log.i(TAG,String.valueOf(hog.getDescriptorSize())+" "+descriptors.size());
Log.i(TAG,String.valueOf(descriptors.get(12,0)[0]));
double dd=0.0;
for (int i=0;i<3780;i++){
if (descriptors.get(i,0)[0]!=dd) Log.i(TAG,"NOT ZERO");
}
Bitmap bm2 = Bitmap.createBitmap(forHOGim.cols(), forHOGim.rows(),Bitmap.Config.ARGB_8888);
Utils.matToBitmap(forHOGim,bm2);
iv.setImageBitmap(bm2);
}
So in the logcat I never get the NOT ZERO message. The problem is that whatever changes I make to this code, I always get zeros in the descriptors MatOfFloat... And the strange part is, if I uncomment the HOGDescriptor hog = new HOGDescriptor(); line and use that one instead of the one I am using now, my application crashes...
The rest of the code runs fine, the picture is always taken and displayed on my image view as I expect.
Any help will be appreciated.
Thanks in advance.
The problem was inside the library. When I executed the same code with OpenCV 2.4.13 for Linux instead of Android, it worked as expected. So I hope they will fix any problems with the HOGDescriptor in OpenCV4Android.
I want to take only a part of the screen data from a preview video callback to reduce the processing time. The problem is that I only know how to grab the whole screen with onPreviewFrame:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
myData = data;
// +get camera resolution x, y
}
And then with this data I get the image:
private Bitmap getBitmapFromYUV(byte[] data, int width, int height)
{
ByteArrayOutputStream out = new ByteArrayOutputStream();
YuvImage yuvImage = new YuvImage(data, ImageFormat.NV21, width, height, null);
yuvImage.compressToJpeg(new Rect(0, 0, width, height), 100, out);
byte[] imageBytes = out.toByteArray();
Bitmap image = BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length);
return image;
}
And then I crop the image to the part I want:
cutImage = Bitmap.createBitmap(image, xOffset, yOffset, customWidth, customHeight);
The problem is that I need to take lots of images and apply some image processing to them, which is why I want to reduce the time it takes to get each image. Instead of capturing the whole screen and then cropping it, I want to get the cropped image immediately. Is there a way to get only part of the screen data?
OK, I finally found something. I still record all the camera data, but when calling compressToJpeg I crop the picture with a custom Rect. Maybe there is something better to do before this, but it is still a good improvement. Here is my change:
yuvImage.compressToJpeg(new Rect(offsetX, offsetY, sizeCaptureX + offsetX, sizeCaptureY + offsetY ), 100, out);
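To avoid even handing the full frame to the JPEG encoder, the NV21 buffer can also be cropped directly first. A hedged sketch (cropNV21 is a hypothetical helper; all offsets and sizes must be even because of the 2x2 chroma subsampling):

private static byte[] cropNV21(byte[] nv21, int srcW, int srcH,
                               int left, int top, int w, int h) {
    byte[] out = new byte[w * h * 3 / 2];
    // Copy the luma (Y) rows of the crop window.
    for (int row = 0; row < h; row++) {
        System.arraycopy(nv21, (top + row) * srcW + left, out, row * w, w);
    }
    // Copy the interleaved VU rows; one VU row covers two luma rows.
    int srcUv = srcW * srcH;
    int dstUv = w * h;
    for (int row = 0; row < h / 2; row++) {
        System.arraycopy(nv21, srcUv + (top / 2 + row) * srcW + left,
                out, dstUv + row * w, w);
    }
    return out;
}

The cropped bytes can then be wrapped in a new YuvImage of the smaller size before calling compressToJpeg.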
I am making an OCR app for Android that will take a screenshot of some text, recognise it and search a key word on Google. If you haven't already realized, I'm trying to make a "Google Now on Tap" clone.
To make the OCR work better, I first rotate the image, then filter it: first removing the status bar and the navigation bar, then converting to grayscale, then sharpening.
But the image quality after filtering is extremely pixelated, and this greatly affects the OCR accuracy.
Here are the images, before and after (just of an IFTTT email I got)
As you can see, the before image is much higher quality than the filtered and rotated one.
Here is my code for rotating, filtering and saving the image:
First, take the screenshot, then save it:
public void getScreenshot()
{
try
{
Process sh = Runtime.getRuntime().exec("su", null, null);
OutputStream os = sh.getOutputStream();
os.write(("/system/bin/screencap -p " + _path).getBytes("ASCII"));
os.flush();
os.close();
sh.waitFor();
onPhotoTaken();
Toast.makeText(this, "Screenshot taken", Toast.LENGTH_SHORT).show();
}
catch (IOException e)
{
System.out.println("IOException");
}
catch (InterruptedException e)
{
System.out.println("InterruptedException");
}
}
Then, rotate the image:
protected void onPhotoTaken() {
_taken = true;
BitmapFactory.Options options = new BitmapFactory.Options();
options.inSampleSize = 4;
Bitmap bitmap = BitmapFactory.decodeFile(_path, options);
try {
ExifInterface exif = new ExifInterface(_path);
int exifOrientation = exif.getAttributeInt(
ExifInterface.TAG_ORIENTATION,
ExifInterface.ORIENTATION_NORMAL);
Log.v(TAG, "Orient: " + exifOrientation);
int rotate = 0;
switch (exifOrientation) {
case ExifInterface.ORIENTATION_ROTATE_90:
rotate = 90;
break;
case ExifInterface.ORIENTATION_ROTATE_180:
rotate = 180;
break;
case ExifInterface.ORIENTATION_ROTATE_270:
rotate = 270;
break;
}
Log.v(TAG, "Rotation: " + rotate);
if (rotate != 0) {
// Getting width & height of the given image.
int w = bitmap.getWidth();
int h = bitmap.getHeight();
// Setting pre rotate
Matrix mtx = new Matrix();
mtx.preRotate(rotate);
// Rotating Bitmap
bitmap = Bitmap.createBitmap(bitmap, 0, 0, w, h, mtx, false);
}
// Convert to ARGB_8888, required by tess
bitmap = bitmap.copy(Bitmap.Config.ARGB_8888, true);
} catch (IOException e) {
Log.e(TAG, "Couldn't correct orientation: " + e.toString());
}
// _image.setImageBitmap( bitmap );
setImageFilters(bitmap);
}
Then, filter the image:
public void setImageFilters(Bitmap bmpOriginal)
{
//Start by cropping image
Bitmap croppedBitmap = ThumbnailUtils.extractThumbnail(bmpOriginal, 1080, 1420);
//Then convert to grayscale
int width, height;
height = 1420;
width = 1080;
Bitmap bmpGrayscale = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
Canvas c = new Canvas(bmpGrayscale);
Paint paint = new Paint();
ColorMatrix cm = new ColorMatrix();
cm.setSaturation(0);
ColorMatrixColorFilter f = new ColorMatrixColorFilter(cm);
paint.setColorFilter(f);
c.drawBitmap(croppedBitmap, 0, 0, paint);
//Finally, sharpen the image
double weight = 11;
double[][] sharpConfig = new double[][]
{
{ 0 , -2 , 0 },
{ -2, weight, -2 },
{ 0 , -2 , 0 }
};
ConvolutionMatrix convMatrix = new ConvolutionMatrix(3);
convMatrix.applyConfig(sharpConfig);
convMatrix.Factor = weight - 8;
Bitmap filteredBitmap = ConvolutionMatrix.computeConvolution3x3(bmpGrayscale, convMatrix);
//Start Optical Character Recognition
startOCR(filteredBitmap);
//Save filtered image
saveFiltered(filteredBitmap);
}
Then, save the filtered and rotated image:
public void saveFiltered(Bitmap filteredBmp) {
try {
ByteArrayOutputStream bytes = new ByteArrayOutputStream();
filteredBmp.compress(Bitmap.CompressFormat.JPEG, 20, bytes);
//You can create a new file name "test.jpg" in sdcard folder.
File f = new File("/sdcard/SimpleAndroidOCR/ocrgray.jpg");
f.createNewFile();
//Write the bytes in file
FileOutputStream fo = new FileOutputStream(f);
fo.write(bytes.toByteArray());
//Remember close the FileOutput
fo.close();
} catch (Exception e) {
e.printStackTrace();
}
}
Thanks heaps to anyone taking the time to help.
It was actually in my onPhotoTaken method. After taking and saving the screenshot in getScreenshot(), I read the file back from the location it was saved to and then filter it. I changed this line in the onPhotoTaken method:
options.inSampleSize = 4 to options.inSampleSize = 1
It does look like the JPEG compression is messing up the image. Try using a format better suited for images with sharp edges, such as text. I would recommend PNG or even GIF. You could also store an uncompressed BMP.
JPEG compression works by exploiting the fact that in most photographs (nature, people, objects), sharp edges are not that visible to the human eye. This makes it really bad for storing sharp-edged content such as text.
Also, your image filter is effectively removing the anti-aliasing of the image, which further decreases the perceived image quality. That might be what you want, however, since it might make OCR easier.
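For the saveFiltered() method above, switching formats would just mean swapping the compress call (a minimal sketch; the quality argument is ignored for lossless PNG):

// PNG is lossless, so text edges survive intact.
filteredBmp.compress(Bitmap.CompressFormat.PNG, 100, bytes);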
I also missed the sampling size, because the images you uploaded appear at the same size here on the site. From the Android documentation:
If set to a value > 1, requests the decoder to subsample the original image, returning a smaller image to save memory. The sample size is the number of pixels in either dimension that correspond to a single pixel in the decoded bitmap. For example, inSampleSize == 4 returns an image that is 1/4 the width/height of the original, and 1/16 the number of pixels. Any value <= 1 is treated the same as 1. Note: the decoder uses a final value based on powers of 2, any other value will be rounded down to the nearest power of 2.
Setting options.inSampleSize = 4; to 1 instead will increase the quality.
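In terms of the onPhotoTaken() code above, that is just (a sketch of the changed decode):

BitmapFactory.Options options = new BitmapFactory.Options();
options.inSampleSize = 1; // decode at full resolution; 4 would keep only 1/16 of the pixels
Bitmap bitmap = BitmapFactory.decodeFile(_path, options);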