Bitmap "images" pass to another activity (Out of memory) - java

Please read the full question before marking it as a duplicate or down-voting it.

I am developing an app that slices a picture into chunks, runs Google Vision on each chunk to recognize text, and then runs OCR to detect whether the circle bubble in that chunk is filled or not. But when I slice the Bitmap into an array and pass it to another activity for processing, the app crashes from excessive memory use. I know I can compress the chunks, and I already tried that (though I did not want to compress, since Google Vision may then fail to extract the text accurately), but it did not work because there are 46 slices. How can I do this without uploading the image to the cloud and fetching it again for processing, which might take too long? Any alternative solution is very welcome as well; I have been stuck on this for quite a while.
import android.content.Intent;.....

public class ProcessesdResult extends AppCompatActivity {

    TextView tvProcessedText;
    Button btnImageSlice;
    Bitmap image;
    int chunkNumbers = 46;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_processesd_result);

        Intent intenttakeattendance = getIntent();
        String fname = intenttakeattendance.getStringExtra("fname");
        String root = Environment.getExternalStorageDirectory().toString();
        File myDir = new File(root);
        String photoPath = myDir + "/sams_images/" + fname;

        BitmapFactory.Options options = new BitmapFactory.Options();
        options.inPreferredConfig = Bitmap.Config.ARGB_8888;
        image = BitmapFactory.decodeFile(photoPath, options);

        btnImageSlice = findViewById(R.id.btnimageslice);
        btnImageSlice.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                splitImage(image, chunkNumbers);
            }
        });
    }

    private void splitImage(Bitmap image, int chunkNumbers) {
        // Number of rows and columns of the grid to be displayed
        int rows = 23;
        int cols = 2;
        // Height and width of the small image chunks
        int chunkHeight, chunkWidth;
        // All the small image chunks, as bitmaps, are stored in this list
        ArrayList<Bitmap> chunkedImages = new ArrayList<Bitmap>(chunkNumbers);
        // Getting the scaled bitmap of the source image
        Bitmap scaledBitmap = Bitmap.createScaledBitmap(image, image.getWidth(), image.getHeight(), true);
        chunkHeight = image.getHeight() / rows;
        chunkWidth = image.getWidth() / cols;
        // xCoord and yCoord are the pixel positions of the image chunks
        int yCoord = 0;
        for (int x = 0; x < rows; x++) {
            int xCoord = 0;
            for (int y = 0; y < cols; y++) {
                chunkedImages.add(Bitmap.createBitmap(scaledBitmap, xCoord, yCoord, chunkWidth, chunkHeight));
                xCoord += chunkWidth;
            }
            yCoord += chunkHeight;
        }
        // Start a new activity to show these chunks in a grid
        Intent intent = new Intent(ProcessesdResult.this, ChunkedImageActivity.class);
        intent.putParcelableArrayListExtra("image chunks", chunkedImages);
        startActivity(intent);
    }
}
This is the type of image I want to slice into pieces.

You don't want to pass objects between activities, especially not huge objects like bitmaps. I would suggest saving your bitmaps in the device's file system and then passing a list of URIs. Saving the bitmaps like this, and recycling each bitmap after you are done with it, should also reduce RAM usage during the loop where you slice up the image.
For saving bitmaps as files, I would refer to this question: Saving and Reading Bitmaps/Images from Internal memory in Android
So basically your loop should look like this:
for (int x = 0; x < rows; x++) {
    int xCoord = 0;
    for (int y = 0; y < cols; y++) {
        Bitmap image = Bitmap.createBitmap(scaledBitmap, xCoord, yCoord, chunkWidth, chunkHeight);
        Uri uri = saveBitmapAsFile(image);
        image.recycle();
        xCoord += chunkWidth;
    }
    yCoord += chunkHeight;
}
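The answer links out for the saveBitmapAsFile() helper rather than showing it; here is a minimal sketch of what it could look like, assuming PNG output to the app's internal files directory (the file-name scheme is only an illustration):

private Uri saveBitmapAsFile(Bitmap bitmap) {
    // Write the chunk to app-private storage; PNG is lossless, so the
    // slices reach the Google Vision / OCR step unchanged.
    File file = new File(getFilesDir(), "chunk_" + System.currentTimeMillis() + ".png");
    try (FileOutputStream out = new FileOutputStream(file)) {
        bitmap.compress(Bitmap.CompressFormat.PNG, 100, out);
    } catch (IOException e) {
        e.printStackTrace();
        return null;
    }
    return Uri.fromFile(file);
}

Each returned Uri (or its toString() value) can then be collected into an ArrayList<String> and handed to ChunkedImageActivity with intent.putStringArrayListExtra().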

Use android:largeHeap="true" in the manifest. If you still get the same error, then try this:
instead of sending the bitmaps to the other activity with intent.putParcelableArrayListExtra("image chunks", chunkedImages);,
save each image to local storage and use its path wherever you need it.
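A rough sketch of that approach, assuming app-private storage and placeholder file names; only the paths travel through the Intent:

// Sketch: write each chunk to disk and pass only the paths to the next activity.
ArrayList<String> chunkPaths = new ArrayList<>();
for (int i = 0; i < chunkedImages.size(); i++) {
    File file = new File(getFilesDir(), "chunk_" + i + ".png");
    try (FileOutputStream out = new FileOutputStream(file)) {
        chunkedImages.get(i).compress(Bitmap.CompressFormat.PNG, 100, out);
    } catch (IOException e) {
        e.printStackTrace();
    }
    chunkedImages.get(i).recycle(); // free the bitmap as soon as it is on disk
    chunkPaths.add(file.getAbsolutePath());
}
Intent intent = new Intent(ProcessesdResult.this, ChunkedImageActivity.class);
intent.putStringArrayListExtra("image chunk paths", chunkPaths);
startActivity(intent);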

I recommend creating a separate class that stores the data (the Bitmaps) in a static field.
Use this class to save the bitmaps, then start the other activity and read them from it.
This link is helpful.
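A minimal version of that idea (the class and method names here are made up for illustration); note that whatever is kept in a static field must be cleared afterwards, or it becomes its own memory leak:

// Minimal static holder; names are illustrative only.
public class BitmapHolder {
    private static ArrayList<Bitmap> chunks;

    public static void setChunks(ArrayList<Bitmap> value) {
        chunks = value;
    }

    public static ArrayList<Bitmap> getChunks() {
        return chunks;
    }

    public static void clear() {
        chunks = null; // release the reference once the second activity is done
    }
}

The first activity would call BitmapHolder.setChunks(chunkedImages) just before startActivity(), and the second activity would call BitmapHolder.getChunks() in its onCreate(). This avoids the Binder transaction size limit, but the bitmaps still occupy heap memory, so the file-based approaches above are usually the safer choice.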

Related

Get single pixel of TIFF image on Android

I ideally need offline access to information about the terrain altitude of Mexico in an Android app I'm developing. I downloaded a .bil file and converted it to a .tif file with QGIS, and the resulting file is almost 900 MB.
I don't know if it would work, and I'm still learning to develop Android apps, but I was planning to store it on the SD card, and I was wondering whether it is possible to access a single pixel without reading the whole image, since loading the whole 900 MB file is clearly not an option.
Can anyone tell me if it is possible, and if so, how to do it? Or is there any other way to get the information I need, maybe by converting the .bil file to another format?
Thanks for answering.
TIFF stores the image as rows of bytes starting at defined offsets, so you can easily retrieve a single pixel and there is certainly no need to load the full image.
If you open any .tif file in a hex editor you will see that the first 4 bytes mark it as a TIFF by a code, and the next 4 bytes give the offset of the metadata about the image.
Use a RandomAccessFile to open the .tif file, then seek to that offset and you land in the metadata space. From there you can pick the offset of the required pixel, then go and read it.
TIFF exists for exactly this purpose: accessing individual pixels. If we only ever needed to load the full image, JPEG or BMP would be enough.
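As a minimal illustration of that header layout (the path is a placeholder, and a little-endian "II" file is assumed; a real reader must check the byte-order mark in the first two bytes):

// Sketch: read the 8-byte TIFF header with RandomAccessFile.
RandomAccessFile raf = new RandomAccessFile("/path/to/image.tif", "r");
byte[] header = new byte[8];
raf.readFully(header);
// header[0..1] = byte-order mark, header[2..3] = magic number 42,
// header[4..7] = offset of the first IFD (the image metadata).
long ifdOffset = (header[4] & 0xFFL) | ((header[5] & 0xFFL) << 8)
        | ((header[6] & 0xFFL) << 16) | ((header[7] & 0xFFL) << 24);
raf.seek(ifdOffset); // from here the strip offsets lead to individual rows and pixels
raf.close();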
Check this link for the full code: full example to decode tiff image
package com.tif;
import android.os.*;import android.content.*;import android.app.*;import android.widget.*;import android.view.*;
import android.view.View.*;import android.graphics.*;import java.io.*;import java.util.*;import android.util.*;
import java.lang.*;import java.nio.*;import java.nio.channels.*;
public class Main extends Activity
{
private static final int CLEAR_CODE = 256;
private static final int EOI_CODE = 257;
long bytesCount=0L;
ScrollView sv;TextView tv;ImageView iv;
List intList;
long[] stripOffs,stripBytes;
byte[] bytes,ubytes,bmpBytes;
ByteBuffer bBuff;
BitBuffer bitBuff;
int entries,type,tag;
long count,value,ifd,stripAt,stripCount,stripBytesAt,rows,cols;
String txt="Null",path="",decompressed="";
String[] info= {"width","length","bitsPerSample","Compression","PhotometricInterpretation","FillOrder","StripOffsets","SamplesPerPixel","RowsPerStrip"
,"StripBytes","XResolution","YResolution","PlanarConfig","ResolutionUnit","extra","NextIFD"};
Bitmap bmp=null;
class DotsView extends View
{
int i = 0;Bitmap bmp;Canvas cnv;Rect bounds;Paint p;int width,height;
int alfa,red,green,blue;
public DotsView(Context context ,int width ,int height)
{
super(context);
this.width = width;
this.height = height;
bmp = Bitmap.createBitmap(width,height,Bitmap.Config.ARGB_8888);
cnv = new Canvas(bmp);
bounds = new Rect(0 , 0, width,height);
p = new Paint();
}
@Override
protected void onDraw(Canvas c)
{
for(int i=0;i<width;i++)
for(int j=0;j<height;j++)
{
for(int pix=0;pix<3;pix++)
{
if(pix==0)blue=bmpBytes[i+j+pix];
if(pix==1)green=bmpBytes[i+j+pix];
if(pix==2)red=bmpBytes[i+j+pix];
}
p.setColor( Color.argb(255, red,green,blue) );
cnv.drawPoint(i,j, p);
}
c.drawBitmap(bmp, null, bounds , null);
invalidate();
}
}
public int myShort(short sh)
{ int i;
ByteBuffer shortBuff=ByteBuffer.allocate(4);
shortBuff.order(ByteOrder.BIG_ENDIAN);shortBuff.putShort(sh);shortBuff.rewind();
shortBuff.order(ByteOrder.LITTLE_ENDIAN);sh=shortBuff.getShort();
if(sh<0)i=(int)(sh+32768); else i=(int)sh;
return i;
}
public long myInt(int i)
{ long l=0L;
ByteBuffer intBuff=ByteBuffer.allocate(4);
intBuff.order(ByteOrder.BIG_ENDIAN);intBuff.putInt(i);intBuff.rewind();
intBuff.order(ByteOrder.LITTLE_ENDIAN); i=intBuff.getInt();
if(i<0)l=(long)(i+2147483648L); else l=(long)i;
return l;
}
public String tagInfo(int tag)
{ int i=0;
switch(tag)
{case 256: i=0;break;case 257: i=1;break;case 258: i=2;break;case 259: i=3;break;case 262: i=4;break;case 266: i=5;break;
case 273: i=6;break;case 277: i=7;break;case 278: i=8;break;case 279: i=9;break;case 282: i=10;break;case 283: i=11;break;
case 284: i=12;break;case 296: i=13;break;case 1496: i=14;break;case 0: i=15;break;
}
return info[i];
}
public void extractTif()
{
String taginfo="";String strVal="";
FileInputStream fis;BufferedInputStream bis;DataInputStream dis;
path=Environment.getExternalStorageDirectory().getPath();
path=path+"/DCIM"+"/kpd.tif";
try {
fis=new FileInputStream(path);bis=new BufferedInputStream(fis);dis=new DataInputStream(bis);
dis.skip(4);ifd=myInt(dis.readInt());
txt="TIFF-IFD: "; txt=txt+ifd;
dis.skip(ifd-8); entries=myShort(dis.readShort());
txt=txt+"\nNo.OfEntries="+entries;
for(int i=0;i<=entries;i++)
{ tag=myShort( dis.readShort() );taginfo=tagInfo(tag);
type=myShort( dis.readShort() );count=myInt( dis.readInt() );value=myInt( dis.readInt() );
if(type==3)strVal="Value="; else strVal="Offset=";
if( strVal.equals("Offset=") )
{
if( taginfo.equals("StripOffsets") ){stripAt=value;stripCount=count;}
if( taginfo.equals("StripBytes") ){stripBytesAt=value;}
}
if( taginfo.equals("width") ){cols=value;}
if( taginfo.equals("length") ){rows=value;}
txt=txt+"\ntag="+tag+" "+tagInfo(tag)+",type="+type+",count="+count+strVal+value;
}
dis.close();bis.close();fis.close();
}catch(Exception e) {txt=txt+"\nerror="+e.toString();}
txt=txt+"\nNo.OfStrips="+stripCount+",array of strip locations at: "+stripAt+" and array of bytesPerStrip at "+stripBytesAt ;
extractBMP();
}
public void extractBMP()
{try{ File f=new File(path);RandomAccessFile raf=new RandomAccessFile(f,"r");
raf.seek(stripAt);stripOffs=new long[(int)stripCount];
txt=txt+"\nArray Of Image Offsets=";
for(int i=0;i<stripCount;i++){stripOffs[i]=myInt( raf.readInt() ); txt=txt+","+stripOffs[i]; }
raf.seek(stripBytesAt); stripBytes=new long[(int)stripCount];
txt=txt+"\nArray Of Strip Bytes =";
for(int i=0;i<stripCount;i++){stripBytes[i]=myInt(raf.readInt()); txt=txt+","+stripBytes[i];bytesCount+=stripBytes[i];}
txt=txt+stripBytes;
bBuff =ByteBuffer.allocate((int)(rows*cols*3));
for(int i=0;i<stripCount;i++)
{
bytes =new byte[(int)stripBytes[i]];
raf.seek(stripOffs[i]);
raf.read(bytes);
bBuff.put(lzwUncompress(bytes));
bytes=null;
}
txt=txt+"\nBuffered Image Bytes Size="+bBuff.position();
bBuff.rewind();
bmpBytes=new byte[bBuff.remaining()];
bmpBytes=bBuff.array();
txt=txt+"\nCount of bmpBytes="+bmpBytes.length;
bmp=BitmapFactory.decodeByteArray(bmpBytes,0,bmpBytes.length);
SystemClock.sleep(5000);
txt=txt+"Bitmap Object, bmp="+bmp;
if(bmp!=null){iv.setImageBitmap(bmp);sv.addView(iv);}
raf.close();
}catch(Exception e){txt=txt+"\nerror="+e.toString();}
}
public void lzw()
{
//String[] table=new String[4096];
byte b;char ch;String s;String pre="";short sh;
//List strTable=Arrays.asList(table);
//for(int i=0;i<255;i++)table[i]=Character.toString((char)i);
for(int i=0;i<100;i++)
{
b=bytes[i];
if(b<0)sh=(short)(128+b);
else sh=(short)b;
//ch=(char)b;
s=String.valueOf(sh);
//s=s+pre;
//if(strTable.contains(s)){pre=s;}
//else{ }
txt=txt+"Byte No."+i+"="+s+" ";
}
}
public void onCreate(Bundle bnd)
{
super.onCreate(bnd);
extractTif();
//sv=new ScrollView(this);
//tv=new TextView(this);
//iv=new ImageView(this);
//tv.setTextSize(7);
//sv.addView(tv);
//sv.addView(iv);
//tv.setText(txt);
//setContentView(sv);
Point res=new Point(); getWindowManager().getDefaultDisplay().getSize(res);
DotsView myView = new DotsView(this,res.x,res.y);
setContentView(myView);
}

Android: Really bad image quality when saving bitmap to sdcard

I am making an OCR app for Android, that will take a screenshot of some text, recognise it and search a key word on Google. If you haven't already realized, I'm trying to make a "Google Now on Tap" clone.
To make the OCR work better, I am first rotating the image, then filtering the image. First by getting rid of the status bar and the navigation bar, then converting it to grayscale, then sharpening.
But the image quality after filtering is extremely pixelated, and this greatly affects OCR accuracy.
Here are the images, before and after (just of an IFTTT email I got)
As you can see, the before image is much higher quality than the filtered and rotated one.
Here is my code for rotating, filtering and saving the image:
Firstly taking screenshot, then saving the screenshot.
public void getScreenshot()
{
try
{
Process sh = Runtime.getRuntime().exec("su", null, null);
OutputStream os = sh.getOutputStream();
os.write(("/system/bin/screencap -p " + _path).getBytes("ASCII"));
os.flush();
os.close();
sh.waitFor();
onPhotoTaken();
Toast.makeText(this, "Screenshot taken", Toast.LENGTH_SHORT).show();
}
catch (IOException e)
{
System.out.println("IOException");
}
catch (InterruptedException e)
{
System.out.println("InterruptedException");
}
}
Then, rotate the image:
protected void onPhotoTaken() {
_taken = true;
BitmapFactory.Options options = new BitmapFactory.Options();
options.inSampleSize = 4;
Bitmap bitmap = BitmapFactory.decodeFile(_path, options);
try {
ExifInterface exif = new ExifInterface(_path);
int exifOrientation = exif.getAttributeInt(
ExifInterface.TAG_ORIENTATION,
ExifInterface.ORIENTATION_NORMAL);
Log.v(TAG, "Orient: " + exifOrientation);
int rotate = 0;
switch (exifOrientation) {
case ExifInterface.ORIENTATION_ROTATE_90:
rotate = 90;
break;
case ExifInterface.ORIENTATION_ROTATE_180:
rotate = 180;
break;
case ExifInterface.ORIENTATION_ROTATE_270:
rotate = 270;
break;
}
Log.v(TAG, "Rotation: " + rotate);
if (rotate != 0) {
// Getting width & height of the given image.
int w = bitmap.getWidth();
int h = bitmap.getHeight();
// Setting pre rotate
Matrix mtx = new Matrix();
mtx.preRotate(rotate);
// Rotating Bitmap
bitmap = Bitmap.createBitmap(bitmap, 0, 0, w, h, mtx, false);
}
// Convert to ARGB_8888, required by tess
bitmap = bitmap.copy(Bitmap.Config.ARGB_8888, true);
} catch (IOException e) {
Log.e(TAG, "Couldn't correct orientation: " + e.toString());
}
// _image.setImageBitmap( bitmap );
setImageFilters(bitmap);
}
Then, filter the image:
public void setImageFilters(Bitmap bmpOriginal)
{
//Start by cropping image
Bitmap croppedBitmap = ThumbnailUtils.extractThumbnail(bmpOriginal, 1080, 1420);
//Then convert to grayscale
int width, height;
height = 1420;
width = 1080;
Bitmap bmpGrayscale = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
Canvas c = new Canvas(bmpGrayscale);
Paint paint = new Paint();
ColorMatrix cm = new ColorMatrix();
cm.setSaturation(0);
ColorMatrixColorFilter f = new ColorMatrixColorFilter(cm);
paint.setColorFilter(f);
c.drawBitmap(croppedBitmap, 0, 0, paint);
//Finally, sharpen the image
double weight = 11;
double[][] sharpConfig = new double[][]
{
{ 0 , -2 , 0 },
{ -2, weight, -2 },
{ 0 , -2 , 0 }
};
ConvolutionMatrix convMatrix = new ConvolutionMatrix(3);
convMatrix.applyConfig(sharpConfig);
convMatrix.Factor = weight - 8;
Bitmap filteredBitmap = ConvolutionMatrix.computeConvolution3x3(bmpGrayscale, convMatrix);
//Start Optical Character Recognition
startOCR(filteredBitmap);
//Save filtered image
saveFiltered(filteredBitmap);
}
Then, saving the filtered and rotated image:
public void saveFiltered(Bitmap filteredBmp) {
try {
ByteArrayOutputStream bytes = new ByteArrayOutputStream();
filteredBmp.compress(Bitmap.CompressFormat.JPEG, 20, bytes);
//You can create a new file name "test.jpg" in sdcard folder.
File f = new File("/sdcard/SimpleAndroidOCR/ocrgray.jpg");
f.createNewFile();
//Write the bytes in file
FileOutputStream fo = new FileOutputStream(f);
fo.write(bytes.toByteArray());
//Remember close the FileOutput
fo.close();
} catch (Exception e) {
e.printStackTrace();
}
}
Thanks heaps for anyone taking the time to help.
It was actually in my onPhotoTaken method. After taking and saving the screenshot in getScreenshot(), I read the file from the location it was saved to and then filter it. I changed this line in the onPhotoTaken method:
options.inSampleSize = 4; to options.inSampleSize = 1;
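In other words, the decode step becomes (only the sample size changes):

// Decode at full resolution: inSampleSize = 4 was shrinking the screenshot
// to a quarter of its width and height before filtering and OCR.
BitmapFactory.Options options = new BitmapFactory.Options();
options.inSampleSize = 1;
Bitmap bitmap = BitmapFactory.decodeFile(_path, options);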
It does look like the JPEG compression is messing the image up. Try using a format better suited for images with sharp edges, such as text; I would recommend PNG or even GIF. You could also store the uncompressed BMP.
JPEG compression works by exploiting the fact that in most pictures (nature, people, objects), sharp edges are not that visible to the human eye. This makes it really bad for storing sharp-edged content, such as text.
Also, your image filter is effectively removing the anti-aliasing of the image, which further decreases the perceived image quality. That might be what you want, though, since it might make OCR easier.
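For example, saveFiltered() could write a lossless PNG instead of a 20%-quality JPEG; a sketch based on the question's own method (the quality argument is ignored for PNG):

// Sketch: save the filtered bitmap as lossless PNG instead of low-quality JPEG.
public void saveFiltered(Bitmap filteredBmp) {
    File f = new File("/sdcard/SimpleAndroidOCR/ocrgray.png");
    try (FileOutputStream fo = new FileOutputStream(f)) {
        // The quality parameter is ignored for PNG, which is lossless.
        filteredBmp.compress(Bitmap.CompressFormat.PNG, 100, fo);
    } catch (Exception e) {
        e.printStackTrace();
    }
}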
I also missed the sampling size due to the images you uploaded being the same size here on the site. From the Android documentation:
If set to a value > 1, requests the decoder to subsample the original image, returning a smaller image to save memory. The sample size is the number of pixels in either dimension that correspond to a single pixel in the decoded bitmap. For example, inSampleSize == 4 returns an image that is 1/4 the width/height of the original, and 1/16 the number of pixels. Any value <= 1 is treated the same as 1. Note: the decoder uses a final value based on powers of 2, any other value will be rounded down to the nearest power of 2.
Setting options.inSampleSize = 4; to 1 instead will increase the quality.

Android: Display image from file in highest resolution

I have quite the annoying problem. I'm building an app where one can share photos. On the SurfaceView where you take the actual photo, the resolution is great. However, when I retrieve that image and display it in a ListView using Picasso, the resolution goes to crap. The pixelation is real. Is there anything that I'm doing horrendously wrong to cause this? The first code snippet below is where I actually save the photo, and the one below that is my getItemView() method in my adapter for the listview. Thanks in advance.
Note that the "photo" variable you see in my code is a Parse subclass I've created to make it easier working with data associated with each photo. I think you can safely ignore it.
EDIT:
SurfaceView of Camera:
Note that I attempt to set the camera parameters to the highest quality allowed. Unfortunately, when I log size.width and size.height, I only get around 176x144. Is there a way to get a higher resolution from the supported camera sizes themselves?
camera.setDisplayOrientation(90);
Parameters parameters = camera.getParameters();
parameters.set("jpeg-quality", 70);
parameters.setPictureFormat(ImageFormat.JPEG);
List<Camera.Size> sizes = parameters.getSupportedPictureSizes();
Size size = sizes.get(Integer.valueOf((sizes.size()-1)));
parameters.setPictureSize(size.width, size.height);
camera.setParameters(parameters);
camera.setDisplayOrientation(90);
List<Size> sizes2 = parameters.getSupportedPreviewSizes();
Size size2 = sizes.get(0);
parameters.setPreviewSize(size2.width, size2.height);
camera.setPreviewDisplay(holder);
camera.startPreview();
Saving the photo:
// Freeze camera
camera.stopPreview();
// Resize photo
Bitmap mealImage = BitmapFactory.decodeByteArray(data, 0, data.length);
Bitmap mealImageScaled = Bitmap.createScaledBitmap(mealImage, 640, 640, false);
// Override Android default landscape orientation and save portrait
Matrix matrix = new Matrix();
matrix.postRotate(90);
Bitmap rotatedScaledMealImage = Bitmap.createBitmap(mealImageScaled, 0,
0, mealImageScaled.getWidth(), mealImageScaled.getHeight(),
matrix, true);
ByteArrayOutputStream bos = new ByteArrayOutputStream();
rotatedScaledMealImage.compress(Bitmap.CompressFormat.JPEG, 100, bos);
byte[] scaledData = bos.toByteArray();
// Save the scaled image to Parse with the date and time as its file name.
DateTime currentTime = new DateTime();
DateTimeFormatter fmt = DateTimeFormat.forPattern("HH MM SS");
photoFile = new ParseFile(currentTime.toString(fmt), scaledData);
photo.setPhotoFile(photoFile);
Displaying it:
final ParseImageView photoView = holder.photoView;
ParseFile photoFile = photo.getParseFile("photo");
Picasso.with(getContext())
        .load(photoFile.getUrl())
        .into(photoView, new Callback() {
            @Override
            public void onError() {
            }

            @Override
            public void onSuccess() {
            }
        });
The problem is not with Picasso.
It is because of this line of code:
parameters.set("jpeg-quality", 70);
and this:
List<Size> sizes2 = parameters.getSupportedPreviewSizes();
Size size2 = sizes.get(0);
When you set up the camera you already turned the quality down to 70% (based on the Android documentation, the range of jpeg-quality is 0-100).
You also need to check whether the camera size is actually correct, because that code just assumes it is.
You can try this method to get the best preview size for your preferred width and height:
private Camera.Size getBestPreviewSize(int width, int height, Camera.Parameters parameters) {
    List<Camera.Size> sizeList = parameters.getSupportedPreviewSizes();
    Camera.Size bestSize = sizeList.get(0);
    for (int i = 1; i < sizeList.size(); i++) {
        if ((sizeList.get(i).width * sizeList.get(i).height) >
                (bestSize.width * bestSize.height)) {
            bestSize = sizeList.get(i);
        }
    }
    return bestSize;
}
I hope this answer helps; if you have another question about it, you can ask me in the comments :)
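Applied to the setup code from the question, the helper would be used roughly like this (surfaceWidth and surfaceHeight stand in for the SurfaceView's measured dimensions, and a picture-size counterpart could be written the same way against getSupportedPictureSizes()):

// Sketch: replace the hard-coded sizes.get(0) with the best supported preview size.
Camera.Parameters parameters = camera.getParameters();
Camera.Size best = getBestPreviewSize(surfaceWidth, surfaceHeight, parameters);
parameters.setPreviewSize(best.width, best.height);
camera.setParameters(parameters);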

Change getBitmapFromURL

I have images in my ListView. They are loaded as follows:
String iMages[] = {
"http://www.thebiblescholar.com/android_awesome.jpg",
"http://blogs-images.forbes.com/rogerkay/files/2011/07/Android1.jpg",
"http://cdn.slashgear.com/wp-content/uploads/2012/10/android-market-leader-smartphone.jpg",
"http://www.planmyworkshop.com/images/android.jpeg",
"http://www.androidguys.com/wp-content/uploads/2012/07/01-android2.jpg"
};
ArrayList<Bitmap> bitmap_array = new ArrayList<Bitmap>();
for (int i = 0; i < iMages.length; i++) {
Log.d("i-->" + i, "Url-->" + iMages[i]);
Bitmap bit = getBitmapFromURL(iMages[i]);
bitmap_array.add(bit);
}
How can I load them from res/drawable instead? I have tried different ways, but none of them worked.
Try something like this to decode a bitmap from your drawable folder:
Bitmap bitmap= BitmapFactory.decodeResource(context.getResources(),
R.drawable.ic_launcher);
I assume the images in your ListView are of type ImageView or subclasses (ImageButton, ZoomButton etc.).
If that is the case, just set the res image as background:
myImageView.setBackgroundResource(R.drawable.my_image);
Remember to do it only from UI thread.
You can also do it like this:
String imageFileName = "launcher"; // this is image file name
String PACKAGE_NAME = getApplicationContext().getPackageName();
int imgId = getResources().getIdentifier(PACKAGE_NAME+":drawable/"+imageFileName , null, null);
image_view.setImageBitmap(BitmapFactory.decodeResource(getResources(),imgId));
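Putting the answers together with the original loop, the list could be built from res/drawable along these lines (the resource names are placeholders for whatever images are actually in the project):

// Sketch: build the bitmap list from res/drawable instead of URLs.
int[] drawableIds = {
    R.drawable.android_awesome,   // placeholder resource names
    R.drawable.android1,
    R.drawable.android_market
};
ArrayList<Bitmap> bitmap_array = new ArrayList<Bitmap>();
for (int i = 0; i < drawableIds.length; i++) {
    Bitmap bit = BitmapFactory.decodeResource(getResources(), drawableIds[i]);
    bitmap_array.add(bit);
}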

How to get continuous frames from a video file input?

I am done with taking a video as input. Now I have decided to extract frames from it, so I used MediaMetadataRetriever(). After using it I got only the first frame. Can anybody suggest a remedy?
String STR = (String) Environment.getExternalStorageDirectory().getAbsolutePath();
MediaMetadataRetriever retriever = new MediaMetadataRetriever();
retriever.setDataSource(STR+"/vd.3gpp");
img = (ImageView) findViewById(R.id.imageView1);
img.setImageBitmap(retriever.getFrameAtTime(1000,MediaMetadataRetriever.OPTION_NEXT_SYNC));
Thanks in advance.
At this point:
img.setImageBitmap(retriever.getFrameAtTime(1000,MediaMetadataRetriever.OPTION_NEXT_SYNC));
You are retrieving exactly one frame and setting exactly one bitmap.
What you need is a loop, like:
long videoLengthUs = /* get the video length in microseconds from somewhere */;
for (long timeUs = 0; timeUs < videoLengthUs; timeUs += 1000000) // step one second at a time
{
    img.setImageBitmap(retriever.getFrameAtTime(timeUs, MediaMetadataRetriever.OPTION_NEXT_SYNC));
}
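For completeness, here is a rough sketch that reads the duration from the retriever itself (METADATA_KEY_DURATION is reported in milliseconds, while getFrameAtTime() expects microseconds) and collects the frames into a list instead of repeatedly overwriting one ImageView:

// Sketch: step through the video one second at a time and collect the frames.
MediaMetadataRetriever retriever = new MediaMetadataRetriever();
retriever.setDataSource(STR + "/vd.3gpp");
String durationMs = retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION);
long durationUs = Long.parseLong(durationMs) * 1000; // milliseconds to microseconds
ArrayList<Bitmap> frames = new ArrayList<Bitmap>();
for (long timeUs = 0; timeUs < durationUs; timeUs += 1000000) {
    Bitmap frame = retriever.getFrameAtTime(timeUs, MediaMetadataRetriever.OPTION_NEXT_SYNC);
    if (frame != null) {
        frames.add(frame);
    }
}
retriever.release();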
