App crash when selecting more than 10 images - java

I am writing an Android app in Java with Android Studio. I am facing a problem when picking more than 10 images from the gallery and inserting them into an SQLite database table: inserting the 10th image takes much longer than the previous 9, and when selecting all 10 images from the table the app crashes with these errors:
E/SQLiteCursor: onMove() return false. RequiredPos: 9 WindowStartPos: 9 WindowRowCount: 0(original count of query: 10)
Caused by: android.database.CursorIndexOutOfBoundsException: Index -1 requested, with a size of 10
Code
Picking and inserting images code
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == 1 && resultCode == RESULT_OK) {
        try {
            ClipData clipData = data.getClipData();
            if (clipData == null)
                return;
            for (int i = 0; i < clipData.getItemCount(); i++) {
                Bitmap bitmap = MediaStore.Images.Media.getBitmap(getContentResolver(), clipData.getItemAt(i).getUri());
                byte[] myImg = getBitmapAsByteArray(bitmap);
                DatabaseHelper databaseHelper = new DatabaseHelper(this);
                SQLiteDatabase db = databaseHelper.getWritableDatabase();
                ContentValues contentValues = new ContentValues();
                contentValues.put("myImage", myImg);
                db.insert("myTable", null, contentValues);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
getBitmapAsByteArray()
private static byte[] getBitmapAsByteArray(Bitmap bitmap) {
    ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
    // Note: the quality argument is ignored for PNG, which is lossless.
    bitmap.compress(Bitmap.CompressFormat.PNG, 0, outputStream);
    return outputStream.toByteArray();
}
Selecting images from database
public void showImages(View view) {
    DatabaseHelper databaseHelper = new DatabaseHelper(this);
    SQLiteDatabase db = databaseHelper.getReadableDatabase();
    Cursor cursor = db.rawQuery("select * from myTable", null);
    cursor.moveToFirst();
    Toast.makeText(this, "" + cursor.getCount(), Toast.LENGTH_SHORT).show();
    ArrayList<Bitmap> bitmaps = new ArrayList<>();
    if (cursor.getCount() > 0)
        while (!cursor.isAfterLast()) {
            bitmaps.add(BitmapFactory.decodeByteArray(cursor.getBlob(0), 0, cursor.getBlob(0).length));
            cursor.moveToNext();
        }
    ViewPagerAdapter adapter = new ViewPagerAdapter(this, bitmaps);
    viewPager.setAdapter(adapter);
    viewPager.setCurrentItem(0);
    cursor.close();
}
Any suggestions?

I am not sure about the rest of the code, but I would not store images directly in the database. Images can be huge, and writing and reading multiple images can take a long time.
Instead, as backend developers commonly do, store each image as a file on the filesystem, in the app's private directory (the files are then deleted when the app is uninstalled). Save the images as files and store their path or name in the DB; a sketch of this follows the outline below. Whenever you need an image, just do:
List<String> getImageNamesFromDb()
List<File> getImageFiles(List<String>)
processYourImageFiles(List<File>)
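A minimal sketch of that approach, assuming a hypothetical TEXT column myImagePath on the question's myTable (the helper names and the JPEG quality are also made up):

private String saveImageToPrivateStorage(Bitmap bitmap) throws IOException {
    // getFilesDir() is the app-private directory; its contents are removed on uninstall.
    File imageFile = new File(getFilesDir(), "img_" + System.currentTimeMillis() + ".jpg");
    FileOutputStream out = new FileOutputStream(imageFile);
    bitmap.compress(Bitmap.CompressFormat.JPEG, 90, out);
    out.close();
    return imageFile.getAbsolutePath();
}

private void insertImagePath(SQLiteDatabase db, String path) {
    ContentValues values = new ContentValues();
    values.put("myImagePath", path); // store only the path, not the image bytes
    db.insert("myTable", null, values);
}

// Reading back later: Bitmap bmp = BitmapFactory.decodeFile(pathFromDb);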

It is an index-out-of-bounds exception.
Your loop moves the cursor to a negative position: you are accessing a row at index -1, which does not exist. Check the bounds handling in your loop.
Please also post the full code of your Java file; it might help in finding more errors.
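For reference, a cursor loop written like the sketch below can never sit at index -1 (this is the usual idiom, not the asker's code):

Cursor cursor = db.rawQuery("select * from myTable", null);
try {
    // moveToNext() returns false once past the last row,
    // so the position never goes out of range inside the loop.
    while (cursor.moveToNext()) {
        byte[] blob = cursor.getBlob(0);
        // ... decode/use the row ...
    }
} finally {
    cursor.close();
}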

Related

Blob row too big to fit in Android

How can I limit the size of an image when saving it to SQLite? I get the error below on retrieval, which I think means the query could not fetch the large blob image from SQLite. I tried putting a limit of 500 in the query, like this: SELECT id,cash_card,hh_number,cc_image FROM CgList LIMIT 500, but the result is the same; it crashes my application.
In short, is there any way to reduce the file size of the image when inserting it into the SQLite database?
Error
android.database.sqlite.SQLiteBlobTooBigException: Row too big to fit
into CursorWindow requiredPos=0, totalRows=4
Insert data and blob
public void insertData(String cash_card, String hh_number, String series_number, byte[] cc_image, byte[] id_image) {
    SQLiteDatabase database = getWritableDatabase();
    String sql = "INSERT INTO CgList VALUES (NULL, ?, ?, ?, ?, ?)";
    SQLiteStatement statement = database.compileStatement(sql);
    statement.clearBindings();
    statement.bindString(1, cash_card);
    statement.bindString(2, hh_number);
    statement.bindString(3, series_number);
    statement.bindBlob(4, cc_image);
    statement.bindBlob(5, id_image);
    statement.executeInsert();
}
Getting data
Cursor cursor = ScannedDetails.sqLiteHelper.getData("SELECT id,cash_card,hh_number,cc_image FROM CgList");
list.clear();
while (cursor.moveToNext()) { // the error is here
    int id = cursor.getInt(0);
    String name = cursor.getString(1);
    String price = cursor.getString(2);
    byte[] image = cursor.getBlob(3);
    list.add(new Inventory(name, price, image, id));
}
adapter.notifyDataSetChanged();
When I click the button to save to SQLite
btnSubmit.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        sqLiteHelper.insertData(
                edtCashCard.getText().toString().trim(),
                edtHhnumber.getText().toString().trim(),
                edtSeriesno.getText().toString().trim(),
                imageViewToByte(mPreviewCashcard),
                imageViewToByte(mPreview4PsId)
        );
    }
});
Converting Image To Byte
public static byte[] imageViewToByte(ImageView image) {
    Bitmap bitmap = ((BitmapDrawable) image.getDrawable()).getBitmap();
    ByteArrayOutputStream stream = new ByteArrayOutputStream();
    bitmap.compress(Bitmap.CompressFormat.PNG, 100, stream);
    byte[] byteArray = stream.toByteArray();
    return byteArray;
}
Updated
I think the problem is that capturing an image produces a large file. I need this flow because after capturing the image I want to crop it for some purpose, but I want to display the actual capture, not the cropped image, in another activity.
private void pickCamera() {
    ContentValues values = new ContentValues();
    values.put(MediaStore.Images.Media.TITLE, "NewPic");
    values.put(MediaStore.Images.Media.DESCRIPTION, "Image to Text");
    image_uri = getContentResolver().insert(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, values);
    Intent cameraIntent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
    cameraIntent.putExtra(MediaStore.EXTRA_OUTPUT, image_uri);
    startActivityForResult(cameraIntent, IMAGE_PICK_CAMERA_CODE);
}
OnActivityResult
@Override
protected void onActivityResult(int requestCode, int resultCode, @Nullable Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (resultCode == RESULT_OK) {
        if (requestCode == IMAGE_PICK_GALLER_CODE) {
            CropImage.activity(data.getData()).setGuidelines(CropImageView.Guidelines.ON).start(this);
        }
        if (requestCode == IMAGE_PICK_CAMERA_CODE) {
            CropImage.activity(image_uri).setGuidelines(CropImageView.Guidelines.ON).start(this);
        }
    }
    if (requestCode == CropImage.CROP_IMAGE_ACTIVITY_REQUEST_CODE) {
        CropImage.ActivityResult result = CropImage.getActivityResult(data);
        if (resultCode == RESULT_OK) {
            Uri resultUri = result.getUri();
            resultUri.getPath();
            mPreviewIv.setImageURI(resultUri);
            BitmapDrawable bitmapDrawable = (BitmapDrawable) mPreviewIv.getDrawable();
            Bitmap bitmap = bitmapDrawable.getBitmap();
            TextRecognizer recognizer = new TextRecognizer.Builder(getApplicationContext()).build();
            if (!recognizer.isOperational()) {
                Toast.makeText(this, "Error", Toast.LENGTH_SHORT).show();
            } else {
                Frame frame = new Frame.Builder().setBitmap(bitmap).build();
                SparseArray<TextBlock> items = recognizer.detect(frame);
                StringBuilder sb = new StringBuilder();
                for (int i = 0; i < items.size(); i++) {
                    TextBlock myItem = items.valueAt(i);
                    sb.append(myItem.getValue());
                    sb.append("\n");
                }
                Intent i = new Intent(MainActivity.this, ScannedDetails.class);
                // camera
                i.putExtra("CashCardImage", image_uri.toString()); // This data passes to another activity
                startActivity(i);
            }
        }
    }
}
Retrieve to another Activity
Bundle extras = getIntent().getExtras();
String resultUri = extras.getString("CashCardImage");
Uri myUri = Uri.parse(resultUri);
mPreviewCashCard.setImageURI(myUri);
With the other flow I tried, the size saved to SQLite is not big, just a few KiB, so I think the problem is in the first pickCamera(); but that code is needed for cropping the image.
private void pickCamera() {
    Intent intent = new Intent(ScannedDetails.this, InventoryList.class);
    startActivity(intent);
}
However, I tried putting limit 500 in the query like this SELECT id,cash_card,hh_number,cc_image FROM CgList limit 500
LIMIT restricts the number of rows returned; it does not limit the size of a row.
The issue is that a single row exceeds the capacity of the cache/buffer (CursorWindow) that a Cursor uses. This restriction does not apply when inserting such rows, which is why the insert succeeds.
The typical solution is to not store images but to instead store a reference, such as a file path, from which the image (or other large item) can then be retrieved.
Here's an example that stores images smaller than a set size (100 KB in the example) as blobs and larger ones as files in data/data/myimages, which could be another solution.
It is also possible to break a large image into chunks of data and store/retrieve those; here's an example of doing that, and a rough sketch of the chunked read follows.
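The read side of that chunking idea might look like the sketch below. It is only a sketch: it reuses the question's CgList table and cc_image column, and it relies on SQLite's substr(), which works on blobs and uses 1-based offsets.

byte[] readBlobInChunks(SQLiteDatabase db, long rowId) {
    ByteArrayOutputStream whole = new ByteArrayOutputStream();
    final int CHUNK = 512 * 1024; // well under the typical ~2 MB CursorWindow limit
    int offset = 1;               // substr() offsets start at 1
    while (true) {
        Cursor c = db.rawQuery(
                "SELECT substr(cc_image, ?, ?) FROM CgList WHERE id = ?",
                new String[]{String.valueOf(offset), String.valueOf(CHUNK), String.valueOf(rowId)});
        try {
            if (!c.moveToFirst()) break;
            byte[] part = c.getBlob(0);
            if (part == null || part.length == 0) break;
            whole.write(part, 0, part.length);
            if (part.length < CHUNK) break; // last slice reached
            offset += CHUNK;
        } finally {
            c.close();
        }
    }
    return whole.toByteArray();
}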
Don't store an image directly in your database. Instead, save the image to a file or to a webserver, and store the file path or URL in the database.
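If the images must stay in the database, here is a hedged sketch of shrinking them before insert, which is what the question literally asks for (the dimension cap and JPEG quality are arbitrary example values):

public static byte[] shrinkForDb(Bitmap source) {
    int maxDim = 800; // example cap; tune to your needs
    float scale = Math.min(1f, (float) maxDim / Math.max(source.getWidth(), source.getHeight()));
    Bitmap scaled = Bitmap.createScaledBitmap(
            source,
            Math.round(source.getWidth() * scale),
            Math.round(source.getHeight() * scale),
            true);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    // JPEG below quality 100 compresses photos far better than PNG does.
    scaled.compress(Bitmap.CompressFormat.JPEG, 80, out);
    return out.toByteArray();
}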

Adding an image to an SQLite database

I've been struggling with this for a while now, and I want to be able to store an image in my database. I know storing an image directly in a database is bad practice, but I simply don't know enough to do it another way.
So I'm currently stuck on a few issues.
Firstly, I'm not even sure my path is correct; I want to get a drawable file and store it in my database, and there must be an easier way than a path straight from the C drive, right?
Secondly, I don't know much about this, but I need to convert my file to a bitmap so that it can be converted to a byte array? I'm not sure how to do this exactly.
I've tried several things and have written this code out about 10 times in different ways without getting anywhere. Thanks all for your help in advance.
public void insertAvatar(String Email, byte[] head) {
    SQLiteDatabase db = this.getWritableDatabase();
    ContentValues contentValues = new ContentValues();
    String sql = "INSERT INTO Avatar VALUES (?, ?)";
    File head = new File("C:\\Users\\PC\\Desktop\\capaa\\src\\main\\res\\drawable\\head2.png");
    Bitmap imageToStoreBitmap = head; // doesn't work as my file isn't a bitmap yet
    objectByteArrayOutputStream = new ByteArrayOutputStream();
    imageToStoreBitmap.compress(Bitmap.CompressFormat.JPEG, 100, objectByteArrayOutputStream);
    imageInBytes = objectByteArrayOutputStream.toByteArray();
    contentValues.put("Email", Email);
    contentValues.put("head", imageInBytes);
    long checkIfQueryRuns = db.insert("Avatar", null, contentValues);
}
You need to use a BLOB to store images in your SQLite database.
Create a table to store the images:
CREATE TABLE " + DB_TABLE_NAME + "("+
KEY_NAME + " TEXT," +
KEY_IMAGE + " BLOB);";
To store an image in the table
public void addImage(String name, byte[] image) throws SQLiteException {
    ContentValues cv = new ContentValues();
    cv.put(KEY_NAME, name);
    cv.put(KEY_IMAGE, image);
    database.insert(DB_TABLE_NAME, null, cv);
}
As you can see, before inserting the image into the table you need to convert the bitmap to a byte array.
// To convert from a bitmap to a byte array
public static byte[] getImageBytes(Bitmap bitmap) {
    ByteArrayOutputStream stream = new ByteArrayOutputStream();
    bitmap.compress(Bitmap.CompressFormat.PNG, 0, stream);
    return stream.toByteArray();
}
To retrieve an image from the database:
//search your image using the key and get the cursor
............
byte[] image = cursor.getBlob(1);
............
As you can see, the image is returned as a byte array. Now you can convert this byte array to a bitmap to use in your app.
// To convert from a byte array to a bitmap
public static Bitmap getImage(byte[] image) {
    return BitmapFactory.decodeByteArray(image, 0, image.length);
}
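For the question's specific stumbling block: a drawable bundled with the app is decoded by resource id, not by a desktop file path. A hypothetical usage of the helpers above (context stands for any available Context, and res/drawable/head2.png is the file from the question):

// A packaged drawable is decoded by resource id, not by a path on your development machine.
Bitmap head = BitmapFactory.decodeResource(context.getResources(), R.drawable.head2);
byte[] headBytes = getImageBytes(head); // helper from above
addImage("head2", headBytes);           // helper from above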
Having written the answer, I am myself not a big fan of storing images in the database. I am not sure what your need is for storing the images, but you can check the following libraries for handling images in your app:
https://github.com/bumptech/glide/
https://square.github.io/picasso/
https://github.com/nostra13/Android-Universal-Image-Loader

Open CV Face Recognition not accurate

In my app I'm trying to do face recognition on a specific image using OpenCV. First I train on one image, and if I then run recognition on that same image it successfully recognizes the trained face. However, when I use another picture of the same person, recognition does not work. It only works on the trained image, so my question is: how do I fix this?
Update:
What I want is for the user to select an image of a person from storage; after training on that selected image, I want to fetch all images from storage that match the face in the trained image.
Here is my activity class:
public class MainActivity extends AppCompatActivity {
private Mat rgba,gray;
private CascadeClassifier classifier;
private MatOfRect faces;
private ArrayList<Mat> images;
private ArrayList<String> imagesLabels;
private Storage local;
ImageView mimage;
Button prev,next;
ArrayList<Integer> imgs;
private int label[] = new int[1];
private double predict[] = new double[1];
Integer pos = 0;
private String[] uniqueLabels;
FaceRecognizer recognize;
private boolean trainfaces() {
if(images.isEmpty())
return false;
List<Mat> imagesMatrix = new ArrayList<>();
for (int i = 0; i < images.size(); i++)
imagesMatrix.add(images.get(i));
Set<String> uniqueLabelsSet = new HashSet<>(imagesLabels); // Get all unique labels
uniqueLabels = uniqueLabelsSet.toArray(new String[uniqueLabelsSet.size()]); // Convert to String array, so we can read the values from the indices
int[] classesNumbers = new int[uniqueLabels.length];
for (int i = 0; i < classesNumbers.length; i++)
classesNumbers[i] = i + 1; // Create incrementing list for each unique label starting at 1
int[] classes = new int[imagesLabels.size()];
for (int i = 0; i < imagesLabels.size(); i++) {
String label = imagesLabels.get(i);
for (int j = 0; j < uniqueLabels.length; j++) {
if (label.equals(uniqueLabels[j])) {
classes[i] = classesNumbers[j]; // Insert corresponding number
break;
}
}
}
Mat vectorClasses = new Mat(classes.length, 1, CvType.CV_32SC1); // CV_32S == int
vectorClasses.put(0, 0, classes); // Copy int array into a vector
recognize = LBPHFaceRecognizer.create(3,8,8,8,200);
recognize.train(imagesMatrix, vectorClasses);
if(SaveImage())
return true;
return false;
}
public void cropedImages(Mat mat) {
Rect rect_Crop=null;
for(Rect face: faces.toArray()) {
rect_Crop = new Rect(face.x, face.y, face.width, face.height);
}
Mat croped = new Mat(mat, rect_Crop);
images.add(croped);
}
public boolean SaveImage() {
File path = new File(Environment.getExternalStorageDirectory(), "TrainedData");
path.mkdirs();
String filename = "lbph_trained_data.xml";
File file = new File(path, filename);
recognize.save(file.toString());
if(file.exists())
return true;
return false;
}
private BaseLoaderCallback callbackLoader = new BaseLoaderCallback(this) {
@Override
public void onManagerConnected(int status) {
switch(status) {
case BaseLoaderCallback.SUCCESS:
faces = new MatOfRect();
//reset
images = new ArrayList<Mat>();
imagesLabels = new ArrayList<String>();
local.putListMat("images", images);
local.putListString("imagesLabels", imagesLabels);
images = local.getListMat("images");
imagesLabels = local.getListString("imagesLabels");
break;
default:
super.onManagerConnected(status);
break;
}
}
};
@Override
protected void onResume() {
super.onResume();
if(OpenCVLoader.initDebug()) {
Log.i("hmm", "System Library Loaded Successfully");
callbackLoader.onManagerConnected(BaseLoaderCallback.SUCCESS);
} else {
Log.i("hmm", "Unable To Load System Library");
OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION, this, callbackLoader);
}
}
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
prev = findViewById(R.id.btprev);
next = findViewById(R.id.btnext);
mimage = findViewById(R.id.mimage);
local = new Storage(this);
imgs = new ArrayList();
imgs.add(R.drawable.jonc);
imgs.add(R.drawable.jonc2);
imgs.add(R.drawable.randy1);
imgs.add(R.drawable.randy2);
imgs.add(R.drawable.imgone);
imgs.add(R.drawable.imagetwo);
mimage.setBackgroundResource(imgs.get(pos));
prev.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
if(pos!=0){
pos--;
mimage.setBackgroundResource(imgs.get(pos));
}
}
});
next.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
if(pos<5){
pos++;
mimage.setBackgroundResource(imgs.get(pos));
}
}
});
Button train = (Button)findViewById(R.id.btn_train);
train.setOnClickListener(new View.OnClickListener() {
@RequiresApi(api = Build.VERSION_CODES.KITKAT)
@Override
public void onClick(View view) {
rgba = new Mat();
gray = new Mat();
Mat mGrayTmp = new Mat();
Mat mRgbaTmp = new Mat();
classifier = FileUtils.loadXMLS(MainActivity.this);
Bitmap icon = BitmapFactory.decodeResource(getResources(),
imgs.get(pos));
Bitmap bmp32 = icon.copy(Bitmap.Config.ARGB_8888, true);
Utils.bitmapToMat(bmp32, mGrayTmp);
Utils.bitmapToMat(bmp32, mRgbaTmp);
Imgproc.cvtColor(mGrayTmp, mGrayTmp, Imgproc.COLOR_BGR2GRAY);
Imgproc.cvtColor(mRgbaTmp, mRgbaTmp, Imgproc.COLOR_BGRA2RGBA);
/*Core.transpose(mGrayTmp, mGrayTmp); // Rotate image
Core.flip(mGrayTmp, mGrayTmp, -1); // Flip along both*/
gray = mGrayTmp;
rgba = mRgbaTmp;
Imgproc.resize(gray, gray, new Size(200,200.0f/ ((float)gray.width()/ (float)gray.height())));
if(gray.total() == 0)
Toast.makeText(getApplicationContext(), "Can't Detect Faces", Toast.LENGTH_SHORT).show();
classifier.detectMultiScale(gray,faces,1.1,3,0|CASCADE_SCALE_IMAGE, new Size(30,30));
if(!faces.empty()) {
if(faces.toArray().length > 1)
Toast.makeText(getApplicationContext(), "Mutliple Faces Are not allowed", Toast.LENGTH_SHORT).show();
else {
if(gray.total() == 0) {
Log.i("hmm", "Empty gray image");
return;
}
cropedImages(gray);
imagesLabels.add("Baby");
Toast.makeText(getApplicationContext(), "Picture Set As Baby", Toast.LENGTH_LONG).show();
if (images != null && imagesLabels != null) {
local.putListMat("images", images);
local.putListString("imagesLabels", imagesLabels);
Log.i("hmm", "Images have been saved");
if(trainfaces()) {
images.clear();
imagesLabels.clear();
}
}
}
}else {
/* Bitmap bmp = null;
Mat tmp = new Mat(250, 250, CvType.CV_8U, new Scalar(4));
try {
//Imgproc.cvtColor(seedsImage, tmp, Imgproc.COLOR_RGB2BGRA);
Imgproc.cvtColor(gray, tmp, Imgproc.COLOR_GRAY2RGBA, 4);
bmp = Bitmap.createBitmap(tmp.cols(), tmp.rows(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(tmp, bmp);
} catch (CvException e) {
Log.d("Exception", e.getMessage());
}*/
/* mimage.setImageBitmap(bmp);*/
Toast.makeText(getApplicationContext(), "Unknown Face", Toast.LENGTH_SHORT).show();
}
}
});
Button recognize = (Button)findViewById(R.id.btn_recognize);
recognize.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
if(loadData())
Log.i("hmm", "Trained data loaded successfully");
rgba = new Mat();
gray = new Mat();
faces = new MatOfRect();
Mat mGrayTmp = new Mat();
Mat mRgbaTmp = new Mat();
classifier = FileUtils.loadXMLS(MainActivity.this);
Bitmap icon = BitmapFactory.decodeResource(getResources(),
imgs.get(pos));
Bitmap bmp32 = icon.copy(Bitmap.Config.ARGB_8888, true);
Utils.bitmapToMat(bmp32, mGrayTmp);
Utils.bitmapToMat(bmp32, mRgbaTmp);
Imgproc.cvtColor(mGrayTmp, mGrayTmp, Imgproc.COLOR_BGR2GRAY);
Imgproc.cvtColor(mRgbaTmp, mRgbaTmp, Imgproc.COLOR_BGRA2RGBA);
/*Core.transpose(mGrayTmp, mGrayTmp); // Rotate image
Core.flip(mGrayTmp, mGrayTmp, -1); // Flip along both*/
gray = mGrayTmp;
rgba = mRgbaTmp;
Imgproc.resize(gray, gray, new Size(200,200.0f/ ((float)gray.width()/ (float)gray.height())));
if(gray.total() == 0)
Toast.makeText(getApplicationContext(), "Can't Detect Faces", Toast.LENGTH_SHORT).show();
classifier.detectMultiScale(gray,faces,1.1,3,0|CASCADE_SCALE_IMAGE, new Size(30,30));
if(!faces.empty()) {
if(faces.toArray().length > 1)
Toast.makeText(getApplicationContext(), "Mutliple Faces Are not allowed", Toast.LENGTH_SHORT).show();
else {
if(gray.total() == 0) {
Log.i("hmm", "Empty gray image");
return;
}
recognizeImage(gray);
}
}else {
Toast.makeText(getApplicationContext(), "Unknown Face", Toast.LENGTH_SHORT).show();
}
}
});
}
private void recognizeImage(Mat mat) {
Rect rect_Crop=null;
for(Rect face: faces.toArray()) {
rect_Crop = new Rect(face.x, face.y, face.width, face.height);
}
Mat croped = new Mat(mat, rect_Crop);
recognize.predict(croped, label, predict);
int indice = (int)predict[0];
Log.i("hmmcheck:",String.valueOf(label[0])+" : "+String.valueOf(indice));
if(label[0] != -1 && indice < 125)
Toast.makeText(getApplicationContext(), "Welcome "+uniqueLabels[label[0]-1]+"", Toast.LENGTH_SHORT).show();
else
Toast.makeText(getApplicationContext(), "You're not the right person", Toast.LENGTH_SHORT).show();
}
private boolean loadData() {
String filename = FileUtils.loadTrained();
if(filename.isEmpty())
return false;
else
{
recognize.read(filename);
return true;
}
}
}
My File Utils Class:
public class FileUtils {
private static String TAG = FileUtils.class.getSimpleName();
private static boolean loadFile(Context context, String cascadeName) {
InputStream inp = null;
OutputStream out = null;
boolean completed = false;
try {
inp = context.getResources().getAssets().open(cascadeName);
File outFile = new File(context.getCacheDir(), cascadeName);
out = new FileOutputStream(outFile);
byte[] buffer = new byte[4096];
int bytesread;
while((bytesread = inp.read(buffer)) != -1) {
out.write(buffer, 0, bytesread);
}
completed = true;
inp.close();
out.flush();
out.close();
} catch (IOException e) {
Log.i(TAG, "Unable to load cascade file" + e);
}
return completed;
}
public static CascadeClassifier loadXMLS(Activity activity) {
InputStream is = activity.getResources().openRawResource(R.raw.lbpcascade_frontalface);
File cascadeDir = activity.getDir("cascade", Context.MODE_PRIVATE);
File mCascadeFile = new File(cascadeDir, "lbpcascade_frontalface_improved.xml");
FileOutputStream os = null;
try {
os = new FileOutputStream(mCascadeFile);
byte[] buffer = new byte[4096];
int bytesRead;
while ((bytesRead = is.read(buffer)) != -1) {
os.write(buffer, 0, bytesRead);
}
is.close();
os.close();
} catch (FileNotFoundException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
return new CascadeClassifier(mCascadeFile.getAbsolutePath());
}
public static String loadTrained() {
File file = new File(Environment.getExternalStorageDirectory(), "TrainedData/lbph_trained_data.xml");
return file.toString();
}
}
These are the images I'm trying to compare; the face belongs to the same person, yet recognition still does not match them!
Update
According to the new edit in the question, you need a way to identify new people on the fly, whose photos might not have been available during the training phase of the model. This task is called few-shot learning. It is similar to the requirements of intelligence/police agencies, who find their targets using CCTV footage: since there are usually not enough images of a specific target available for training, they use models such as FaceNet. I really suggest reading the paper, but I explain a few of its highlights here:
Generally, the last layer of a classifier is an n*1 vector with n-1 of the elements almost equal to zero and one element close to 1. The element close to 1 determines the classifier's prediction of the input's label.
The authors figured out that if you train a classifier network with a specific loss function on a huge dataset of faces, you can use the second-to-last layer's output as a representation of any face, irrespective of whether it was in the training set; the authors call this vector the face embedding.
The previous result means that with a very well trained FaceNet model, you can summarise any face as a vector. The very interesting attribute of this approach is that the vectors of a specific person's face at different angles/positions/states are close to each other in euclidean space (this property is enforced by the loss function the authors chose).
In summary, you have a model that takes faces as input and returns vectors. Vectors that are close to each other very likely belong to the same person (to check that, you can use KNN or just simple euclidean distance, as in the toy sketch below).
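A toy illustration of that comparison step (the distance threshold of 1.0 is a made-up example value and must be tuned on real data):

// Two face embeddings match if their euclidean distance is below a tuned threshold.
static double euclidean(float[] a, float[] b) {
    double sum = 0;
    for (int i = 0; i < a.length; i++) {
        double d = a[i] - b[i];
        sum += d * d;
    }
    return Math.sqrt(sum);
}

static boolean samePerson(float[] emb1, float[] emb2) {
    return euclidean(emb1, emb2) < 1.0; // example threshold
}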
One implementation of FaceNet can be found here. I suggest you try to run it on your computer to get to know what you are actually dealing with. After that, it might be best to do the following:
Transform the FaceNet model mentioned in the repository to its tflite version (this blog post might help).
For each photo submitted by the user, use the Face API to extract the face(s).
Use the minified model in your app to get the face embedding of each extracted face.
Process all the images in the user's gallery, getting the vectors for the faces in the photos.
Then compare each vector found in step 4 with each vector found in step 3 to get the matches.
Original Answer
You have come across one of the most prevalent challenges in machine learning: overfitting. Face detection and recognition is a huge research area in its own right, and almost all reasonably accurate models use some kind of deep learning. Note that even detecting a face accurately is not as easy as it seems; however, since you are on Android, you can use the Face API for this task (other, more advanced techniques such as MTCNN are too slow/difficult to deploy on a handset). It has been shown that just feeding the model a face photo with a lot of background noise or multiple people in it does not work, so you really cannot skip this step.
After getting a nicely trimmed face of the candidate target out of the background, you need to overcome the challenge of recognising the detected face. Again, all competent models, to the best of my knowledge, use some sort of deep learning/convolutional neural networks. Using them on a mobile phone is a challenge, but thanks to TensorFlow Lite you can minify them and run them within your app. A project about face recognition on Android phones that I worked on is here, which you can check.
Keep in mind that any good model should be trained on numerous instances of labelled data; however, there are a plethora of models already trained on large datasets of faces or other image-recognition tasks. To tweak them and use their existing knowledge, we can employ transfer learning; for a quick start on object detection and transfer learning closely related to your case, check this blog post.
Overall, you have to get numerous instances of the faces you want to detect, plus numerous face pictures of people you don't care about; then train a model based on the above-mentioned resources, use TensorFlow Lite to decrease its size, and embed it within your app. For each frame, you then call the Android Face API and feed the (probably) detected face into the model to identify the person.
Depending on your tolerance for delay, the training-set size, and the number of targets, you can get various results; however, 90%+ accuracy is easily achievable if you have only a few target people.
If I understand correctly, you're training the classifier with a single image. In that case, that one specific image is all the classifier will ever be able to recognise. You would need a noticeably bigger training set of pictures showing the same person, something like 5 or 10 different images at the very least.
1) Change the threshold value when initializing the LBPH recognizer, e.g. LBPHFaceRecognizer(1, 8, 8, 8, 100).
2) Train each face with at least 2-3 pictures, since these recognizers mainly work by comparison.
3) Set an accuracy threshold while recognizing. Do something like this:
// Predicting result
// LoadData is a static class that contains the trained recognizer
// _result is the gray frame image captured by the camera
LBPHFaceRecognizer.PredictionResult ER = LoadData.recog.Predict(_result);
int temp_result = ER.Label;
imageBox1.SizeMode = PictureBoxSizeMode.StretchImage;
imageBox1.Image = _result.Mat;
// Displaying the predicted result on screen
// LBPH returns -1 if the face is not recognized
if ((temp_result != -1) && (ER.Distance < 55)) {
    // I get the best accuracy at 55; try different values to determine the best results
    // Do something with the detected image
}

How to get image from gallery in a .jpg file format?

I am trying to get an image from the gallery. It gives me the image as a bitmap, but I want the image as a .jpg file so that I can save the file name in my database.
I have followed this tutorial :
http://www.theappguruz.com/blog/android-take-photo-camera-gallery-code-sample
gallery image selected code:
@SuppressWarnings("deprecation")
private void onSelectFromGalleryResult(Intent data) {
    Bitmap bm = null;
    if (data != null) {
        try {
            bm = MediaStore.Images.Media.getBitmap(getApplicationContext().getContentResolver(), data.getData());
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    Uri selectedImage = data.getData();
    String[] filePath = {MediaStore.Images.Media.DATA};
    Cursor c = getContentResolver().query(selectedImage, filePath, null, null, null);
    c.moveToFirst();
    int columnIndex = c.getColumnIndex(filePath[0]);
    String picturePath = c.getString(columnIndex);
    c.close();
    File file = new File(picturePath); // error line
    mProfileImage = file;
    profile_image.setImageBitmap(bm);
}
I tried this, but I am getting a null pointer exception on the file.
Exception :
Caused by: java.lang.NullPointerException: Attempt to invoke virtual method 'char[] java.lang.String.toCharArray()' on a null object reference
Also, I don't want this newly created file to be saved in external storage; it should be a temporary file. How can I do this?
Thank you..
The good news is you're a lot closer to done than you think!
Bitmap bm = null;
if (data != null) {
    try {
        bm = MediaStore.Images.Media.getBitmap(getApplicationContext().getContentResolver(), data.getData());
    } catch (IOException e) {
        e.printStackTrace();
    }
}
At this point, if bm != null, you have a Bitmap object. Bitmap is Android's general-purpose in-memory image object, holding decoded pixels, so to get a .jpg you just have to encode it and write it to a file. You want to write it to a temporary file, so I'd do something like this:
File outputDir = context.getCacheDir(); // Activity context
File outputFile = File.createTempFile("prefix", ".extension", outputDir); // per the createTempFile API, the suffix includes the dot
Regardless, at this point it's pretty easy to write a Bitmap to a file.
ByteArrayOutputStream stream = new ByteArrayOutputStream();
bm.compress(Bitmap.CompressFormat.JPEG, 100, stream); //replace 100 with desired quality percentage.
byte[] byteArray = stream.toByteArray();
Now you have a byte array. I'll leave writing that to a file to you.
If you want the temporary file to go away, see here for more info: https://developer.android.com/reference/java/io/File.html#deleteOnExit()
Bitmap bm = null;
if (data != null) {
    try {
        bm = MediaStore.Images.Media.getBitmap(getApplicationContext().getContentResolver(), data.getData());
    } catch (IOException e) {
        e.printStackTrace();
    }
}
if (bm != null) { // sanity check
    File outputDir = context.getCacheDir(); // Activity context
    File outputFile = File.createTempFile("image", ".jpg", outputDir); // suffix includes the dot; createTempFile throws IOException
    FileOutputStream stream = new FileOutputStream(outputFile, false); // false so we don't append an image to another image. That would be weird.
    // This line actually writes the bitmap to the stream. With a ByteArrayOutputStream you end up with a byte array; with a FileOutputStream you end up with a file.
    bm.compress(Bitmap.CompressFormat.JPEG, 100, stream);
    stream.close(); // cleanup
}
I hope that helps!
It looks like your picturePath is null; that is why you cannot create the file. Try adding this code fragment to get the path of the selected image:
private String getRealPathFromURI(Uri uri) {
    String[] projection = { MediaStore.Images.Media.DATA };
    @SuppressWarnings("deprecation")
    Cursor cursor = managedQuery(uri, projection, null, null, null);
    int column_index = cursor.getColumnIndexOrThrow(MediaStore.Images.Media.DATA);
    cursor.moveToFirst();
    return cursor.getString(column_index);
}
After that, modify your onSelectFromGalleryResult: remove/disable the line String[] filePath = {MediaStore.Images.Media.DATA}; and what follows it, and replace it with the code below.
String photoPath = getRealPathFromURI(selectedImage); // selectedImage is already a Uri
mProfileImage = new File(photoPath);
// check if you get something like this - file:///mnt/sdcard/yourselectedimage.png
Log.i("FilePath", mProfileImage.getAbsolutePath());
if (mProfileImage.exists()) {
    // The file exists.
    // Do something here (display the image using an ImageView / convert the image into a string)
}
Question: what is the reason you need to convert it to .jpg format? Could it be .gif, .png, etc.?

How to check image size less than 100 KB in Android

I am trying to get an image from the gallery and set it on an ImageView, which works fine. Now I want to check the size of the selected image in KB so I can validate it before uploading.
Can anyone suggest how to check whether the selected image is less than 100 KB?
Here is my code for selecting the image and setting it.
Choosing the image using an Intent
Intent iv = new Intent(
Intent.ACTION_PICK,
android.provider.MediaStore.Images.Media.EXTERNAL_CONTENT_URI);
startActivityForResult(iv, RESULT_LOAD_IMAGE);
and the onActivityResult code:
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == RESULT_LOAD_IMAGE && resultCode == RESULT_OK && null != data) {
        Uri selectedImage = data.getData();
        String[] filePathColumn = { MediaStore.Images.Media.DATA };
        Cursor cursor = getContentResolver().query(selectedImage, filePathColumn, null, null, null);
        cursor.moveToFirst();
        int columnIndex = cursor.getColumnIndex(filePathColumn[0]);
        picturePath = cursor.getString(columnIndex);
        cursor.close();
        Bitmap bmp = BitmapFactory.decodeFile(picturePath);
        ivLogo.setImageBitmap(bmp);
        uploadNewPic();
    }
}
To check whether the size is less than 100 KB, you first need to measure the image size. There are a few ways to get the size of a bitmap.
Method 1
Bitmap bitmapOrg = BitmapFactory.decodeResource(getResources(), R.drawable.ic_launcher);
Bitmap bitmap = bitmapOrg;
ByteArrayOutputStream stream = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.JPEG, 100, stream);
byte[] imageInByte = stream.toByteArray();
long lengthbmp = imageInByte.length;
Method 2
File file = new File("/sdcard/Your_file");
long length = file.length() / 1024; // Size in KB
For further reading, see http://developer.android.com/reference/android/graphics/Bitmap.html#getByteCount%28%29
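Note that the linked getByteCount() reports the decoded in-memory size of a bitmap, which is usually much larger than the encoded file. A quick sketch of the difference (picturePath as in the question's code):

Bitmap bmp = BitmapFactory.decodeFile(picturePath);
int inMemoryBytes = bmp.getByteCount(); // decoded size: width * height * bytes per pixel

ByteArrayOutputStream stream = new ByteArrayOutputStream();
bmp.compress(Bitmap.CompressFormat.JPEG, 100, stream);
int encodedBytes = stream.size(); // closer to the size on disk or over the network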
Get the file size as
File img = new File(picturePath);
long length = img.length();
It will return the size in bytes; you can convert bytes into KB.
ArrayList<String> filePaths = new ArrayList<>();
ArrayList<String> newFilePath = new ArrayList<>(); //for storing file path which size is less than 100 KB
if (imagePaths != null) {
filePaths.addAll(imagePaths);
for (int i = 0; i < filePaths.size(); i++) {
File file = new File(filePaths.get(i));
int file_size = Integer.parseInt(String.valueOf(file.length() / 1024)); //calculate size of image in KB
if (file_size < 100)
newFilePath.add(filePaths.get(i)); //if file size less than 100 KB then add to newFilePath ArrayList
}
}
Here imagePaths stores the paths of all the images we selected. If imagePaths is not null, all of its paths are added to filePaths. You can use this code for document-type files as well.
Just take the URI from the intent and get the size of any file:
uri = data.getData();
Cursor returnCursor = getContentResolver().query(uri, null, null, null, null);
int nameIndex = returnCursor.getColumnIndex(OpenableColumns.DISPLAY_NAME);
int sizeIndex = returnCursor.getColumnIndex(OpenableColumns.SIZE);
returnCursor.moveToFirst();
Log.e("TAG", "Name:" + returnCursor.getString(nameIndex));
Log.e("TAG","Size: "+Long.toString(returnCursor.getLong(sizeIndex)));
It gives the size in bytes (100 KB is 102,400 bytes at 1 KB = 1024 bytes, or 100,000 bytes in decimal terms). I think this will help you.
