I'm trying to capture an image from the JavaCameraView and load the captured image into another activity, where it is supposed to be processed (Hough circles).
private void takePhoto(final Mat rgba) {
    // Determine the path and metadata for the photo.
    final long currentTimeMillis = System.currentTimeMillis();
    final String appName = getString(R.string.app_name);
    final String galleryPath =
            Environment.getExternalStoragePublicDirectory(
                    Environment.DIRECTORY_PICTURES).toString();
    final String albumPath = galleryPath + File.separator + appName;
    final String photoPath = albumPath + File.separator +
            currentTimeMillis + LabActivity.PHOTO_FILE_EXTENSION;

    final ContentValues values = new ContentValues();
    values.put(MediaStore.MediaColumns.DATA, photoPath);
    values.put(Images.Media.MIME_TYPE, LabActivity.PHOTO_MIME_TYPE);
    values.put(Images.Media.TITLE, appName);
    values.put(Images.Media.DESCRIPTION, appName);
    values.put(Images.Media.DATE_TAKEN, currentTimeMillis);

    // Ensure that the album directory exists.
    File album = new File(albumPath);
    if (!album.isDirectory() && !album.mkdirs()) {
        Log.e(TAG, "Failed to create album directory at " + albumPath);
        onTakePhotoFailed();
        return;
    }

    /*
    // Try to create the photo.
    Imgproc.cvtColor(rgba, mBgr, Imgproc.COLOR_RGBA2BGR, 3);
    if (!Imgcodecs.imwrite(photoPath, mBgr)) {
        Log.e(TAG, "Failed to save photo to " + photoPath);
        onTakePhotoFailed();
    }
    Log.d(TAG, "Photo saved successfully to " + photoPath);
    */

    Mat grayMat = new Mat();
    Mat cannyEdges = new Mat();
    Mat lines = new Mat();
    Imgproc.cvtColor(rgba, mBgr, Imgproc.COLOR_RGBA2BGR, 3);
    // Converting the image to grayscale
    Imgproc.cvtColor(mBgr, grayMat, Imgproc.COLOR_BGR2GRAY);
    Imgproc.Canny(grayMat, cannyEdges, 10, 100);
    Imgproc.HoughLinesP(cannyEdges, lines, 1, Math.PI / 180, 50, 20, 20);

    Mat houghLines = new Mat();
    houghLines.create(cannyEdges.rows(), cannyEdges.cols(), CvType.CV_8UC1);
    // Drawing the detected lines on the image
    for (int i = 0; i < lines.cols(); i++) {
        double[] points = lines.get(0, i);
        double x1 = points[0];
        double y1 = points[1];
        double x2 = points[2];
        double y2 = points[3];
        Point pt1 = new Point(x1, y1);
        Point pt2 = new Point(x2, y2);
        Imgproc.line(houghLines, pt1, pt2, new Scalar(255, 0, 0), 1);
    }
    // Converting Mat back to Bitmap
    Utils.matToBitmap(houghLines, currentBitmap);
    Log.d(TAG, "Photo saved successfully to " + photoPath);

    // Try to insert the photo into the MediaStore.
    Uri uri;
    try {
        uri = getContentResolver().insert(
                Images.Media.EXTERNAL_CONTENT_URI, values);
    } catch (final Exception e) {
        Log.e(TAG, "Failed to insert photo into MediaStore");
        e.printStackTrace();
        // Since the insertion failed, delete the photo.
        File photo = new File(photoPath);
        if (!photo.delete()) {
            Log.e(TAG, "Failed to delete non-inserted photo");
        }
        onTakePhotoFailed();
        return;
    }

    // Open the photo in LabActivity.
    final Intent intent = new Intent(this, LabActivity.class);
    intent.putExtra(LabActivity.EXTRA_PHOTO_URI, uri);
    intent.putExtra(LabActivity.EXTRA_PHOTO_DATA_PATH, photoPath);
    runOnUiThread(new Runnable() {
        @Override
        public void run() {
            startActivity(intent);
        }
    });
}
The error occurs after I click the capture option.
12-07 00:15:45.420 9205-9933/? E/AndroidRuntime﹕ FATAL EXCEPTION: Thread-8672
Process: com.example.alexies.cameratesting, PID: 9205
java.lang.IllegalArgumentException: bmp == null
at org.opencv.android.Utils.matToBitmap(Utils.java:122)
at org.opencv.android.Utils.matToBitmap(Utils.java:132)
at com.example.alexies.cameratesting.MainActivity.takePhoto(MainActivity.java:380)
currentBitmap is null in your code.
Either you didn't copy the part where it's assigned a bitmap value, or it's never assigned. If some part of your code is missing, please add it to your question; if not, your problem is that you never create the bitmap.
EDIT
You never initialize currentBitmap. The docs state that the provided bitmap must have the same size as the Mat (your houghLines) and its config must be ARGB_8888 or RGB_565.
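A minimal sketch of the fix, assuming OpenCV's Android bindings and using the houghLines Mat name from the question: allocate the destination bitmap to match the Mat before converting.

```java
// Allocate the destination bitmap to match the Mat's dimensions before
// converting; the config must be ARGB_8888 or RGB_565.
Bitmap currentBitmap = Bitmap.createBitmap(
        houghLines.cols(), houghLines.rows(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(houghLines, currentBitmap);
```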
Related
I need to be able to print out a bitmap QR Code using my Brother QL-720NW.
As of right now, I'm able to generate a QR code bitmap and display it properly in an ImageView. On a button press, the user needs to be able to print that QR code bitmap from the Brother label printer.
I am able to make a connection to the printer, but I can only print out blank labels that do not show the QR code. How can I fix this so that the bitmap appears on the printed label properly?
Method for printing bitmap:
void printImage(final Bitmap bitmap) {
    // Specify printer
    final Printer printer = new Printer();
    PrinterInfo settings = printer.getPrinterInfo();
    settings.ipAddress = "192.168.2.149";
    settings.workPath = "/storage/emulated/0/Download";
    settings.printerModel = PrinterInfo.Model.QL_720NW;
    settings.port = PrinterInfo.Port.NET;
    settings.orientation = PrinterInfo.Orientation.LANDSCAPE;
    //settings.paperSize = PrinterInfo.PaperSize.CUSTOM;
    settings.align = PrinterInfo.Align.CENTER;
    settings.valign = PrinterInfo.VAlign.MIDDLE;
    settings.printMode = PrinterInfo.PrintMode.ORIGINAL;
    settings.numberOfCopies = 1;
    settings.labelNameIndex = LabelInfo.QL700.W62RB.ordinal();
    settings.isAutoCut = true;
    settings.isCutAtEnd = false;
    printer.setPrinterInfo(settings);

    // Connect, then print
    new Thread(new Runnable() {
        @Override
        public void run() {
            if (printer.startCommunication()) {
                Log.e("Tag: ", "Connection made.");
                PrinterStatus result = printer.printImage(bitmap);
                Log.e("Tag: ", "Printing!");
                if (result.errorCode != PrinterInfo.ErrorCode.ERROR_NONE) {
                    Log.d("TAG", "ERROR - " + result.errorCode);
                }
                printer.endCommunication();
            } else {
                Log.e("Tag: ", "Cannot make a connection.");
            }
        }
    }).start();
}
Generating bitmap:
Bitmap encodeAsBitmap(String str) throws WriterException {
    QRCodeWriter writer = new QRCodeWriter();
    BitMatrix bitMatrix = writer.encode(str, BarcodeFormat.QR_CODE, 100, 100);
    int w = bitMatrix.getWidth();
    int h = bitMatrix.getHeight();
    int[] pixels = new int[w * h];
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            pixels[y * w + x] = bitMatrix.get(x, y) ? Color.BLACK : Color.WHITE;
        }
    }
    Bitmap bitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
    bitmap.setPixels(pixels, 0, w, 0, 0, w, h);
    return bitmap;
}
Solved it: I was using LabelInfo.QL700.W62RB.ordinal() for labelNameIndex when I should have been using LabelInfo.QL700.W62.ordinal().
Works perfectly now!
How to optimize dlib landmark detection?
A 160x120 bitmap takes about 7 seconds to process.
I want to get that down to 50 or 100 ms.
My code:
public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
    mRgba = inputFrame.rgba();
    ArrayList<android.graphics.Point> points = new ArrayList<>();
    try {
        long startTime = System.currentTimeMillis();
        points = LandmarkDetection.getLandmark(matToBitmap(mRgba), this, landmarkPath);
        long endTime = System.currentTimeMillis();
        Log.i(TAG + "Time cost: ", String.valueOf((endTime - startTime) / 1000f) + " sec");
        //drawPoint(points);
        Log.i(TAG, "size = " + String.valueOf(points.size()));
    } catch (Exception e) {
        Log.i(TAG, "bitmap error! " + e.getMessage());
    }
    return mRgba;
}

private Bitmap matToBitmap(@NonNull Mat mat) {
    Bitmap bmp;
    try {
        Mat resized = new Mat();
        Imgproc.resize(mat, resized, new Size(160, 120));
        bmp = Bitmap.createBitmap(resized.cols(), resized.rows(), Bitmap.Config.ARGB_8888);
        Utils.matToBitmap(resized, bmp);
    } catch (Exception e) {
        Log.e(TAG + ":matToBitmap", e.getMessage());
        return null;
    }
    return bmp;
}
And LandmarkDetection class(This method takes all the time):
public static ArrayList<Point> getLandmark(@NonNull Bitmap bmp, Context context, String landmarkPath) {
    mFaceDet = new FaceDet(landmarkPath);
    Log.i(AndroidLauncher.TAG, String.valueOf(new File(context.getExternalCacheDir() + "/shape_predictor_68_face_landmarks.dat").exists()));
    Log.i(AndroidLauncher.TAG, "Width: " + String.valueOf(bmp.getWidth()) + "\nHeight: " + String.valueOf(bmp.getHeight()));
    results = mFaceDet.detect(bmp);
    if (results != null) {
        for (final VisionDetRet ret : results) {
            landmarks = ret.getFaceLandmarks();
        }
    }
    return landmarks;
}
What's wrong with my code?
A lot of things in your code can be optimized:
- Do not construct the face_detector and shape_predictor for every detection; construction alone can take several seconds. One shape_predictor can be shared by all your threads, but there should be one face_detector per thread.
- The mFaceDet code is not shown; maybe you are resizing the image there or doing other extra work.
- See http://dlib.net/faq.html#Whyisdlibslow
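The first point can be sketched as a generic lazily-created holder: construct the expensive detector once and reuse it on every frame. FaceDet here is assumed to be the dlib-android wrapper class; the holder itself is plain Java and works for any costly constructor.

```java
import java.util.function.Supplier;

// Creates the wrapped object on first use and returns the same instance
// afterwards (double-checked locking so concurrent frames don't race).
public class LazyDetector<T> {
    private final Supplier<T> factory;
    private volatile T instance;

    public LazyDetector(Supplier<T> factory) {
        this.factory = factory;
    }

    public T get() {
        if (instance == null) {
            synchronized (this) {
                if (instance == null) {
                    instance = factory.get();
                }
            }
        }
        return instance;
    }
}
```

Usage would be something like a field `detectorHolder = new LazyDetector<>(() -> new FaceDet(landmarkPath))`, with getLandmark calling `detectorHolder.get()` instead of `new FaceDet(landmarkPath)` on every frame.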
I'm writing an app that uses KNearest. I wrote code to train the model, but after every app restart I must train it again, so I would like to save the training data to SharedPreferences once and reuse it afterwards.
I know that I must convert the Mat to byte[] and then to String, but decoding does not work; I get this error:
(layout == ROW_SAMPLE && responses.rows == nsamples)
|| (layout == COL_SAMPLE && responses.cols == nsamples)
in function void cv::ml::TrainDataImpl::setData(cv::InputArray,
int, cv::InputArray, cv::InputArray,
cv::InputArray, cv::InputArray, cv::InputArray, cv::InputArray)
Code:
protected Void doInBackground(Void... args) {
    // Constants.TRAIN_SAMPLES = 10
    Mat trainData = new Mat(0, 200 * 200, CvType.CV_32FC1); // 0 x 40 000
    Mat trainClasses = new Mat(Constants.TRAIN_SAMPLES, 1, CvType.CV_32FC1); // 10 x 1
    float[] myint = new float[Constants.TRAIN_SAMPLES + 1];
    for (i = 1; i <= Constants.TRAIN_SAMPLES; i++)
        myint[i] = (float) i;
    trainClasses.put(0, 0, myint);

    KNearest knn = KNearest.create();
    String val = sharedPref.getString("key", " ");
    // empty SharedPreferences
    if (val.equals(" ")) {
        // get all images from external storage
        for (i = 1; i <= Constants.TRAIN_SAMPLES; i++) {
            String photoPath = Environment.getExternalStorageDirectory().toString() + "/ramki/ramka_" + i + ".png";
            BitmapFactory.Options options = new BitmapFactory.Options();
            options.inPreferredConfig = Bitmap.Config.ARGB_8888;
            Bitmap bitmap = BitmapFactory.decodeFile(photoPath, options);
            Utils.bitmapToMat(bitmap, img);
            if (img.channels() == 3) {
                Imgproc.cvtColor(img, img, Imgproc.COLOR_RGB2GRAY);
            } else if (img.channels() == 4) {
                Imgproc.cvtColor(img, img, Imgproc.COLOR_RGBA2GRAY);
            }
            Imgproc.resize(img, img, new Size(200, 200));
            img.convertTo(img, CvType.CV_32FC1);
            img = img.reshape(1, 1); // 1 x 40 000 ( 200x200 )
            trainData.push_back(img);
            publishProgress(i);
        }
        trainData.convertTo(trainData, CvType.CV_8U);
        // save this trainData (Mat) to SharedPreferences
        saveMatToPref(trainData);
    } else {
        // get trainData from SharedPreferences
        val = sharedPref.getString("key", " ");
        byte[] data = Base64.decode(val, Base64.DEFAULT);
        trainData.convertTo(trainData, CvType.CV_8U);
        trainData.put(0, 0, data);
    }
    trainData.convertTo(trainData, CvType.CV_32FC1);

    knn.train(trainData, Ml.ROW_SAMPLE, trainClasses);
    trainClasses.release();
    trainData.release();
    img.release();
    onPostExecute();
    return null;
}

public void saveMatToPref(Mat mat) {
    if (mat.isContinuous()) {
        int cols = mat.cols();
        int rows = mat.rows();
        byte[] data = new byte[cols * rows];
        // here, data contains {0,0,0,0,0,0 ..... } 400 000 items
        mat.get(0, 0, data);
        String dataString = new String(Base64.encode(data, Base64.DEFAULT));
        SharedPreferences.Editor mEdit1 = sharedPref.edit();
        mEdit1.putString("key", dataString);
        mEdit1.commit();
    } else {
        Log.i(TAG, "Mat not continuous.");
    }
}
When I decode, my trainData looks like this:
Mat [ 0*40000*CV_32FC1 ..]
but it should be:
Mat [ 10*40000*CV_32FC1 ..]
Can anybody help me encode and decode the Mat? Thanks for any help.
As @Miki mentioned, the problem was in the types. It works now, but only with a Mat of around 200 x 40 000 in my case; anything bigger gives an OutOfMemory exception.
String val = sharedPref.getString("key", " ");
// empty SharedPreferences
if (val.equals(" ")) {
    // get all images from external storage
    for (i = 1; i <= Constants.TRAIN_SAMPLES; i++) {
        String photoPath = Environment.getExternalStorageDirectory().toString() + "/ramki/ramka_" + i + ".png";
        BitmapFactory.Options options = new BitmapFactory.Options();
        options.inPreferredConfig = Bitmap.Config.ARGB_8888;
        Bitmap bitmap = BitmapFactory.decodeFile(photoPath, options);
        Utils.bitmapToMat(bitmap, img);
        if (img.channels() == 3) {
            Imgproc.cvtColor(img, img, Imgproc.COLOR_RGB2GRAY);
        } else if (img.channels() == 4) {
            Imgproc.cvtColor(img, img, Imgproc.COLOR_RGBA2GRAY);
        }
        Imgproc.resize(img, img, new Size(200, 200));
        img.convertTo(img, CvType.CV_32FC1);
        img = img.reshape(1, 1); // 1 x 40 000 ( 200x200 )
        trainData.push_back(img);
        publishProgress(i);
    }
    // save this trainData (Mat) to SharedPreferences
    saveMatToPref(trainData);
} else {
    // get trainData from SharedPreferences
    val = sharedPref.getString("key", " ");
    byte[] data = Base64.decode(val, Base64.DEFAULT);
    trainData = new Mat(Constants.TRAIN_SAMPLES, 200 * 200, CvType.CV_32FC1);
    float[] f = toFloatArray(data);
    trainData.put(0, 0, f);
}
knn.train(trainData, Ml.ROW_SAMPLE, trainClasses);

public void saveMatToPref(Mat mat) {
    if (mat.isContinuous()) {
        int size = (int) (mat.total() * mat.channels());
        float[] data = new float[size];
        mat.get(0, 0, data);
        byte[] b = FloatArray2ByteArray(data);
        String dataString = new String(Base64.encode(b, Base64.DEFAULT));
        SharedPreferences.Editor mEdit1 = sharedPref.edit();
        mEdit1.putString("key", dataString);
        mEdit1.commit();
    } else {
        Log.i(TAG, "Mat not continuous.");
    }
}

private static float[] toFloatArray(byte[] bytes) {
    ByteBuffer buffer = ByteBuffer.wrap(bytes);
    FloatBuffer fb = buffer.asFloatBuffer();
    float[] floatArray = new float[fb.limit()];
    fb.get(floatArray);
    return floatArray;
}

public static byte[] FloatArray2ByteArray(float[] values) {
    ByteBuffer buffer = ByteBuffer.allocate(4 * values.length);
    for (float value : values)
        buffer.putFloat(value);
    return buffer.array();
}
If someone has a better solution, please add it.
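One possible tidying, sketched here with java.util.Base64 instead of android.util.Base64 so it can run off-device: do the float-to-byte conversion with a single bulk ByteBuffer copy rather than a per-element putFloat loop, which is both shorter and faster for 40 000-element rows.

```java
import java.nio.ByteBuffer;
import java.util.Base64;

// Round-trips the CV_32F payload of a Mat (obtained via mat.get(0, 0, float[]))
// through a Base64 string suitable for SharedPreferences.
public class MatPrefCodec {
    public static String encodeFloats(float[] values) {
        ByteBuffer buf = ByteBuffer.allocate(4 * values.length);
        buf.asFloatBuffer().put(values); // single bulk copy, no loop
        return Base64.getEncoder().encodeToString(buf.array());
    }

    public static float[] decodeFloats(String s) {
        byte[] bytes = Base64.getDecoder().decode(s);
        float[] out = new float[bytes.length / 4];
        ByteBuffer.wrap(bytes).asFloatBuffer().get(out);
        return out;
    }
}
```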
I'm using OpenCV with Android Studio to detect faces in an image, along with the eyes and the mouth in each face. The problem is that whenever I try to detect the mouth, it returns multiple circles in a face, which is wrong.
Here is the code I added for mouth detection:
private BaseLoaderCallback mLoaderCallback = new BaseLoaderCallback(this) {
    @Override
    public void onManagerConnected(int status) {
        switch (status) {
            ...
            InputStream iserM = getResources().openRawResource(
                    R.raw.haarcascade_mcs_mouth);
            File cascadeDirERM = getDir("cascadeERM", Context.MODE_PRIVATE);
            File cascadeFileERM = new File(cascadeDirERM, "haarcascade_mcs_mouth.xml");
            FileOutputStream oserM = new FileOutputStream(cascadeFileERM);
            byte[] bufferERM = new byte[4096];
            int bytesReadERM;
            while ((bytesReadERM = iserM.read(bufferERM)) != -1) {
                oserM.write(bufferERM, 0, bytesReadERM);
            }
            iserM.close();
            oserM.close();
            ...
            // here begins
            mJavaDetectorMouth = new CascadeClassifier(cascadeFileERM.getAbsolutePath());
            if (mJavaDetectorMouth.empty()) {
                Log.e(TAG, "Failed to load cascade classifier");
                mJavaDetectorMouth = null;
            } else {
                Log.i(TAG, "Loaded cascade classifier from " + mCascadeFile.getAbsolutePath());
            }
            // here ends
            ...
    }

public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    ...
    Rect r = facesArray[i];
    MatOfRect mouths = new MatOfRect();
    Mat faceROI = mRgba.submat(facesArray[i]);
    mJavaDetectorMouth.detectMultiScale(faceROI, mouths, 1.1, 1, 1,
            new org.opencv.core.Size(30, 30), new org.opencv.core.Size());
    Rect[] mouthArray = mouths.toArray();
    for (int j = 0; j < mouthArray.length; j++) {
        Point center1 = new Point(facesArray[i].x + mouthArray[j].x + mouthArray[j].width * 0.5,
                facesArray[i].y + mouthArray[j].y + mouthArray[j].height * 0.5);
        int radius = (int) Math.round(mouthArray[j].width / 2);
        Imgproc.circle(mRgba, center1, radius, new Scalar(255, 0, 0), 4, 8, 0);
    }
    ...
}
I've been looking around since I posted the question and tried a lot of things, but I finally found the solution:
I changed:
mJavaDetectorMouth.detectMultiScale(faceROI, mouths, 1.1, 1, 1,
        new org.opencv.core.Size(30, 30), new org.opencv.core.Size());
To:
mJavaDetectorMouth.detectMultiScale(faceROI, mouths, 1.1, 2,
        Objdetect.CASCADE_FIND_BIGGEST_OBJECT | Objdetect.CASCADE_SCALE_IMAGE,
        new org.opencv.core.Size(30, 30), new org.opencv.core.Size());
and I solved the issue.
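If you ever need to inspect all the detections rather than letting the cascade return only the biggest one, an alternative is to keep every rectangle and select the largest by area yourself. A hypothetical helper (plain width/height arrays stand in for the fields of OpenCV's Rect so the logic is easy to test):

```java
// Returns the index of the detection with the largest area (width * height),
// or -1 when there are no detections.
public class LargestRect {
    public static int largestIndex(int[] widths, int[] heights) {
        int best = -1;
        long bestArea = -1;
        for (int i = 0; i < widths.length; i++) {
            long area = (long) widths[i] * heights[i];
            if (area > bestArea) {
                bestArea = area;
                best = i;
            }
        }
        return best;
    }
}
```

You would then draw only the rectangle at that index instead of looping over every match in mouthArray.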
How do I match multiple objects using a single template?
I want to match multiple objects, keeping every match whose score passes a threshold.
When I matched a single object, I used this code:
System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
Mat img = Highgui.imread("/test/test_img.jpg"); // input image
if (img.empty())
    throw new Exception("no image");
Mat tpl = Highgui.imread("/test/test_tpl.jpg"); // template image
if (tpl.empty())
    throw new Exception("no template");
Mat result = new Mat();
Imgproc.matchTemplate(img, tpl, result, Imgproc.TM_CCOEFF_NORMED); // template matching
Core.MinMaxLocResult maxr = Core.minMaxLoc(result);
Point maxp = maxr.maxLoc;
Point maxop = new Point(maxp.x + tpl.width(), maxp.y + tpl.height());
Mat dst = img.clone();
Core.rectangle(dst, maxp, maxop, new Scalar(255, 0, 0), 2); // draw a rectangle
Highgui.imwrite("/test/test.jpg", dst); // save image
This one works for me:
Mat img = Highgui.imread("/test/test_img.jpg"); // input image
if (img.empty())
    throw new Exception("no image");
Mat tpl = Highgui.imread("/test/test_tpl.jpg"); // template image
if (tpl.empty())
    throw new Exception("no template");
Mat result = new Mat();
Imgproc.matchTemplate(img, tpl, result, Imgproc.TM_CCOEFF_NORMED); // template matching
Imgproc.threshold(result, result, 0.1, 1, Imgproc.THRESH_TOZERO);

double threshold = 0.95;
double maxval;
while (true) {
    Core.MinMaxLocResult maxr = Core.minMaxLoc(result);
    Point maxp = maxr.maxLoc;
    maxval = maxr.maxVal;
    if (maxval >= threshold) {
        System.out.println("Template matches input image");
        // draw the match on the output image
        Core.rectangle(img, maxp, new Point(maxp.x + tpl.cols(),
                maxp.y + tpl.rows()), new Scalar(0, 255, 0), 5);
        // blank out this match in the score map so minMaxLoc finds the next-best match
        Core.rectangle(result, maxp, new Point(maxp.x + tpl.cols(),
                maxp.y + tpl.rows()), new Scalar(0, 255, 0), -1);
    } else {
        break;
    }
}
Highgui.imwrite("test.jpg", img); // save image
For example, given a coin sprite as the template, the result image marks every matching coin in a Mario World screenshot.