How to use Sony SmartEyeGlass for scanning QR codes? - java

I am trying to develop an app for SmartEyeGlass that scans QR codes using the ZBar library. The app is based on the sample camera extension, but it doesn't work and I can't see what the problem is. Here is my code:
private void cameraEventOperation(CameraEvent event) {
if (event.getErrorStatus() != 0) {
Log.d(Constants.LOG_TAG, "error code = " + event.getErrorStatus());
return;
}
if(event.getIndex() != 0){
Log.d(Constants.LOG_TAG, "not operate this event");
return;
}
Bitmap bitmap = null;
byte[] data = null;
if ((event.getData() != null) && ((event.getData().length) > 0)) {
data = event.getData();
bitmap = BitmapFactory.decodeByteArray(data, 0, data.length);
data1 = data;
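// NOTE: data1 still holds the JPEG-compressed frame here, not raw pixels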
/* Instance barcode scanner */
scanner = new ImageScanner();
scanner.setConfig(Symbol.QRCODE, Config.X_DENSITY, 2);
scanner.setConfig(Symbol.QRCODE, Config.Y_DENSITY, 2);
Image barcode = new Image(width, height, "Y800");
barcode.setData(data1);
QRCodeStatus = scanner.scanImage(barcode);
if (QRCodeStatus != 0) {
SymbolSet syms = scanner.getResults();
for (Symbol kasa : syms) {
strValueOfScannedQR = String.valueOf(kasa.getData());
intValueOfScannedQR = Integer.valueOf(kasa.getData());
}
}
}
if (bitmap == null) {
Log.d(Constants.LOG_TAG, "bitmap == null");
return;
}
if (saveToSdcard) {
String fileName = saveFilePrefix + String.format("%04d", saveFileIndex) + ".jpg";
new SavePhotoTask(saveFolder, fileName).execute(data);
saveFileIndex++;
}
if (recordingMode == SmartEyeglassControl.Intents.CAMERA_MODE_STILL) {
Bitmap basebitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
basebitmap.setDensity(DisplayMetrics.DENSITY_DEFAULT);
Canvas canvas = new Canvas(basebitmap);
Rect rect = new Rect(0, 0, width, height);
Paint paint = new Paint();
paint.setStyle(Paint.Style.FILL);
canvas.drawBitmap(bitmap, rect, rect, paint);
utils.showBitmap(basebitmap);
return;
}
Log.d(Constants.LOG_TAG, "Camera frame was received : #" + saveFileIndex);
updateDisplay();
}
private void updateDisplay()
{
Bitmap displayBitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
displayBitmap.setDensity(DisplayMetrics.DENSITY_DEFAULT);
Canvas canvas = new Canvas(displayBitmap);
Paint paint = new Paint();
paint.setStyle(Paint.Style.FILL);
paint.setTextSize(16);
paint.setColor(Color.WHITE);
// Update layout according to the camera mode
switch (recordingMode) {
case SmartEyeglassControl.Intents.CAMERA_MODE_STILL:
canvas.drawText("Tap to capture : STILL", pointX, pointY, paint);
break;
case SmartEyeglassControl.Intents.CAMERA_MODE_STILL_TO_FILE:
canvas.drawText("Tap to capture : STILL TO FILE", pointX, pointY, paint);
break;
case SmartEyeglassControl.Intents.CAMERA_MODE_JPG_STREAM_HIGH_RATE:
if (cameraStarted) {
canvas.drawText("Frame Number: " + Integer.toString(saveFileIndex), pointBaseX, (pointY * 1), paint);
canvas.drawText("Value of QR: " + strValueOfScannedQR, pointBaseX, (pointY * 2), paint);
canvas.drawText("Data1=" + data1, pointBaseX, (pointY * 3), paint);
canvas.drawText("QR status " + QRCodeStatus, pointBaseX, (pointY * 4), paint);
}
else {
canvas.drawText("Tap to start JPEG Stream.", pointBaseX, pointY, paint);
}
break;
case SmartEyeglassControl.Intents.CAMERA_MODE_JPG_STREAM_LOW_RATE:
if (cameraStarted) {
canvas.drawText("JPEG Streaming...", pointBaseX, pointY, paint);
canvas.drawText("Tap to stop.", pointBaseX, (pointY * 2), paint);
canvas.drawText("Frame Number: " + Integer.toString(saveFileIndex), pointBaseX, (pointY * 3), paint);
} else {
canvas.drawText("Tap to start JPEG Stream.", pointBaseX, pointY, paint);
}
break;
default:
canvas.drawText("wrong recording type.", pointBaseX, pointY, paint);
}
utils.showBitmap(displayBitmap);
}
}

The problem seems to be that you are passing data1 as the parameter to the barcode.setData method.
You should probably pass the bitmap instead: barcode.setData(bitmap)
This question is really about the QR code scanning library you are using, so please also tag it with the relevant tag for that library; that way you can get support for it. Also check what kind of parameter setData expects in the API reference of your QR code scanning library.
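If setData turns out to expect raw bytes rather than a Bitmap, note that ZBar's "Y800" format means 8-bit grayscale pixels, while event.getData() here delivers a JPEG-compressed frame. A minimal sketch of feeding the scanner raw luminance instead (assuming the net.sourceforge.zbar ImageScanner/Image API used in the question):
// Decode the JPEG frame first, then extract an 8-bit luma plane for ZBar
Bitmap frame = BitmapFactory.decodeByteArray(data, 0, data.length);
int w = frame.getWidth();
int h = frame.getHeight();
int[] pixels = new int[w * h];
frame.getPixels(pixels, 0, w, 0, 0, w, h);
byte[] luma = new byte[w * h];
for (int i = 0; i < pixels.length; i++) {
    int r = (pixels[i] >> 16) & 0xFF;
    int g = (pixels[i] >> 8) & 0xFF;
    int b = pixels[i] & 0xFF;
    // Integer approximation of the BT.601 luma transform
    luma[i] = (byte) ((r * 77 + g * 150 + b * 29) >> 8);
}
Image barcode = new Image(w, h, "Y800");
barcode.setData(luma);
int result = scanner.scanImage(barcode);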

Related

Printing out a bitmap QR code image with Brother Label Printer SDK prints out a blank label

I need to be able to print out a bitmap QR Code using my Brother QL-720NW.
As of right now, I'm able to generate a QR code bitmap and display it properly in an ImageView. On a button press, the user needs to be able to print that QR code bitmap from the Brother label printer.
I am able to make a connection to the printer, but I can only print out blank labels that do not show the QR code. How can I fix this so that the bitmap appears on the printed label properly?
Method for printing bitmap:
void printImage(Bitmap bitmap) {
// Specify printer
final Printer printer = new Printer();
PrinterInfo settings = printer.getPrinterInfo();
settings.ipAddress = "192.168.2.149";
settings.workPath = "/storage/emulated/0/Download";
settings.printerModel = PrinterInfo.Model.QL_720NW;
settings.port = PrinterInfo.Port.NET;
settings.orientation = PrinterInfo.Orientation.LANDSCAPE;
//settings.paperSize = PrinterInfo.PaperSize.CUSTOM;
settings.align = PrinterInfo.Align.CENTER;
settings.valign = PrinterInfo.VAlign.MIDDLE;
settings.printMode = PrinterInfo.PrintMode.ORIGINAL;
settings.numberOfCopies = 1;
settings.labelNameIndex = LabelInfo.QL700.W62RB.ordinal();
settings.isAutoCut = true;
settings.isCutAtEnd = false;
printer.setPrinterInfo(settings);
// Connect, then print
new Thread(new Runnable() {
@Override
public void run() {
if (printer.startCommunication()) {
Log.e("Tag: ", "Connection made.");
PrinterStatus result = printer.printImage(bitmap);
Log.e("Tag: ", "Printing!");
if (result.errorCode != PrinterInfo.ErrorCode.ERROR_NONE) {
Log.d("TAG", "ERROR - " + result.errorCode);
}
printer.endCommunication();
}
else {
Log.e("Tag: ", "Cannot make a connection.");
}
}
}).start();
}
Generating bitmap:
Bitmap encodeAsBitmap(String str) throws WriterException {
QRCodeWriter writer = new QRCodeWriter();
BitMatrix bitMatrix = writer.encode(str, BarcodeFormat.QR_CODE, 100, 100);
int w = bitMatrix.getWidth();
int h = bitMatrix.getHeight();
int[] pixels = new int[w * h];
for (int y = 0; y < h; y++) {
for (int x = 0; x < w; x++) {
pixels[y * w + x] = bitMatrix.get(x, y) ? Color.BLACK : Color.WHITE;
}
}
Bitmap bitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
bitmap.setPixels(pixels, 0, w, 0, 0, w, h);
return bitmap;
}
Solved it: I was using LabelInfo.QL700.W62RB.ordinal() for the labelNameIndex when I should have been using LabelInfo.QL700.W62.ordinal().
Works perfectly now!
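So the only change needed was the label index in the settings block (presumably because W62RB did not match the loaded roll):
settings.labelNameIndex = LabelInfo.QL700.W62.ordinal();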

Why is Google's ML Kit Face Detection crashing on .process()?

I am creating a face detector app which detects faces in real time and identifies landmarks on them. The landmark detection works perfectly fine; however, my real-time face detection isn't working at all.
I followed the instructions in Google's ML Kit documentation (https://developers.google.com/ml-kit/vision/face-detection/android), but am really struggling to get real-time face detection working.
In my debugger, the code crashes at faceDetector.process(image).addOnSuccessListener() and instead goes into onFailure().
This is my code for the real-time face detection part (I have commented some parts and reduced redundancy).
@Override
//process method to detect frame by frame in real-time face detection
public void process(@NonNull Frame frame) {
int width = frame.getSize().getWidth();
int height = frame.getSize().getHeight();
InputImage image = InputImage.fromByteArray(
frame.getData(),
/* image width */width,
/* image height */height,
//if camera is facing front rotate image 90, else 270 degrees
(cameraFacing != Facing.FRONT) ? 90 : 270,
InputImage.IMAGE_FORMAT_YUV_420_888 // or IMAGE_FORMAT_YV12
);
FaceDetectorOptions faceDetectorOptions = new FaceDetectorOptions.Builder()
.setContourMode(FaceDetectorOptions.CONTOUR_MODE_ALL)
//setting contour mode to detect all facial contours in real time
.build();
FaceDetector faceDetector = FaceDetection.getClient(faceDetectorOptions);
faceDetector.process(image).addOnSuccessListener(new OnSuccessListener<List<Face>>() {
@Override
public void onSuccess(@NonNull List<Face> faces) {
imageView.setImageBitmap(null);
Bitmap bitmap = Bitmap.createBitmap(height, width, Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(bitmap);
Paint dotPaint = new Paint();
dotPaint.setColor(Color.YELLOW);
dotPaint.setStyle(Paint.Style.FILL);
dotPaint.setStrokeWidth(6f);
Paint linePaint = new Paint();
linePaint.setColor(Color.GREEN);
linePaint.setStyle(Paint.Style.STROKE);
linePaint.setStrokeWidth(4f);
//looping through each face to detect each contour
for (Face face : faces) {
List<PointF> faceContours = face.getContour(
FaceContour.FACE
).getPoints();
for (int i = 0; i < faceContours.size(); i++) {
PointF faceContour = null;
if (i != (faceContours.size() - 1)) {
faceContour = faceContours.get(i);
canvas.drawLine(
faceContour.x, faceContour.y, faceContours.get(i + 1).x, faceContours.get(i + 1).y, linePaint
);
} else {//if at the last point, draw to the first point
canvas.drawLine(faceContour.x, faceContour.y, faceContours.get(0).x, faceContours.get(0).y, linePaint);
}
canvas.drawCircle(faceContour.x, faceContour.y, 4f, dotPaint);
}//end inner loop
List<PointF> leftEyebrowTopCountours = face.getContour(
FaceContour.LEFT_EYEBROW_TOP).getPoints();
for (int i = 0; i < leftEyebrowTopCountours.size(); i++) {
PointF leftEyebrowTopContour = leftEyebrowTopCountours.get(i);
if (i != (leftEyebrowTopCountours.size() - 1))
canvas.drawLine(leftEyebrowTopContour.x, leftEyebrowTopContour.y, leftEyebrowTopCountours.get(i + 1).x, leftEyebrowTopCountours.get(i + 1).y, linePaint);
canvas.drawCircle(leftEyebrowTopContour.x, leftEyebrowTopContour.y, 4f, dotPaint);
}
}
}
Side note: I am using a Pixel 2 API 29 emulator. I left out the repetitive code since I am just going through the contours.
Full code for reference:
import com.google.android.gms.tasks.OnFailureListener;
import com.google.android.gms.tasks.OnSuccessListener;
import com.google.android.material.bottomsheet.BottomSheetBehavior;
import com.google.mlkit.vision.common.InputImage;
import com.google.mlkit.vision.face.Face;
import com.google.mlkit.vision.face.FaceContour;
import com.google.mlkit.vision.face.FaceDetection;
import com.google.mlkit.vision.face.FaceDetector;
import com.google.mlkit.vision.face.FaceDetectorOptions;
import com.google.mlkit.vision.face.FaceLandmark;
import com.otaliastudios.cameraview.CameraView;
import com.otaliastudios.cameraview.controls.Facing;
import com.otaliastudios.cameraview.frame.Frame;
import com.otaliastudios.cameraview.frame.FrameProcessor;
import com.theartofdev.edmodo.cropper.CropImage;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;
public class MainActivity extends AppCompatActivity implements FrameProcessor {
private Facing cameraFacing = Facing.FRONT;
private ImageView imageView;
private CameraView faceDetectionCameraView;
private RecyclerView bottomSheetRecyclerView;
private BottomSheetBehavior bottomSheetBehavior;
private ArrayList<FaceDetectionModel> faceDetectionModels;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
Toolbar toolbar = findViewById(R.id.toolbar);
setSupportActionBar(toolbar);
faceDetectionModels = new ArrayList<>();
bottomSheetBehavior = BottomSheetBehavior.from(findViewById(R.id.bottom_sheet));
imageView = findViewById(R.id.face_detection_image_view);
faceDetectionCameraView = findViewById(R.id.face_detection_camera_view);
Button toggle = findViewById(R.id.face_detection_cam_toggle_button);
FrameLayout bottomSheetButton = findViewById(R.id.bottom_sheet_button);
bottomSheetRecyclerView = findViewById(R.id.bottom_sheet_recycler_view);
faceDetectionCameraView.setFacing(cameraFacing);
faceDetectionCameraView.setLifecycleOwner(MainActivity.this);
faceDetectionCameraView.addFrameProcessor(MainActivity.this);
toggle.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
cameraFacing = (cameraFacing == Facing.FRONT) ? Facing.BACK : Facing.FRONT;
faceDetectionCameraView.setFacing(cameraFacing);
}
});
bottomSheetButton.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
CropImage.activity().start(MainActivity.this);
}
});
bottomSheetRecyclerView.setLayoutManager(new LinearLayoutManager(MainActivity.this));
bottomSheetRecyclerView.setAdapter(new FaceDetectionAdapter(faceDetectionModels, MainActivity.this));
}
@Override
protected void onActivityResult(int requestCode, int resultCode, @Nullable Intent data) {
super.onActivityResult(requestCode, resultCode, data);
if(requestCode == CropImage.CROP_IMAGE_ACTIVITY_REQUEST_CODE){
CropImage.ActivityResult result = CropImage.getActivityResult(data);
if(resultCode == RESULT_OK){
Uri imageUri = result.getUri();
try {
analyseImage(MediaStore.Images.Media.getBitmap(getContentResolver(), imageUri));
} catch (IOException e) {
e.printStackTrace();
}
}
}
}
private void analyseImage(Bitmap bitmap) {
if(bitmap == null){
Toast.makeText(this, "There was an error", Toast.LENGTH_SHORT).show();
}
//imageView.setImageBitmap(null);
faceDetectionModels.clear();
Objects.requireNonNull(bottomSheetRecyclerView.getAdapter()).notifyDataSetChanged();
bottomSheetBehavior.setState(BottomSheetBehavior.STATE_COLLAPSED);
showProgress();
InputImage firebaseInputImage = InputImage.fromBitmap(bitmap, 0);
FaceDetectorOptions options =
new FaceDetectorOptions.Builder()
.setPerformanceMode(FaceDetectorOptions.PERFORMANCE_MODE_ACCURATE)
.setLandmarkMode(FaceDetectorOptions.LANDMARK_MODE_ALL)
.setClassificationMode(FaceDetectorOptions.CLASSIFICATION_MODE_ALL)
.build();
FaceDetector faceDetector = FaceDetection.getClient(options);
faceDetector.process(firebaseInputImage)
.addOnSuccessListener(new OnSuccessListener<List<Face>>() {
@Override
public void onSuccess(@NonNull List<Face> faces) {
Bitmap mutableImage = bitmap.copy(Bitmap.Config.ARGB_8888, true);
detectFaces(faces, mutableImage);
imageView.setImageBitmap(mutableImage);
hideProgress();
bottomSheetRecyclerView.getAdapter().notifyDataSetChanged();
bottomSheetBehavior.setState(BottomSheetBehavior.STATE_EXPANDED);
}
})
.addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(@NonNull Exception e) {
Toast.makeText(MainActivity.this, "There was an error", Toast.LENGTH_SHORT).show();
hideProgress();
}
});
}
private void detectFaces(List<Face> faces, Bitmap bitmap) {
if(faces == null || bitmap == null) {
Toast.makeText(this, "There was an error", Toast.LENGTH_SHORT).show();
return;
}
Canvas canvas = new Canvas(bitmap);
Paint facePaint = new Paint();
facePaint.setColor(Color.GREEN);
facePaint.setStyle(Paint.Style.STROKE);
facePaint.setStrokeWidth(5f);
Paint faceTextPaint = new Paint();
faceTextPaint.setColor(Color.BLUE);
faceTextPaint.setTextSize(30f);
faceTextPaint.setTypeface(Typeface.SANS_SERIF);
Paint landmarkPaint = new Paint();
landmarkPaint.setColor(Color.YELLOW);
landmarkPaint.setStyle(Paint.Style.FILL);
landmarkPaint.setStrokeWidth(8f);
for(int i = 0; i < faces.size(); i++){
canvas.drawRect(faces.get(i).getBoundingBox(), facePaint);
canvas.drawText("Face" + i,
(faces.get(i).getBoundingBox().centerX()
-(faces.get(i).getBoundingBox().width() >> 1) + 8f),
(faces.get(i).getBoundingBox().centerY() + (faces.get(i).getBoundingBox().height() >> 1) - 8f), facePaint);
Face face = faces.get(i); //get one face
if(face.getLandmark(FaceLandmark.LEFT_EYE) != null){
FaceLandmark leftEye = face.getLandmark(FaceLandmark.LEFT_EYE);
//Now we have our left eye, we draw a little circle
canvas.drawCircle(leftEye.getPosition().x, leftEye.getPosition().y, 8f, landmarkPaint);
}
if(face.getLandmark(FaceLandmark.RIGHT_EYE) != null){
FaceLandmark rightEye = face.getLandmark(FaceLandmark.RIGHT_EYE);
//Now we have our left eye, we draw a little circle
canvas.drawCircle(rightEye.getPosition().x, rightEye.getPosition().y, 8f, landmarkPaint);
}
if(face.getLandmark(FaceLandmark.NOSE_BASE) != null){
FaceLandmark noseBase = face.getLandmark(FaceLandmark.NOSE_BASE);
//Now we have our left eye, we draw a little circle
canvas.drawCircle(noseBase.getPosition().x, noseBase.getPosition().y, 8f, landmarkPaint);
}
if(face.getLandmark(FaceLandmark.LEFT_EAR) != null){
FaceLandmark leftEar = face.getLandmark(FaceLandmark.LEFT_EAR);
//Now we have our left eye, we draw a little circle
canvas.drawCircle(leftEar.getPosition().x, leftEar.getPosition().y, 8f, landmarkPaint);
}
if(face.getLandmark(FaceLandmark.RIGHT_EAR) != null){
FaceLandmark rightEar = face.getLandmark(FaceLandmark.RIGHT_EAR);
//Now we have our left eye, we draw a little circle
canvas.drawCircle(rightEar.getPosition().x, rightEar.getPosition().y, 8f, landmarkPaint);
}
if(face.getLandmark(FaceLandmark.MOUTH_LEFT) != null && face.getLandmark(FaceLandmark.MOUTH_BOTTOM) != null && face.getLandmark(FaceLandmark.MOUTH_RIGHT) != null){
FaceLandmark mouthLeft = face.getLandmark(FaceLandmark.MOUTH_LEFT);
FaceLandmark mouthRight = face.getLandmark(FaceLandmark.MOUTH_RIGHT);
FaceLandmark mouthBottom = face.getLandmark(FaceLandmark.MOUTH_BOTTOM);
//Now we have our left eye, we draw a little circle
canvas.drawLine(mouthLeft.getPosition().x, mouthLeft.getPosition().y, mouthBottom.getPosition().x, mouthBottom.getPosition().y, landmarkPaint);
canvas.drawLine(mouthBottom.getPosition().x, mouthBottom.getPosition().y, mouthRight.getPosition().x, mouthRight.getPosition().y, landmarkPaint);
}
faceDetectionModels.add(new FaceDetectionModel(i, "Smiling probability"
+ face.getSmilingProbability()));
faceDetectionModels.add(new FaceDetectionModel(i, "Left eye open probability"
+ face.getLeftEyeOpenProbability()));
faceDetectionModels.add(new FaceDetectionModel(i, "Right eye open probability"
+ face.getRightEyeOpenProbability()));
}
}
private void showProgress() {
findViewById(R.id.bottom_sheet_button_img).setVisibility(View.GONE);
findViewById(R.id.bottom_sheet_butotn_progress_bar).setVisibility(View.VISIBLE);
}
private void hideProgress() {
findViewById(R.id.bottom_sheet_button_img).setVisibility(View.VISIBLE);
findViewById(R.id.bottom_sheet_butotn_progress_bar).setVisibility(View.GONE);
}
//real-time detection starts HERE
@Override
public void process(@NonNull Frame frame) {
//setting up width and frame height
int width = frame.getSize().getWidth();
int height = frame.getSize().getHeight();
byte[] byteArray = frame.getData();
InputImage image = InputImage.fromByteArray(
//frame.getData()
byteArray,
width,
height,
//rotation
(cameraFacing == Facing.FRONT) ? 90 : 270,
//image format
InputImage.IMAGE_FORMAT_YV12 // or IMAGE_FORMAT_YUV_420_888
);
//Contour mode all is real time contour detection
FaceDetectorOptions realTimeOpts = new FaceDetectorOptions.Builder()
.setContourMode(FaceDetectorOptions.CONTOUR_MODE_ALL)
.build();
FaceDetector faceDetector = FaceDetection.getClient(realTimeOpts);
faceDetector.process(image).addOnSuccessListener(new OnSuccessListener<List<Face>>() {
@Override
public void onSuccess(@NonNull List<Face> faces) {
//don't have image yet set to null first
imageView.setImageBitmap(null);
//bitmap stores pixels of image
Bitmap bitmap = Bitmap.createBitmap(height, width, Bitmap.Config.ARGB_8888);
//canvas hold the draw calls -- write into the bitmap
Canvas canvas = new Canvas(bitmap);
//paint specifies what the canvas should draw
Paint dotPaint = new Paint();
dotPaint.setColor(Color.YELLOW);
dotPaint.setStyle(Paint.Style.FILL);
dotPaint.setStrokeWidth(6f);
Paint linePaint = new Paint();
linePaint.setColor(Color.GREEN);
linePaint.setStyle(Paint.Style.STROKE);
linePaint.setStrokeWidth(4f);
for (Face face : faces) {
//fetching contours
List<PointF> faceContours = face.getContour(
FaceContour.FACE
).getPoints();
for (int i = 0; i < faceContours.size(); i++) {
PointF faceContour = faceContours.get(i);
if (i != (faceContours.size() - 1)) {
canvas.drawLine(
//if not at last index, continue drawing to next index
faceContour.x, faceContour.y, faceContours.get(i + 1).x, faceContours.get(i + 1).y, linePaint
);
} else {
return;
}
//always draw circle
canvas.drawCircle(faceContour.x, faceContour.y, 4f, dotPaint);
}//end inner loop
List<PointF> leftEyebrowTopCountours = face.getContour(
FaceContour.LEFT_EYEBROW_TOP).getPoints();
for (int i = 0; i < leftEyebrowTopCountours.size(); i++) {
PointF leftEyebrowTopContour = leftEyebrowTopCountours.get(i);
if (i != (leftEyebrowTopCountours.size() - 1)) {
canvas.drawLine(leftEyebrowTopContour.x, leftEyebrowTopContour.y, leftEyebrowTopCountours.get(i + 1).x, leftEyebrowTopCountours.get(i + 1).y, linePaint);
}else{
return;
}
canvas.drawCircle(leftEyebrowTopContour.x, leftEyebrowTopContour.y, 4f, dotPaint);
}
List<PointF> rightEyebrowTopCountours = face.getContour(
FaceContour.RIGHT_EYEBROW_TOP).getPoints();
for (int i = 0; i < rightEyebrowTopCountours.size(); i++) {
PointF rightEyebrowContour = rightEyebrowTopCountours.get(i);
if (i != (rightEyebrowTopCountours.size() - 1)) {
canvas.drawLine(rightEyebrowContour.x, rightEyebrowContour.y, rightEyebrowTopCountours.get(i + 1).x, rightEyebrowTopCountours.get(i + 1).y, linePaint);
}else{
return;
}
canvas.drawCircle(rightEyebrowContour.x, rightEyebrowContour.y, 4f, dotPaint);
}
List<PointF> rightEyebrowBottomCountours = face.getContour(
FaceContour.RIGHT_EYEBROW_BOTTOM).getPoints();
for (int i = 0; i < rightEyebrowBottomCountours.size(); i++) {
PointF rightEyebrowBottomContour = rightEyebrowBottomCountours.get(i);
if (i != (rightEyebrowBottomCountours.size() - 1)) {
canvas.drawLine(rightEyebrowBottomContour.x, rightEyebrowBottomContour.y, rightEyebrowBottomCountours.get(i + 1).x, rightEyebrowBottomCountours.get(i + 1).y, linePaint);
}else{
return;
}
canvas.drawCircle(rightEyebrowBottomContour.x, rightEyebrowBottomContour.y, 4f, dotPaint);
}
List<PointF> leftEyeContours = face.getContour(
FaceContour.LEFT_EYE).getPoints();
for (int i = 0; i < leftEyeContours.size(); i++) {
PointF leftEyeContour = leftEyeContours.get(i);
if (i != (leftEyeContours.size() - 1)) {
canvas.drawLine(leftEyeContour.x, leftEyeContour.y, leftEyeContours.get(i + 1).x, leftEyeContours.get(i + 1).y, linePaint);
} else {
return;
}
canvas.drawCircle(leftEyeContour.x, leftEyeContour.y, 4f, dotPaint);
}
List<PointF> rightEyeContours = face.getContour(
FaceContour.RIGHT_EYE).getPoints();
for (int i = 0; i < rightEyeContours.size(); i++) {
PointF rightEyeContour = rightEyeContours.get(i);
if (i != (rightEyeContours.size() - 1)) {
canvas.drawLine(rightEyeContour.x, rightEyeContour.y, rightEyeContours.get(i + 1).x, rightEyeContours.get(i + 1).y, linePaint);
} else {
return;
}
canvas.drawCircle(rightEyeContour.x, rightEyeContour.y, 4f, dotPaint);
}
List<PointF> upperLipTopContour = face.getContour(
FaceContour.UPPER_LIP_TOP).getPoints();
for (int i = 0; i < upperLipTopContour.size(); i++) {
PointF upperLipContour = upperLipTopContour.get(i);
if (i != (upperLipTopContour.size() - 1)) {
canvas.drawLine(upperLipContour.x, upperLipContour.y,
upperLipTopContour.get(i + 1).x,
upperLipTopContour.get(i + 1).y, linePaint);
}else{
return;
}
canvas.drawCircle(upperLipContour.x, upperLipContour.y, 4f, dotPaint);
}
List<PointF> upperLipBottomContour = face.getContour(
FaceContour.UPPER_LIP_BOTTOM).getPoints();
for (int i = 0; i < upperLipBottomContour.size(); i++) {
PointF upBottom = upperLipBottomContour.get(i);
if (i != (upperLipBottomContour.size() - 1)) {
canvas.drawLine(upBottom.x, upBottom.y, upperLipBottomContour.get(i + 1).x, upperLipBottomContour.get(i + 1).y, linePaint);
}else{
return;
}
canvas.drawCircle(upBottom.x, upBottom.y, 4f, dotPaint);
}
List<PointF> lowerLipTopContour = face.getContour(
FaceContour.LOWER_LIP_TOP).getPoints();
for (int i = 0; i < lowerLipTopContour.size(); i++) {
PointF lowerTop = lowerLipTopContour.get(i);
if (i != (lowerLipTopContour.size() - 1)) {
canvas.drawLine(lowerTop.x, lowerTop.y, lowerLipTopContour.get(i + 1).x, lowerLipTopContour.get(i + 1).y, linePaint);
}
else{
return;
}
canvas.drawCircle(lowerTop.x, lowerTop.y, 4f, dotPaint);
}
List<PointF> lowerLipBottomContour = face.getContour(
FaceContour.LOWER_LIP_BOTTOM).getPoints();
for (int i = 0; i < lowerLipBottomContour.size(); i++) {
PointF lowerBottom = lowerLipBottomContour.get(i);
if (i != (lowerLipBottomContour.size() - 1)) {
canvas.drawLine(lowerBottom.x, lowerBottom.y, lowerLipBottomContour.get(i + 1).x, lowerLipBottomContour.get(i + 1).y, linePaint);
}else{
return;
}
canvas.drawCircle(lowerBottom.x, lowerBottom.y, 4f, dotPaint);
}
List<PointF> noseBridgeContours = face.getContour(
FaceContour.NOSE_BRIDGE).getPoints();
for (int i = 0; i < noseBridgeContours.size(); i++) {
PointF noseBridge = noseBridgeContours.get(i);
if (i != (noseBridgeContours.size() - 1)) {
canvas.drawLine(noseBridge.x, noseBridge.y, noseBridgeContours.get(i + 1).x, noseBridgeContours.get(i + 1).y, linePaint);
}else{
return;
}
canvas.drawCircle(noseBridge.x, noseBridge.y, 4f, dotPaint);
}
List<PointF> noseBottomContours = face.getContour(
FaceContour.NOSE_BOTTOM).getPoints();
for (int i = 0; i < noseBottomContours.size(); i++) {
PointF noseBottom = noseBottomContours.get(i);
if (i != (noseBottomContours.size() - 1)) {
canvas.drawLine(noseBottom.x, noseBottom.y, noseBottomContours.get(i + 1).x, noseBottomContours.get(i + 1).y, linePaint);
}else{
return;
}
canvas.drawCircle(noseBottom.x, noseBottom.y, 4f, dotPaint);
//facing front flip image
if (cameraFacing == Facing.FRONT) {
//Flip image!
Matrix matrix = new Matrix();
matrix.preScale(-1f, 1f);
Bitmap flippedBitmap = Bitmap.createBitmap(bitmap, 0, 0,
bitmap.getWidth(), bitmap.getHeight(),
matrix, true);
imageView.setImageBitmap(flippedBitmap);
} else {
imageView.setImageBitmap(bitmap);
}
}//end outer loop
canvas.save();
}
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(@NonNull Exception e) {
imageView.setImageBitmap(null);
}
});
}
}
Edit: I am getting this error now:
2021-04-27 19:12:05.335 538-1065/system_process E/JavaBinder: *** Uncaught remote exception! (Exceptions are not yet supported across processes.)
java.lang.RuntimeException: android.os.RemoteException: Couldn't get ApplicationInfo for package android.frameworks.sensorservice#1.0::ISensorManager
at android.os.Parcel.writeException(Parcel.java:2158)
at android.os.Binder.execTransactInternal(Binder.java:1178)
at android.os.Binder.execTransact(Binder.java:1123)
Caused by: android.os.RemoteException: Couldn't get ApplicationInfo for package android.frameworks.sensorservice#1.0::ISensorManager
at com.android.server.pm.PackageManagerService$PackageManagerNative.getTargetSdkVersionForPackage(PackageManagerService.java:23957)
at android.content.pm.IPackageManagerNative$Stub.onTransact(IPackageManagerNative.java:255)
at android.os.Binder.execTransactInternal(Binder.java:1159)
at android.os.Binder.execTransact(Binder.java:1123) 
Thank you so much!!
If you want to use camera streaming output with ML Kit, you can use the CameraX ImageAnalysis use case. It produces android.media.Image in YUV_420_888 format, which can be converted directly to an ML Kit InputImage.
Alternatively, you can use the CameraXSource library that ML Kit has just published. The sample code is here. It eliminates the boilerplate of setting up CameraX use cases and creates the ML Kit inputs for you internally from the CameraX output. Note that this is still a beta SDK; we are looking forward to your feedback.
To use the API, you need to add the following dependency to your app:
implementation 'com.google.mlkit:camera:16.0.0-beta1'
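For the first option, here is a minimal sketch of the CameraX ImageAnalysis hand-off. It assumes a FaceDetector named faceDetector is already built as in the question, and that the use case is bound to the lifecycle elsewhere; getImage() requires the @ExperimentalGetImage opt-in.
ImageAnalysis imageAnalysis = new ImageAnalysis.Builder()
        .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
        .build();
imageAnalysis.setAnalyzer(ContextCompat.getMainExecutor(this), imageProxy -> {
    android.media.Image mediaImage = imageProxy.getImage();
    if (mediaImage == null) {
        imageProxy.close();
        return;
    }
    // CameraX supplies the rotation, so no manual 90/270 guessing is needed
    InputImage image = InputImage.fromMediaImage(
            mediaImage, imageProxy.getImageInfo().getRotationDegrees());
    faceDetector.process(image)
            .addOnSuccessListener(faces -> { /* draw contours as before */ })
            .addOnFailureListener(Throwable::printStackTrace)
            // Always close the proxy so CameraX can deliver the next frame
            .addOnCompleteListener(task -> imageProxy.close());
});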

OpenCV - Android : java.lang.IllegalArgumentException: bmp == null

I'm trying to capture an image from the JavaCameraView and load it into another activity, where it is supposed to be processed (Hough circles).
private void takePhoto(final Mat rgba) {
// Determine the path and metadata for the photo.
final long currentTimeMillis = System.currentTimeMillis();
final String appName = getString(R.string.app_name);
final String galleryPath =
Environment.getExternalStoragePublicDirectory(
Environment.DIRECTORY_PICTURES).toString();
final String albumPath = galleryPath + File.separator +
appName;
final String photoPath = albumPath + File.separator +
currentTimeMillis + LabActivity.PHOTO_FILE_EXTENSION;
final ContentValues values = new ContentValues();
values.put(MediaStore.MediaColumns.DATA, photoPath);
values.put(Images.Media.MIME_TYPE,
LabActivity.PHOTO_MIME_TYPE);
values.put(Images.Media.TITLE, appName);
values.put(Images.Media.DESCRIPTION, appName);
values.put(Images.Media.DATE_TAKEN, currentTimeMillis);
// Ensure that the album directory exists.
File album = new File(albumPath);
if (!album.isDirectory() && !album.mkdirs()) {
Log.e(TAG, "Failed to create album directory at " +
albumPath);
onTakePhotoFailed();
return;
}
/*
// Try to create the photo.
Imgproc.cvtColor(rgba, mBgr, Imgproc.COLOR_RGBA2BGR, 3);
if (!Imgcodecs.imwrite(photoPath, mBgr)) {
Log.e(TAG, "Failed to save photo to " + photoPath);
onTakePhotoFailed();
}
Log.d(TAG, "Photo saved successfully to " + photoPath);
*/
Mat grayMat = new Mat();
Mat cannyEdges = new Mat();
Mat lines = new Mat();
Imgproc.cvtColor(rgba, mBgr, Imgproc.COLOR_RGBA2BGR, 3);
//Converting the image to grayscale
Imgproc.cvtColor(mBgr, grayMat, Imgproc.COLOR_BGR2GRAY);
Imgproc.Canny(grayMat, cannyEdges, 10, 100);
Imgproc.HoughLinesP(cannyEdges, lines, 1, Math.PI / 180, 50, 20, 20);
Mat houghLines = new Mat();
houghLines.create(cannyEdges.rows(), cannyEdges.cols(), CvType.CV_8UC1);
//Drawing lines on the image
for (int i = 0; i < lines.cols(); i++) {
double[] points = lines.get(0, i);
double x1, y1, x2, y2;
x1 = points[0];
y1 = points[1];
x2 = points[2];
y2 = points[3];
Point pt1 = new Point(x1, y1);
Point pt2 = new Point(x2, y2);
//Drawing lines on an image
Imgproc.line(houghLines, pt1, pt2, new Scalar(255, 0, 0), 1);
}
//Converting Mat back to Bitmap
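// NOTE: crash site: currentBitmap is never created before this call (see the answer below)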
Utils.matToBitmap(houghLines, currentBitmap);
Log.d(TAG, "Photo saved successfully to " + photoPath);
// Try to insert the photo into the MediaStore.
Uri uri;
try {
uri = getContentResolver().insert(
Images.Media.EXTERNAL_CONTENT_URI, values);
} catch (final Exception e) {
Log.e(TAG, "Failed to insert photo into MediaStore");
e.printStackTrace();
// Since the insertion failed, delete the photo.
File photo = new File(photoPath);
if (!photo.delete()) {
Log.e(TAG, "Failed to delete non-inserted photo");
}
onTakePhotoFailed();
return;
}
// Open the photo in LabActivity.
final Intent intent = new Intent(this, LabActivity.class);
intent.putExtra(LabActivity.EXTRA_PHOTO_URI, uri);
intent.putExtra(LabActivity.EXTRA_PHOTO_DATA_PATH,
photoPath);
runOnUiThread(new Runnable() {
@Override
public void run() {
startActivity(intent);
}
});
}
The error occurs after I click the capture option.
12-07 00:15:45.420 9205-9933/? E/AndroidRuntime﹕ FATAL EXCEPTION: Thread-8672
Process: com.example.alexies.cameratesting, PID: 9205
java.lang.IllegalArgumentException: bmp == null
at org.opencv.android.Utils.matToBitmap(Utils.java:122)
at org.opencv.android.Utils.matToBitmap(Utils.java:132)
at com.example.alexies.cameratesting.MainActivity.takePhoto(MainActivity.java:380)
currentBitmap is null in your code.
Either you didn't copy the part where it is assigned a bitmap value, or it is never assigned. If part of your code is missing, please add it to your question; if not, your problem is that you never create the bitmap.
EDIT
You never initialize currentBitmap. The docs state that the provided bitmap must be the same size as the Mat object (your houghLines) and that its config should be ARGB_8888 or RGB_565.
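A minimal sketch of the missing initialization, placed just before the conversion in takePhoto (variable names as in the question):
// The Bitmap must match the Mat's dimensions and be ARGB_8888 or RGB_565
currentBitmap = Bitmap.createBitmap(houghLines.cols(), houghLines.rows(),
        Bitmap.Config.ARGB_8888);
Utils.matToBitmap(houghLines, currentBitmap);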

BitmapFactory Not working in Android

This line of code always returns null after cropping:
this.bmp = BitmapFactory.decodeFile(this.imageFileUri.getPath());
This is my crop method:
private void performCrop() {
this.imageFileUri = Uri.fromFile(this.file);
Intent var1 = new Intent("com.android.camera.action.CROP");
var1.setDataAndType(this.imageFileUri, "image/*");
System.out.println(this.imageFileUri.getPath());
var1.putExtra("crop", "true");
var1.putExtra("scale", true);
var1.putExtra("return-data", true);
// var1.putExtra("output", this.imageFileUri);
var1.putExtra(MediaStore.EXTRA_OUTPUT, this.imageFileUri);
this.startActivityForResult(var1, 1);
}
I have tried using a custom method to return the Bitmap, but it still returns null. This is the method:
public static Bitmap decodeFile(String path) {
int orientation;
try {
if (path == null) {
return null;
}
// decode image size only first (inJustDecodeBounds skips pixel allocation)
BitmapFactory.Options o = new BitmapFactory.Options();
o.inJustDecodeBounds = true;
BitmapFactory.decodeFile(path, o);
// Find the correct scale value. It should be a power of 2.
final int REQUIRED_SIZE = 70;
int width_tmp = o.outWidth, height_tmp = o.outHeight;
int scale = 1;
while (width_tmp / 2 >= REQUIRED_SIZE && height_tmp / 2 >= REQUIRED_SIZE) {
width_tmp /= 2;
height_tmp /= 2;
scale *= 2;
}
BitmapFactory.Options o2 = new BitmapFactory.Options();
o2.inSampleSize = scale;
Bitmap bm = BitmapFactory.decodeFile(path, o2);
Bitmap bitmap = bm;
ExifInterface exif = new ExifInterface(path);
orientation = exif
.getAttributeInt(ExifInterface.TAG_ORIENTATION, 1);
Log.e("ExifInteface .........", "rotation =" + orientation);
// exif.setAttribute(ExifInterface.ORIENTATION_ROTATE_90, 90);
Log.e("orientation", "" + orientation);
Matrix m = new Matrix();
if ((orientation == ExifInterface.ORIENTATION_ROTATE_180)) {
m.postRotate(180);
// m.postScale((float) bm.getWidth(), (float) bm.getHeight());
// if(m.preRotate(90)){
Log.e("in orientation", "" + orientation);
bitmap = Bitmap.createBitmap(bm, 0, 0, bm.getWidth(),
bm.getHeight(), m, true);
return bitmap;
} else if (orientation == ExifInterface.ORIENTATION_ROTATE_90) {
m.postRotate(90);
Log.e("in orientation", "" + orientation);
bitmap = Bitmap.createBitmap(bm, 0, 0, bm.getWidth(),
bm.getHeight(), m, true);
return bitmap;
} else if (orientation == ExifInterface.ORIENTATION_ROTATE_270) {
m.postRotate(270);
Log.e("in orientation", "" + orientation);
bitmap = Bitmap.createBitmap(bm, 0, 0, bm.getWidth(),
bm.getHeight(), m, true);
return bitmap;
}
return bitmap;
} catch (Exception e) {
return null;
}
}
And I call the above method this way:
this.bmp = MyClass.decodeFile(this.imageFileUri.getPath());
Kindly assist!
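For what it's worth, a common cause here is decoding the file before the crop activity has finished writing it. A minimal sketch of reading the result back only after the crop succeeds (request code 1 matches the startActivityForResult call above; whether the unofficial com.android.camera.action.CROP activity honors EXTRA_OUTPUT is an assumption, and with EXTRA_OUTPUT set, "return-data" should usually be false):
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == 1 && resultCode == RESULT_OK) {
        // The crop activity wrote into imageFileUri, so decode from that path
        this.bmp = BitmapFactory.decodeFile(this.imageFileUri.getPath());
    }
}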

Adding JPG images together using Java

I am trying to take several JPG images with the same dimensions (30*30) and combine them into a single image, like this:
Image i = new BufferedImage(30, 30, BufferedImage.TYPE_INT_ARGB);
Graphics2D g2 = (Graphics2D) i.getGraphics();
if (this instanceof Node) {
Image img;
img = getImageFromFile(Node.icon);
g2.drawImage(img, 0, 0, null);
}
if(this instanceof ForceNode){
Image img;
img = getImageFromFile(ForceNode.forceicon);
g2.drawImage(img, 0, 0, null);
}
if(this instanceof TunnelNode){
Image img;
img = getImageFromFile(TunnelNode.tunnelicon);
g2.drawImage(img, 0, 0, null);
}
....
public Image getImageFromFile(File file) {
Image image = null;
try {
image = ImageIO.read(file);
} catch (IOException e) {
Logger.getLogger(HackerGame.class.getName()).log(Level.SEVERE, null, e);
return null;
}
return image;
}
I realize there are some issues with Graphics2D not being strictly necessary, but my issue is this:
These images need to be layered on top of each other to create a whole image. Each image is a small area of the whole picture, and they need to be put on top of (not next to) each other to create the actual image. The problem right now, however, is that the last drawImage call overwrites the entire image, so I am left with the last "bit of image" instead of my compiled image.
I suspect this is because the white areas of my pictures are not being treated as transparent, but how do I get around this? I have next to no experience with image encoding, so I am sort of going by trial and error. :)
Anyway, HELP!
Solution:
public void generateIcon() {
BufferedImage i = new BufferedImage(30, 30, BufferedImage.TYPE_INT_ARGB);
if (this instanceof Node) {
i = compileImages(i, Node.icon);
}
if(this instanceof ForceNode){
i = compileImages(i, ForceNode.forceicon);
}
if(this instanceof TunnelNode){
i = compileImages(i, TunnelNode.tunnelicon);
}
if (this instanceof EntranceNode) {
i = compileImages(i, EntranceNode.entranceicon);
}
if (this instanceof NetworkNode) {
i = compileImages(i, NetworkNode.networkicon);
}
if(this instanceof DataNode){
i = compileImages(i, DataNode.dataicon);
}
//if(this instanceof )
nodeicon = i;
}
public BufferedImage compileImages(BufferedImage image, File f) {
BufferedImage im = null;
try {
im = ImageIO.read(f);
for(int i = 0 ; i<image.getWidth();i++){
for(int j = 0 ; j<image.getHeight();j++){
int rgb = im.getRGB(i, j);
//System.out.println(i + " " + j + " " + rgb);
if(!(rgb < 1 && rgb > -2)){
image.setRGB(i, j, rgb);
//System.out.println("Printing " + i + " " + j + " " + rgb);
}
}
}
} catch (IOException e) {
Logger.getLogger(HackerGame.class.getName()).log(Level.SEVERE, null, e);
return null;
}
return image;
}
Iterate over the pixels of the source images. If a pixel is not white (use getRGB(x, y) to compare), write it to the destination image.
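If you would rather keep using drawImage, an alternative sketch is to pre-filter each tile so near-white pixels become transparent, and then let Graphics2D compositing layer the tiles as usual; the 0xF0 threshold is an assumption:
// Map near-white pixels to fully transparent so drawImage layers the
// tiles instead of overwriting them (uses java.awt.image.BufferedImage).
public static BufferedImage whiteToTransparent(BufferedImage src) {
    BufferedImage out = new BufferedImage(
            src.getWidth(), src.getHeight(), BufferedImage.TYPE_INT_ARGB);
    for (int y = 0; y < src.getHeight(); y++) {
        for (int x = 0; x < src.getWidth(); x++) {
            int rgb = src.getRGB(x, y);
            int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
            boolean nearWhite = r >= 0xF0 && g >= 0xF0 && b >= 0xF0;
            out.setRGB(x, y, nearWhite ? 0x00000000 : rgb);
        }
    }
    return out;
}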
