I'm having a problem with Android's camera2 API.
My end goal is to have a byte array which I can edit using OpenCV, while displaying the preview to the user (e.g. an OCR with a live preview).
I've created a capture request and added an ImageReader as a target. Then, in the OnImageAvailableListener, I get the image, convert it to a bitmap, and display it in an ImageView (rotating it as well).
My problem is that after a few seconds the preview stalls (after gradually slowing down), and in the log I'm getting the following error: E/BufferItemConsumer: [ImageReader-1225x1057f100m2-18869-0] Failed to release buffer: Unknown error -1 (1)
As you can see in my code, I have already tried closing the Image after getting my byte[] from it.
I've also tried clearing the buffer.
I've tried closing the ImageReader but that of course stopped me from getting any further images (throws an exception).
Can anyone please help me understand what I'm doing wrong? I've been scouring Google to no avail.
This is my OnImageAvailableListener; let me know if you need more of my code to assist:
private final ImageReader.OnImageAvailableListener mOnImageAvailableListener
        = new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image img = reader.acquireLatestImage();
        final ImageView iv = findViewById(R.id.camPrev);
        try {
            if (img == null) throw new NullPointerException("null img");
            ByteBuffer buffer = img.getPlanes()[0].getBuffer();
            byte[] data = new byte[buffer.remaining()];
            buffer.get(data);
            final Bitmap b = BitmapFactory.decodeByteArray(data, 0, data.length);
            runOnUiThread(new Runnable() {
                @Override
                public void run() {
                    iv.setImageBitmap(b);
                    iv.setRotation(90);
                }
            });
        } catch (NullPointerException ex) {
            showToast("img is null");
        } finally {
            if (img != null)
                img.close();
        }
    }
};
Edit - adding cameraStateCallback
private CameraDevice.StateCallback mCameraDeviceStateCallback = new CameraDevice.StateCallback() {
    @Override
    public void onOpened(CameraDevice cameraDevice) {
        mCameraDevice = cameraDevice;
        showToast("Connected to camera!");
        createCameraPreviewSession();
    }

    @Override
    public void onDisconnected(CameraDevice cameraDevice) {
        closeCamera();
    }

    @Override
    public void onError(CameraDevice cameraDevice, int i) {
        closeCamera();
    }
};

private void closeCamera() {
    if (mCameraDevice != null) {
        mCameraDevice.close();
        mCameraDevice = null;
    }
}
You seem to be using setRepeatingRequest() with a JPEG-format ImageReader. Repeating JPEG capture may not be fully supported on your device, and it also depends on the image resolution you choose. Normally, we use createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW) in these cases and get YUV (or raw) frames from the ImageReader.
I would try choosing a low resolution for JPEG: maybe that will be enough to keep the ImageReader running.
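A rough sketch of that setup follows (mBackgroundHandler is an assumed field here; mCameraDevice and mOnImageAvailableListener come from your code). Note that with YUV_420_888 you can no longer feed the plane buffer to BitmapFactory.decodeByteArray(), which only understands compressed formats like JPEG; you'd convert the YUV planes yourself (e.g. with OpenCV) before display.

try {
    // Sketch only: a modest-resolution YUV ImageReader fed by a repeating
    // preview-template request instead of repeated JPEG captures.
    final ImageReader reader = ImageReader.newInstance(640, 480,
            ImageFormat.YUV_420_888, /* maxImages */ 2);
    reader.setOnImageAvailableListener(mOnImageAvailableListener, mBackgroundHandler);

    final CaptureRequest.Builder builder =
            mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
    builder.addTarget(reader.getSurface());

    mCameraDevice.createCaptureSession(Arrays.asList(reader.getSurface()),
            new CameraCaptureSession.StateCallback() {
                @Override
                public void onConfigured(CameraCaptureSession session) {
                    try {
                        session.setRepeatingRequest(builder.build(), null, mBackgroundHandler);
                    } catch (CameraAccessException e) {
                        e.printStackTrace();
                    }
                }

                @Override
                public void onConfigureFailed(CameraCaptureSession session) {
                }
            }, mBackgroundHandler);
} catch (CameraAccessException e) {
    e.printStackTrace();
}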
Related
I am building an app which uses the Camera2 API to take pictures. The thing is, I need the camera to take a picture without showing a preview. So far I've managed to do it by dumping (and adapting) the code from an activity into a service, and it works like a charm, except for the fact that it is not focusing. In previous versions I had a state machine that handled focusing on the preview via a separate CaptureRequest.Builder, but I can't make it work without creating a new CaptureRequest.Builder in the service.
I followed the Stack Overflow discussion How to lock focus in camera2 api, android? but I did not manage to make it work.
My code does the following:
First I create a camera session once the camera has been opened.
public void createCameraSession() {
    try {
        // Here, we create a CameraCaptureSession for camera preview.
        cameraDevice.createCaptureSession(Arrays.asList(imageReader.getSurface()),
                new CameraCaptureSession.StateCallback() {
                    @Override
                    public void onConfigured(@NonNull CameraCaptureSession cameraCaptureSession) {
                        // The camera is already closed
                        if (null == cameraDevice) {
                            return;
                        }
                        // When the session is ready, we start displaying the preview.
                        mCaptureSession = cameraCaptureSession;
                        camera2TakePicture();
                    }

                    @Override
                    public void onConfigureFailed(
                            @NonNull CameraCaptureSession cameraCaptureSession) {
                    }
                }, null
        );
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}
Then on that camera session I call my method "camera2TakePicture()":
protected void camera2TakePicture() {
    if (null == cameraDevice) {
        return;
    }
    try {
        Surface readerSurface = imageReader.getSurface();
        List<Surface> outputSurfaces = new ArrayList<Surface>(2);
        outputSurfaces.add(readerSurface);

        final CaptureRequest.Builder captureBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
        captureBuilder.addTarget(readerSurface);
        captureBuilder.set(CaptureRequest.CONTROL_MODE, CameraMetadata.CONTROL_MODE_AUTO);
        captureBuilder.set(CaptureRequest.CONTROL_AF_MODE, CameraMetadata.CONTROL_AF_MODE_AUTO);
        captureBuilder.set(CaptureRequest.CONTROL_AF_TRIGGER, CameraMetadata.CONTROL_AF_TRIGGER_START);
        //MeteringRectangle meteringRectangle = getAFRegion();
        //captureBuilder.set(CaptureRequest.CONTROL_AF_REGIONS, new MeteringRectangle[] {meteringRectangle});

        /**** TO BE USED ONCE SAMSUNG TABLETS HAVE BEEN REPLACED ****/
        boolean samsungReplaced = false;
        if (Boolean.parseBoolean(getPreferenceValue(this, "manualCamSettings"))) {
            int exposureCompensation = Integer.parseInt(getPreferenceValue(this, "exposureCompensation"));
            captureBuilder.set(CaptureRequest.CONTROL_AE_EXPOSURE_COMPENSATION, exposureCompensation);
            if (samsungReplaced) {
                //Exposure
                captureBuilder.set(CaptureRequest.CONTROL_AE_MODE, CameraMetadata.CONTROL_AE_MODE_OFF);
                Float shutterSpeed = 1 / Float.parseFloat(getPreferenceValue(this, "camSSpeed"));
                Long exposureTimeInNanoSec = new Long(Math.round(shutterSpeed * Math.pow(10, 9)));
                captureBuilder.set(CaptureRequest.SENSOR_EXPOSURE_TIME, exposureTimeInNanoSec);
                captureBuilder.set(CaptureRequest.SENSOR_FRAME_DURATION, 10 * exposureTimeInNanoSec);
                //ISO
                int ISO = Integer.parseInt(getPreferenceValue(this, "camISO"));
                captureBuilder.set(CaptureRequest.SENSOR_SENSITIVITY, ISO);
                //Aperture
                Float aperture = Float.parseFloat(getPreferenceValue(this, "camAperture"));
                captureBuilder.set(CaptureRequest.LENS_APERTURE, aperture);
            }
        }

        // Orientation
        WindowManager window = (WindowManager) getSystemService(Context.WINDOW_SERVICE);
        Display display = window.getDefaultDisplay();
        int rotation = display.getRotation();
        captureBuilder.set(CaptureRequest.JPEG_ORIENTATION, ORIENTATIONS.get(rotation));

        CameraCaptureSession.CaptureCallback CaptureCallback
                = new CameraCaptureSession.CaptureCallback() {
            @Override
            public void onCaptureCompleted(@NonNull CameraCaptureSession session,
                                           @NonNull CaptureRequest request,
                                           @NonNull TotalCaptureResult result) {
                super.onCaptureCompleted(session, request, result);
                while (result.get(CaptureResult.CONTROL_AF_STATE) != CaptureResult.CONTROL_AF_STATE_FOCUSED_LOCKED) {
                    System.out.println("Not focused");
                }
                System.out.println("Focused");
            }
        };

        mCaptureSession.stopRepeating();
        mCaptureSession.abortCaptures();
        mCaptureSession.capture(captureBuilder.build(), CaptureCallback, null);
        captureBuilder.set(CaptureRequest.CONTROL_AF_TRIGGER, CameraMetadata.CONTROL_AF_TRIGGER_IDLE);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}
As you can see, I set CONTROL_AF_MODE to AUTO, then fire the AF_TRIGGER and launch the capture. I added a check in onCaptureCompleted(), but the AF_STATE never seems to reach FOCUSED_LOCKED; it stays in ACTIVE_SCAN.
What am I doing wrong?
In your code snippet, you've stopped the repeating request and issued one capture request for the still image, but just that one.
Do you then go on to restart the repeating request? If you don't, there are no frames flowing through the camera, and AF cannot make progress.
So if you want to lock AF before you take a picture, you want to
Set AF_TRIGGER to START for a single capture only
Run the preview (the repeating request) until AF_STATE leaves ACTIVE_SCAN
Issue single capture for still image.
Being in the background or foreground doesn't really change any of this.
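A rough sketch of that sequence (previewBuilder, backgroundHandler and issueStillCapture() are placeholder names for your own repeating-request builder, handler and still-capture path; in a preview-less service the repeating request can simply target the ImageReader):

// Sketch only: keep a repeating request alive so AF can make progress,
// fire the AF trigger once, and take the still only after AF has settled.
private void lockFocusThenCapture() throws CameraAccessException {
    CameraCaptureSession.CaptureCallback afCallback = new CameraCaptureSession.CaptureCallback() {
        @Override
        public void onCaptureCompleted(CameraCaptureSession session,
                                       CaptureRequest request,
                                       TotalCaptureResult result) {
            Integer afState = result.get(CaptureResult.CONTROL_AF_STATE);
            if (afState != null && afState != CaptureResult.CONTROL_AF_STATE_ACTIVE_SCAN) {
                // AF finished scanning (focused or not); guard this in real code
                // so the still capture is only issued once.
                issueStillCapture();
            }
        }
    };

    // Frames must keep flowing for AF to converge.
    mCaptureSession.setRepeatingRequest(previewBuilder.build(), afCallback, backgroundHandler);

    // The AF trigger goes into a single capture layered on top of the repeating request.
    previewBuilder.set(CaptureRequest.CONTROL_AF_TRIGGER, CameraMetadata.CONTROL_AF_TRIGGER_START);
    mCaptureSession.capture(previewBuilder.build(), afCallback, backgroundHandler);
    previewBuilder.set(CaptureRequest.CONTROL_AF_TRIGGER, CameraMetadata.CONTROL_AF_TRIGGER_IDLE);
}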
I've been trying to run a TensorFlow model on Android. My approach was to first create a TensorFlow model (I used a pretrained MobileNetV2), train it on my own dataset, and then convert it to a .tflite model, which is supported on Android. Since I want to do real-time video analysis, I am also using the OpenCV library built for the Android SDK.
The part where I'm currently stuck is: how do I convert the input frame received from OpenCV's JavaCameraView and feed it to the .tflite model for inference? I found a few solutions for converting the Mat datatype to an input tensor, but nothing seems clear. Can someone help me out with this?
Edit: here's the code (I need help with the onCameraFrame method below).
public class MainActivity extends AppCompatActivity implements CameraBridgeViewBase.CvCameraViewListener2 {

    CameraBridgeViewBase cameraBridgeViewBase;
    BaseLoaderCallback baseLoaderCallback;
    // int counter = 0;
    Interpreter it;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        cameraBridgeViewBase = (JavaCameraView) findViewById(R.id.CameraView);
        cameraBridgeViewBase.setVisibility(SurfaceView.VISIBLE);
        cameraBridgeViewBase.setCvCameraViewListener(this);

        try {
            it = new Interpreter(loadModelFile(this));
        } catch (Exception e) {
            Toast.makeText(this, "Tf model didn't load", Toast.LENGTH_LONG).show();
        }

        //System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        baseLoaderCallback = new BaseLoaderCallback(this) {
            @Override
            public void onManagerConnected(int status) {
                super.onManagerConnected(status);
                switch (status) {
                    case BaseLoaderCallback.SUCCESS:
                        cameraBridgeViewBase.enableView();
                        break;
                    default:
                        super.onManagerConnected(status);
                        break;
                }
            }
        };
    }

    private MappedByteBuffer loadModelFile(Activity activity) throws IOException {
        AssetFileDescriptor fileDescriptor = activity.getAssets().openFd("model.tflite");
        FileInputStream inputStream = new FileInputStream(fileDescriptor.getFileDescriptor());
        FileChannel fileChannel = inputStream.getChannel();
        long startOffset = fileDescriptor.getStartOffset();
        long declaredLength = fileDescriptor.getDeclaredLength();
        return fileChannel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength);
    }

    @Override
    public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
        //how to convert inputFrame to Input Tensor???
    }

    @Override
    public void onCameraViewStarted(int width, int height) {
    }

    @Override
    public void onCameraViewStopped() {
    }

    @Override
    protected void onResume() {
        super.onResume();
        if (!OpenCVLoader.initDebug()) {
            Toast.makeText(getApplicationContext(), "There's a problem, yo!", Toast.LENGTH_SHORT).show();
        } else {
            baseLoaderCallback.onManagerConnected(baseLoaderCallback.SUCCESS);
        }
    }

    @Override
    protected void onPause() {
        super.onPause();
        if (cameraBridgeViewBase != null) {
            cameraBridgeViewBase.disableView();
        }
    }

    @Override
    protected void onDestroy() {
        super.onDestroy();
        if (cameraBridgeViewBase != null) {
            cameraBridgeViewBase.disableView();
        }
    }
}
I suggest that you convert the Mat into a FloatBuffer as follows:
Mat floatMat = new Mat();
mat.convertTo(floatMat, CV_32F);
FloatBuffer floatBuffer = floatMat.createBuffer();
Note that the createBuffer() method is found in the Mat class from org.bytedeco.opencv.opencv_core (JavaCV), not in org.opencv.core.Mat.
Then you can create a tensor from the floatBuffer variable:
Tensor.create(new long[]{1, image_height, image_width, 3}, floatBuffer)
This creates a tensor containing a batch of one image (the leading 1), where the image has dimensions (image_height, image_width, 3), which you should know and substitute. Most image-processing and machine-learning libraries use the first dimension for the image height ("rows"), the second for the width ("columns"), and the third for the number of channels (RGB = 3 channels). If you have a grayscale image, replace 3 with 1.
Please check whether you can feed this tensor directly to your model or whether you first have to perform some pre-processing steps, such as normalization.
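If you'd rather stay with the stock org.opencv.core.Mat and the TFLite Interpreter you already load (the it field), a rough sketch of onCameraFrame() could look like the following (it needs imports for org.opencv.imgproc.Imgproc, org.opencv.core.Size, java.nio.ByteBuffer and java.nio.ByteOrder). The 224x224 input size, the [0, 1] normalization and NUM_CLASSES are assumptions about your converted MobileNetV2; check your actual model.

@Override
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    Mat rgba = inputFrame.rgba();

    // Drop the alpha channel and resize to the model's expected input size.
    Mat rgb = new Mat();
    Imgproc.cvtColor(rgba, rgb, Imgproc.COLOR_RGBA2RGB);
    Imgproc.resize(rgb, rgb, new Size(224, 224));

    // Pack the pixels into a direct float buffer, normalized to [0, 1].
    ByteBuffer input = ByteBuffer.allocateDirect(224 * 224 * 3 * 4).order(ByteOrder.nativeOrder());
    byte[] pixels = new byte[224 * 224 * 3];
    rgb.get(0, 0, pixels);
    for (byte p : pixels) {
        input.putFloat((p & 0xFF) / 255.0f);
    }
    input.rewind();

    // Output shape [1, NUM_CLASSES] is a guess; match it to your model.
    float[][] output = new float[1][NUM_CLASSES];
    it.run(input, output);

    // Return the frame to be drawn on screen.
    return rgba;
}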
I need help with automatic flash using Android's camera2 API.
My current code works on one phone but not on another.
I've spent a few hours searching for solutions, but I'm unsuccessful.
My takePhoto code:
pictureTaken = false;
if (null == cameraDevice) {
    Log.e(TAG, "cameraDevice is null");
    return;
}
CameraManager manager = (CameraManager) getSystemService(Context.CAMERA_SERVICE);
try {
    int width = 1024;
    int height = 768;
    cv.setBackground(getResources().getDrawable(R.drawable.fotak_zeleny));
    ImageReader reader = ImageReader.newInstance(width, height, ImageFormat.JPEG, 1);
    List<Surface> outputSurfaces = new ArrayList<Surface>(2);
    outputSurfaces.add(reader.getSurface());
    outputSurfaces.add(new Surface(textureView.getSurfaceTexture()));

    final CaptureRequest.Builder captureBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
    captureBuilder.addTarget(reader.getSurface());
    captureBuilder.set(CaptureRequest.CONTROL_MODE, CameraMetadata.CONTROL_MODE_AUTO);
    captureBuilder.set(CaptureRequest.CONTROL_AF_MODE, CaptureRequest.CONTROL_AF_MODE_AUTO);
    captureBuilder.set(CaptureRequest.CONTROL_AF_TRIGGER, CaptureRequest.CONTROL_AF_TRIGGER_START);

    if (flashMode == FLASH_AUTO) {
        captureBuilder.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON_AUTO_FLASH);
    } else if (flashMode == FLASH_ON) {
        captureBuilder.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON_ALWAYS_FLASH);
    } else if (flashMode == FLASH_OFF) {
        captureBuilder.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_OFF);
    }

    // Orientation
    int rotation = getWindowManager().getDefaultDisplay().getRotation();
    captureBuilder.set(CaptureRequest.JPEG_ORIENTATION, ORIENTATIONS.get(rotation));

    final File file = new File(fileName);
    if (file.exists()) {
        file.delete();
    }
    etc....
My create camera preview code:
protected void createCameraPreview() {
    try {
        SurfaceTexture texture = textureView.getSurfaceTexture();
        assert texture != null;
        texture.setDefaultBufferSize(imageDimension.getWidth(), imageDimension.getHeight());
        Surface surface = new Surface(texture);
        captureRequestBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
        captureRequestBuilder.set(CaptureRequest.CONTROL_AE_LOCK, false);
        captureRequestBuilder.addTarget(surface);
        cameraDevice.createCaptureSession(Arrays.asList(surface), new CameraCaptureSession.StateCallback() {
            @Override
            public void onConfigured(@NonNull CameraCaptureSession cameraCaptureSession) {
                //The camera is already closed
                if (null == cameraDevice) {
                    return;
                }
                // When the session is ready, we start displaying the preview.
                cameraCaptureSessions = cameraCaptureSession;
                updatePreview();
            }

            @Override
            public void onConfigureFailed(@NonNull CameraCaptureSession cameraCaptureSession) {
            }
        }, null);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    } catch (NullPointerException ex) {
        ex.printStackTrace();
    }
}
This works fine on my LG phone, but it does not work on the Alcatel.
I've tried a lot of the ideas that are written here, without success.
Can anyone help me, please?
Big thanks.
(Sorry for my English.)
You should set your selected AE_MODE for the preview request as well, and update it whenever the user switches flash modes. In addition, you need to run the precapture sequence on any devices that are higher than LEGACY level.
Changing flash mode for just the single still capture request won't work correctly, since the phone won't have the opportunity to fire a preflash to properly calculate flash power.
Take a look at camera2basic for running the precapture sequence. It always sets AE mode to AE_MODE_AUTO_FLASH if possible, but the same code will work fine for the other flash modes (though you can skip the precapture sequence if flash is set to OFF, generally, as long as focus quality is OK).
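For the preview side, a minimal sketch follows, reusing your captureRequestBuilder, cameraCaptureSessions and flashMode; note AE_MODE_ON rather than OFF for the flash-off case, so auto-exposure keeps running with the flash disabled.

// Sketch: keep the repeating preview request's AE mode in sync with the user's
// flash choice, so metering and the pre-flash use the right mode. Re-issue the
// repeating request after changing it (once the session is configured).
try {
    if (flashMode == FLASH_AUTO) {
        captureRequestBuilder.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON_AUTO_FLASH);
    } else if (flashMode == FLASH_ON) {
        captureRequestBuilder.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON_ALWAYS_FLASH);
    } else {
        captureRequestBuilder.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON);
    }
    cameraCaptureSessions.setRepeatingRequest(captureRequestBuilder.build(), null, null);
} catch (CameraAccessException e) {
    e.printStackTrace();
}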
If you Command-click on CaptureRequest.CONTROL_AE_MODE_ON_AUTO_FLASH, you will see this:
The flash may be fired during a precapture sequence (triggered by {@link CaptureRequest#CONTROL_AE_PRECAPTURE_TRIGGER android.control.aePrecaptureTrigger}) and may be fired for captures for which the {@link CaptureRequest#CONTROL_CAPTURE_INTENT android.control.captureIntent} field is set to STILL_CAPTURE
That means you have to trigger the precapture sequence before capturing the picture.
You can look at Google's sample app for a detailed implementation: https://github.com/google/cameraview
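A condensed sketch of that trigger, reusing your captureRequestBuilder, cameraCaptureSessions and takePhoto() (camera2basic does the same thing with a small explicit state machine; register mPreviewCallback as the callback of your repeating preview request in updatePreview() so it actually sees the AE results):

// Sketch only: fire the AE precapture metering sequence on top of the repeating
// preview request, then take the still once AE has converged or flash is ready.
private boolean mWaitingForPrecapture = false;

private void runPrecaptureThenCapture() throws CameraAccessException {
    captureRequestBuilder.set(CaptureRequest.CONTROL_AE_PRECAPTURE_TRIGGER,
            CaptureRequest.CONTROL_AE_PRECAPTURE_TRIGGER_START);
    mWaitingForPrecapture = true;
    cameraCaptureSessions.capture(captureRequestBuilder.build(), mPreviewCallback, null);
    captureRequestBuilder.set(CaptureRequest.CONTROL_AE_PRECAPTURE_TRIGGER,
            CaptureRequest.CONTROL_AE_PRECAPTURE_TRIGGER_IDLE);
}

private final CameraCaptureSession.CaptureCallback mPreviewCallback = new CameraCaptureSession.CaptureCallback() {
    @Override
    public void onCaptureCompleted(CameraCaptureSession session,
                                   CaptureRequest request,
                                   TotalCaptureResult result) {
        if (!mWaitingForPrecapture) return;
        Integer aeState = result.get(CaptureResult.CONTROL_AE_STATE);
        if (aeState == null
                || aeState == CaptureResult.CONTROL_AE_STATE_CONVERGED
                || aeState == CaptureResult.CONTROL_AE_STATE_FLASH_REQUIRED) {
            mWaitingForPrecapture = false;
            takePhoto(); // your existing still-capture path
        }
    }
};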
In the end I just use the default camera application, because this camera API behaves differently on different phones.
I am building an image-processing program using Android camera2. Since the image format of each captured frame is YUV_420_888, I need to convert it to RGB efficiently for processing. I googled and read a lot (especially the following two links), and finally concluded that RenderScript may be the solution. However, I don't know how to use the yuv2rgb script in my code.
http://werner-dittmann.blogspot.jp/2016/03/using-android-renderscript-to-convert.html
Convert android.media.Image (YUV_420_888) to Bitmap
Currently, I use a TextureView surface to show the preview, and an ImageReader to capture each YUV_420_888 frame in the onImageAvailable callback.
protected void createCameraPreview() {
    try {
        SurfaceTexture texture = textureView.getSurfaceTexture();
        assert texture != null;
        texture.setDefaultBufferSize(imageDimension.getWidth(), imageDimension.getHeight());
        Surface surface = new Surface(texture);
        Surface mImageSurface = mImageReader.getSurface();
        captureRequestBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
        captureRequestBuilder.addTarget(surface);
        List<Surface> surfaces = new ArrayList<>();
        surfaces.add(surface);
        surfaces.add(mImageSurface);
        captureRequestBuilder.addTarget(mImageSurface);
        cameraCaptureSessions.setRepeatingRequest(captureRequestBuilder.build(), null, mBackgroundHandler);
        cameraDevice.createCaptureSession(surfaces, new CameraCaptureSession.StateCallback() {
            @Override
            public void onConfigured(@NonNull CameraCaptureSession cameraCaptureSession) {
                //The camera is already closed
                if (null == cameraDevice) {
                    return;
                }
                // When the session is ready, we start displaying the preview.
                cameraCaptureSessions = cameraCaptureSession;
                updatePreview();
            }

            @Override
            public void onConfigureFailed(@NonNull CameraCaptureSession cameraCaptureSession) {
                Toast.makeText(MainActivity.this, "Configuration change", Toast.LENGTH_SHORT).show();
            }
        }, null);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}
private final ImageReader.OnImageAvailableListener mOnImageAvailableListener = new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image img = null;
        img = reader.acquireNextImage(); // we got a YUV_420_888 frame here
        // transform to RGB format here?
        // image processing
    }
};
How should I update my code to achieve this (e.g., using yuv2rgb.rs)? Thanks.
The camera2 sample application HdrViewfinder, which uses RenderScript to do some image processing, may be helpful for how to connect up the camera and RenderScript: https://github.com/googlesamples/android-HdrViewfinder
It doesn't do YUV->RGB conversion, IIRC, and I think yuv2rgb.rs may be intended for a different YUV colorspace than what the camera produces (due to backwards-compatibility concerns - it existed before camera2). But it gets you to the point where you can write your own RS script to apply to camera data.
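If you'd rather not write your own kernel, one option is the built-in ScriptIntrinsicYuvToRGB intrinsic. The sketch below assumes you have already repacked the Image's three planes into a single NV21 byte array nv21 for a frame of size width x height (you do have to do that repacking yourself, respecting the planes' row and pixel strides):

// Sketch: NV21 bytes -> ARGB_8888 Bitmap via the RenderScript intrinsic.
RenderScript rs = RenderScript.create(this); // "this" = your Activity/Context
ScriptIntrinsicYuvToRGB yuvToRgb = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));

Type.Builder yuvType = new Type.Builder(rs, Element.U8(rs)).setX(nv21.length);
Allocation in = Allocation.createTyped(rs, yuvType.create(), Allocation.USAGE_SCRIPT);

Type.Builder rgbaType = new Type.Builder(rs, Element.RGBA_8888(rs)).setX(width).setY(height);
Allocation out = Allocation.createTyped(rs, rgbaType.create(), Allocation.USAGE_SCRIPT);

in.copyFrom(nv21);
yuvToRgb.setInput(in);
yuvToRgb.forEach(out);

Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
out.copyTo(bitmap);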
So I'm having trouble using Microsoft's Emotion API for Android. I have no issues running the Face API; I'm able to get the face rectangles, but I can't get the Emotion API to work. I am taking images using the built-in Android camera itself. Here is the code I am using:
private void detectAndFrame(final Bitmap imageBitmap)
{
    ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
    imageBitmap.compress(Bitmap.CompressFormat.PNG, 100, outputStream);
    ByteArrayInputStream inputStream =
            new ByteArrayInputStream(outputStream.toByteArray());

    AsyncTask<InputStream, String, List<RecognizeResult>> detectTask =
            new AsyncTask<InputStream, String, List<RecognizeResult>>() {
        @Override
        protected List<RecognizeResult> doInBackground(InputStream... params) {
            try {
                Log.e("i", "Detecting...");
                faces = faceServiceClient.detect(
                        params[0],
                        true,  // returnFaceId
                        false, // returnFaceLandmarks
                        null   // returnFaceAttributes: a string like "age, gender"
                );
                if (faces == null)
                {
                    Log.e("i", "Detection Finished. Nothing detected");
                    return null;
                }
                Log.e("i",
                        String.format("Detection Finished. %d face(s) detected",
                                faces.length));
                ImageView imageView = (ImageView) findViewById(R.id.imageView);
                InputStream stream = params[0];
                com.microsoft.projectoxford.emotion.contract.FaceRectangle[] rects = new com.microsoft.projectoxford.emotion.contract.FaceRectangle[faces.length];
                for (int i = 0; i < faces.length; i++) {
                    com.microsoft.projectoxford.face.contract.FaceRectangle rect = faces[i].faceRectangle;
                    rects[i] = new com.microsoft.projectoxford.emotion.contract.FaceRectangle(rect.left, rect.top, rect.width, rect.height);
                }
                List<RecognizeResult> result;
                result = client.recognizeImage(stream, rects);
                return result;
            } catch (Exception e) {
                Log.e("e", e.getMessage());
                Log.e("e", "Detection failed");
                return null;
            }
        }

        @Override
        protected void onPreExecute() {
            //TODO: show progress dialog
        }

        @Override
        protected void onProgressUpdate(String... progress) {
            //TODO: update progress
        }

        @Override
        protected void onPostExecute(List<RecognizeResult> result) {
            ImageView imageView = (ImageView) findViewById(R.id.imageView);
            imageView.setImageBitmap(drawFaceRectanglesOnBitmap(imageBitmap, faces));
            MediaStore.Images.Media.insertImage(getContentResolver(), imageBitmap, "AnImage", "Another image");
            if (result == null) return;
            for (RecognizeResult res : result) {
                Scores scores = res.scores;
                Log.e("Anger: ", ((Double) scores.anger).toString());
                Log.e("Neutral: ", ((Double) scores.neutral).toString());
                Log.e("Happy: ", ((Double) scores.happiness).toString());
            }
        }
    };
    detectTask.execute(inputStream);
}
I keep getting an HTTP 400 error on the POST request, indicating some sort of issue with the JSON or the face rectangles, but I'm not sure where to start debugging this.
You're using the stream twice, so the second time around you're already at the end of it. Either reset the stream, or simply call the Emotion API without rectangles (i.e. skip the call to the Face API); the Emotion API will determine the face rectangles for you.
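In code, inside doInBackground() (the surrounding try block already handles the IOException that InputStream.reset() declares), the reset option is one extra line before the Emotion call:

// The Face API call consumed the stream; rewind it before reusing it.
// ByteArrayInputStream marks position 0 at construction, so reset() goes back to the start.
InputStream stream = params[0];
stream.reset();
List<RecognizeResult> result = client.recognizeImage(stream, rects);

Alternatively, build a second ByteArrayInputStream from outputStream.toByteArray() and pass that, or drop the Face API call and the rectangles entirely if your Emotion client exposes the rectangle-free recognizeImage(InputStream) overload.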