I'm attempting to access the raw feed of Android's front-facing camera. By default, the front-facing camera's preview is flipped horizontally so users can see themselves as if looking into a mirror. That's great, but not what I need: my application needs to display a real-time feed of the front-facing camera without the mirror flip. What's the best way to get the raw feed? Is there some way to disable the automatic flipping, or should I flip it back in code myself?
If you want to use a front-facing camera for barcode scanning, you can use a TextureView and apply a transformation matrix to it. When the texture is updated, you can read the image data and use that.
See https://github.com/hadders/camera-reverse
Specifically, from MainActivity.java:
mCamera.setDisplayOrientation(90); // rotate the preview into portrait

// Flip the TextureView horizontally to cancel the automatic
// mirroring applied to front-camera previews.
Matrix matrix = new Matrix();
matrix.setScale(-1, 1);
// The negative scale reflects the view off-screen to the left;
// translating by the view's width brings it back into place.
matrix.postTranslate(width, 0);
mTextureView.setTransform(matrix);
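Note that this works because TextureView accepts an arbitrary matrix via setTransform(); a plain SurfaceView has no equivalent hook, which is presumably why the sample is built around a TextureView.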
The data from the front camera is as the camera sees it, looking at you: the left side of its image is your right side. I think this is what you want already? When put onto a SurfaceView it is flipped so it acts as you say, but that's a separate, cosmetic transformation.
At least, this is how every device I've seen works and I've looked hard at this to implement front camera support in Barcode Scanner / Barcode Scanner+.
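If you want to verify this on a device, here is a minimal sketch using the legacy Camera API's preview callback (assuming mCamera is an already-opened front camera); the byte[] it hands you is the sensor's un-mirrored frame, regardless of how the preview surface displays it:

mCamera.setPreviewCallback(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // data is the raw NV21 preview frame straight from the sensor;
        // it is not mirrored, even though the on-screen preview is.
        Camera.Size size = camera.getParameters().getPreviewSize();
        // size.width * size.height luma bytes, followed by interleaved
        // chroma; feed this directly to your decoder.
    }
});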
I am creating an app where I need to apply filters to my video while recording it, for example the way filters are applied to video in the Retrica app. I am using the Camera2 API, as I have not been able to find anything about this in CameraX.
Right now I am applying filters to the video the following way:
captureRequestBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
// Apply one of the device's built-in color effects to the stream.
captureRequestBuilder.set(CaptureRequest.CONTROL_EFFECT_MODE,
        CaptureRequest.CONTROL_EFFECT_MODE_SEPIA);
There are some default values we can use to apply such filters in the Camera2 API. What I want is to also add custom filter effects to the video. How can I add such filters?
CONTROL_EFFECT_MODE has a small list of effects that the device implements somewhere in its camera code.
If you want to create your own, you need to send the camera data to somewhere you can process it. A common choice is using OpenGL ES or Vulkan for your image processing step. For that, you need to set up an OpenGL context, create a SurfaceTexture and bind it to a GL texture, and then send camera data to the Surface created from the SurfaceTexture. Then you can write a fragment shader that edits the texture and writes that out.
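A minimal sketch of that plumbing, assuming an EGL context is already current on this thread and that previewWidth/previewHeight are the preview size you selected:

// Create a GL texture of the "external" type and wrap it in a SurfaceTexture.
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);

SurfaceTexture surfaceTexture = new SurfaceTexture(tex[0]);
surfaceTexture.setDefaultBufferSize(previewWidth, previewHeight);
Surface cameraTarget = new Surface(surfaceTexture);

// Add cameraTarget as an output of your capture session and request.
// Then, for each frame on the GL thread, call
// surfaceTexture.updateTexImage() and draw a full-screen quad that
// samples the external texture (samplerExternalOES) through a fragment
// shader implementing your custom filter.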
You can also do the editing in Java/native code using an ImageReader to get YUV_420_888 images.
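For the CPU route, a sketch along these lines (previewWidth/previewHeight and backgroundHandler are assumed to be yours):

ImageReader reader = ImageReader.newInstance(
        previewWidth, previewHeight, ImageFormat.YUV_420_888, /* maxImages */ 3);
reader.setOnImageAvailableListener(r -> {
    Image image = r.acquireLatestImage();
    if (image == null) return;
    Image.Plane[] planes = image.getPlanes(); // Y, U and V planes
    // ... run your filter over the plane buffers here ...
    image.close(); // always close, or the reader runs out of buffers
}, backgroundHandler);
// Add reader.getSurface() as an output target of the capture session.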
All of that requires a lot of scaffolding of your own, unfortunately.
I've been following the Camera2 example (android/camera-samples/Camera2Video) to create an abstraction over the Camera2 library. The idea of my abstraction is to give the user the ability to use a Camera view from React Native, so it's just a <Camera> view which can be whatever size/aspect ratio.
While this works out of the box on iOS, I can't seem to get the preview on Android to display "what the camera sees" in the correct aspect ratio.
The official example from Android works like this:
They create a custom SurfaceView extension that should automatically fit to the correct aspect ratio (see: AutoFitSurfaceView)
They use that AutoFitSurfaceView in their layout
They add a listener to the AutoFitSurfaceView's Surface to find out when it has been created (source)
Once the Surface has been created, they call getPreviewOutputSize(...) to get the best matching camera preview size (e.g. so you don't stream 4K for a 1080p screen; that's wasted pixels)
Then they pass the best matching camera preview size to the AutoFitSurfaceView::setAspectRatio(...) function
By knowing the desired aspect ratio, the AutoFitSurfaceView should then automatically perform a center-crop transform in its onMeasure override (rough sketch below)
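Here is my paraphrase of that onMeasure idea, not the sample's exact code, assuming an aspectRatio field set by setAspectRatio(...):

@Override
protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
    super.onMeasure(widthMeasureSpec, heightMeasureSpec);
    int width = getMeasuredWidth();
    int height = getMeasuredHeight();
    if (aspectRatio == 0f) {
        setMeasuredDimension(width, height); // no preview size known yet
        return;
    }
    // Grow whichever dimension is too small so the preview covers the
    // view; the parent clips the overflow, giving a center-crop look.
    if (width < height * aspectRatio) {
        width = Math.round(height * aspectRatio);
    } else {
        height = Math.round(width / aspectRatio);
    }
    setMeasuredDimension(width, height);
}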
If you read the source code of their getPreviewOutputSize(...) function, you might notice that it uses a Display to find the best matching preview size. If I understood the code correctly, this only works when the camera preview (AutoFitSurfaceView) is exactly the same size as the device's screen. That seems poorly designed, as there are lots of cases where it simply isn't true. In my case, the camera view has a bit of bottom spacing/margin, so it doesn't fill the screen and therefore ends up with a weird resolution (1080x1585 on a 1080x1920 screen).
With that long introduction, here comes my question: How do I actually perform a correct center crop transform (aka scaleType = CENTER_CROP) on my SurfaceView? I've tried the following:
Set the size of my SurfaceView using SurfaceHolder::setFixedSize(...), but that didn't change anything at all
Remove their getPreviewOutputSize(...) stuff and simply use the highest resolution available
Use the Android View properties scaleX and scaleY to scale the view by the aspect-ratio difference of the view <-> camera input scaler size (this somewhat worked, but is giving me errors if I try to use high-speed capture: Surface size 1080x1585 is not part of the high speed supported size list [1280x720, 1920x1080])
Any help appreciated!
I am making an Android app where the camera preview is a circle, so that it only captures the user's face, and the area outside the circle is black.
That would imply making a custom camera yourself instead of using the default one.
Refer to this answer for how to do that: https://stackoverflow.com/a/15392209/7528995
You could also use the normal camera instead, since making the user fit their head inside the circular preview would be a bit inconvenient. After capturing the photo you could simply put it into a CircleImageView.
To do this you can refer to this answer: https://stackoverflow.com/a/36613446/7528995
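For that second approach, a small sketch, assuming the de.hdodenhof:circleimageview library and a jpegData byte array from your onPictureTaken callback (the view id is hypothetical):

// Decode the captured JPEG and let CircleImageView mask it to a circle.
Bitmap photo = BitmapFactory.decodeByteArray(jpegData, 0, jpegData.length);
CircleImageView avatar = findViewById(R.id.avatar); // hypothetical id
avatar.setImageBitmap(photo);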
I wish to know how I can use the front camera with the ZXing library on Android to scan a 2D QR code. What code is required?
In short, you'll have to fork the source and replace all camera-related code with code that uses the front camera. If you are asking for each and every place where this needs to be done, that is beyond the scope of Stack Overflow.
Here is a thread that tells you how to use a front camera:
Android front camera
The rest you will have to do yourself and you can always come back if you have more specific problems.
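For reference, the core of that approach with the legacy Camera API looks like this; the idea is to replace ZXing's Camera.open() call with something along these lines:

// Find and open the front-facing camera instead of the default back one.
int cameraCount = Camera.getNumberOfCameras();
Camera.CameraInfo info = new Camera.CameraInfo();
for (int i = 0; i < cameraCount; i++) {
    Camera.getCameraInfo(i, info);
    if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
        camera = Camera.open(i); // use this instead of Camera.open()
        break;
    }
}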
I would like to refer you to Add option to flip image to accommodate front cameras, which is related to QR code and front camera issues.
Currently I am developing an application for decoding barcodes using mobile phones.
I have a problem with how to draw a line or a square on the camera screen to easily capture the barcode.
What is the easiest way of doing it?
Unfortunately this isn't as easy as it sounds. If you have a preview image from a phone's camera, it is often rendered as an overlay. This means the camera preview doesn't actually form part of your application's canvas, and you can't interact directly with its pixels; the phone simply draws the preview on top of your application, completely out of your control.
If you draw a line on your screen, then it will be drawn underneath the preview image.
The way around this isn't too pretty. You need to actually capture an image from the camera. Unfortunately this means capturing a JPEG or a PNG file into a byte buffer. You then load this image using Image.createImage and render that to the screen. You can then safely draw on top of that.
This also has the undesirable downside of giving you an appalling frame-rate. You might want to enumerate all the possible file formats you can capture in and try them all to see which one is quickest.
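A sketch of that capture-and-redraw loop using MMAPI (JSR-135), where videoControl is assumed to be the VideoControl of your initialized Player and scanX/scanY/scanW/scanH describe your guide box (supported encodings vary by handset):

// Grab one frame as an encoded image and decode it into an Image.
byte[] raw = videoControl.getSnapshot("encoding=jpeg");
Image frame = Image.createImage(raw, 0, raw.length);

// In your Canvas subclass's paint(Graphics g), draw the frame first,
// then the barcode guide box safely on top of it.
g.drawImage(frame, 0, 0, Graphics.TOP | Graphics.LEFT);
g.setColor(0xFF0000); // red
g.drawRect(scanX, scanY, scanW, scanH);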
You can do this by using OverlayControl, assuming that your target devices support it.
I think I remember seeing a good example at the Sony Ericsson developer forums.
Edit: I found this (it does not involve use of OverlayControl).