I am using a quickstart Firebase ML Kit implementation from this link:
https://github.com/ankitjamuar/android-firebase-mlkit
When I capture an image through the ML Kit camera, its resolution is 768x1024. But when I take a picture with my device's native camera, it gives me an 8 MP image, i.e. 3264x2448. So somehow I want to use that higher-resolution image in ML Kit, since that is what my device can capture at full capacity. How can I increase the ML Kit camera picture quality?
When I extract the face from the ML Kit camera picture, it also has low quality, and I lose accuracy. One more thing: I am using the front camera throughout.
Please help, I'm stuck. If possible, I'd like to make the face detection accuracy more reliable.
Check StillImageActivity.java, lines 352 and 353:
targetWidth = isLandScape ? 1024 : 768;
targetHeight = isLandScape ? 768 : 1024;
change the resolution as per your requirement.
Let me know if this solves your issue
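As a sketch, the change could compute the target size from the sensor resolution instead of hard-coding 1024x768 (the 3264x2448 figure comes from the question; whether the detector copes with an image that large depends on device memory). Pulled out as a plain helper so the orientation logic is testable:

```java
// Hypothetical sketch of the StillImageActivity.java change: derive the
// target width/height from the device's full sensor resolution instead of
// the hard-coded 1024x768. 3264x2448 is the asker's reported resolution.
public class TargetSizeSketch {
    static int[] targetSize(boolean isLandScape, int sensorWidth, int sensorHeight) {
        // In landscape the longer sensor edge is the width, in portrait the height.
        int targetWidth = isLandScape ? Math.max(sensorWidth, sensorHeight)
                                      : Math.min(sensorWidth, sensorHeight);
        int targetHeight = isLandScape ? Math.min(sensorWidth, sensorHeight)
                                       : Math.max(sensorWidth, sensorHeight);
        return new int[] {targetWidth, targetHeight};
    }
}
```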
Please check the official ML Kit Quickstart. The quickstart you link to is a pretty old fork.
Related
When using "General text recognition" from the HiAI Engine, I can't make it detect and return any text. For instance, for the sample image it returns empty text but with code 200. I used an example program from the HiAI documentation, so I don't know where the problem is. So I created another app from scratch, and the results are the same.
I have figured something out, at least enough to make it work. Some of the images you are importing might be too large, which throws a code 200 "invalid format", i.e. the image height and width are too large. You need to check whether the bitmap's height is over 2560 pixels or its width is over 1440, and scale/crop it accordingly.
What I did:
Bitmap initClassifiedImg;
if (bitmap.getHeight() > 2560 && bitmap.getWidth() > 1440) {
    // both dimensions exceed the limit: scale both down
    initClassifiedImg = Bitmap.createScaledBitmap(bitmap, 1440, 2560, true);
} else if (bitmap.getHeight() > 2560) {
    // only the height exceeds the limit
    initClassifiedImg = Bitmap.createScaledBitmap(bitmap, bitmap.getWidth(), 2560, true);
} else if (bitmap.getWidth() > 1440) {
    // only the width exceeds the limit
    initClassifiedImg = Bitmap.createScaledBitmap(bitmap, 1440, bitmap.getHeight(), true);
} else {
    initClassifiedImg = Bitmap.createBitmap(bitmap);
}
Set this up to check the bitmap, and it should at the very least stop generating code 200 errors.
Do note that certain images will still fail to generate results: if the result code is 0 with no result, the engine simply isn't recognizing any text in the image.
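One caveat with the snippet above: scaling each dimension independently can distort the aspect ratio, which may itself hurt recognition. A sketch that instead computes a uniform scale to fit inside the 1440x2560 limit (limits taken from this answer); the resulting size would then be passed to Bitmap.createScaledBitmap:

```java
// Sketch: compute a scaled size that fits within the reported 1440x2560
// limit while preserving aspect ratio. Pure arithmetic, so it can be
// checked without Android; feed the result into
// Bitmap.createScaledBitmap(bitmap, size[0], size[1], true).
public class FitWithinLimit {
    static final int MAX_W = 1440;
    static final int MAX_H = 2560;

    static int[] fit(int width, int height) {
        if (width <= MAX_W && height <= MAX_H) {
            return new int[] {width, height}; // already within limits
        }
        // Uniform scale factor: the tighter of the two constraints wins.
        double scale = Math.min((double) MAX_W / width, (double) MAX_H / height);
        return new int[] {(int) Math.floor(width * scale),
                          (int) Math.floor(height * scale)};
    }
}
```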
Recognition image output example
Sample image output
No result example log
HiAI General Text Recognition Service limits the size of the input image. If the image size exceeds the specified range, error code 200 is returned.
The maximum width and height of a screenshot are 1440 px and 15210 px respectively.
For photos taken by a camera, please use images with a resolution of 720p or higher, and standard photo size ratio of 2:1 or less.
You can also integrate the Huawei ML Kit text recognition service, which has no restrictions on image size. It works on all Android phones and does not depend on HMS.
The basic capabilities of Huawei HiAI include face recognition, image recognition, natural language processing, language recognition, code detection, and so on. If the text in a picture is not recognized, it is recommended to use a picture with higher contrast.
I'm coding my first Android app, and it has a video recording feature. So I did some research to get the "stuff" up and running, and found the Google sample on GitHub:
https://github.com/googlesamples/android-Camera2Video
I took a deep dive into the code, and yeah, it's quite a lot just to record a video.
My camera feature has one hard requirement: it must record smoothly at a stable, constant 25 frames per second.
From what I've seen so far, the MediaRecorder class has some setters for this (see https://developer.android.com/reference/android/media/MediaRecorder.html):
mMediaRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
mMediaRecorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
mMediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
mMediaRecorder.setOutputFile(mNextVideoAbsolutePath);
mMediaRecorder.setVideoEncodingBitRate(10000000);
mMediaRecorder.setCaptureRate(25);
mMediaRecorder.setVideoFrameRate(25);
mMediaRecorder.setVideoSize(mVideoSize.getWidth(), mVideoSize.getHeight());
mMediaRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
mMediaRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
to set the frame rate, but they seem to completely ignore my 25 fps: the output generally ends up around 29.36 fps or 29.54 fps.
I did some research and looked into my devices' supported fps ranges: the P30 Pro does support 25 fps and the Pixel 3 XL does not. I don't even understand why there are such device differences (maybe someone can explain?).
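Before forcing 25 fps, it's worth checking what the device actually advertises via CameraCharacteristics.CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES and picking the best match. A sketch of that selection logic, with the ranges reduced to plain int pairs so it can be checked off-device (the sample ranges in the test are made up):

```java
// Sketch: given the device's supported AE target FPS ranges (as returned
// by CameraCharacteristics.CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES), pick a
// fixed range [fps, fps] if one exists, otherwise the narrowest range
// containing the desired fps. A fixed range is what gives a constant rate.
public class FpsRangePicker {
    static int[] pick(int[][] supported, int desiredFps) {
        int[] best = null;
        for (int[] r : supported) {
            if (r[0] <= desiredFps && desiredFps <= r[1]) {
                if (r[0] == r[1]) return r; // exact fixed-rate match
                if (best == null || (r[1] - r[0]) < (best[1] - best[0])) {
                    best = r; // narrower variable range is preferable
                }
            }
        }
        return best; // null if no range contains desiredFps
    }
}
```

The chosen pair would then go into `CaptureRequest.CONTROL_AE_TARGET_FPS_RANGE` via `new Range<Integer>(best[0], best[1])`.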
The Camera2 API provides a lot, including a CaptureRequest.Builder, so I did some googling and set up my capture request builder with these additional lines:
closePreviewSession();
setUpMediaRecorder();
SurfaceTexture texture = mTextureView.getSurfaceTexture();
assert texture != null;
texture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());
mPreviewBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_RECORD);
mPreviewBuilder.set(CaptureRequest.CONTROL_AE_TARGET_FPS_RANGE, new Range<Integer>(25,25));
mPreviewBuilder.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON);
mPreviewBuilder.set(CaptureRequest.CONTROL_AE_LOCK, false);
But nothing has changed... Has anybody out there managed to get a stable, constant frame rate with the Camera2 API?
I hope there is! Thank you if you have read so far!
EDIT Possible duplicates:
Camera2 MediaRecorder changes Frame Rate on Galaxy S9
How to use android camera2 api to record 60 fps video with fixed exposure time
EDIT For the sake of completeness:
Ranges from P30 Pro:
P30 Pro ranges
Ranges from Pixel 3 XL:
Pixel 3 XL ranges
I've been experimenting with ARCore for the past few months and have read almost all the documentation. Speaking in reference to the sample app, what I want to do is extract the superimposed image from the app, i.e. a frame containing the camera texture plus the bots drawn by OpenGL (like a screenshot). In Preview 2 they provided a TextureReader class, which extracts just the camera texture. I've tried a lot but haven't been able to get the superimposed image. Is there a way to do it, or is it just impossible?
Sample code specifically for the HelloAR sample to capture the image (and save it to the device) is in this answer: How to take picture with camera using ARCore
I think what you basically want is a screenshot of the OpenGL view. This question should help you: Screenshot on android OpenGL ES application
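The usual glReadPixels route returns the frame bottom-up, so the rows need flipping before building a Bitmap. The GL call itself needs a device, but the flip is plain array work; a sketch:

```java
// Sketch: glReadPixels delivers the frame bottom-up, so rows must be
// flipped before Bitmap.createBitmap(). The flip is plain array shuffling,
// shown here on RGBA pixels packed one int per pixel.
public class RowFlip {
    static int[] flipVertically(int[] pixels, int width, int height) {
        int[] out = new int[pixels.length];
        for (int y = 0; y < height; y++) {
            // Row y of the source becomes row (height - 1 - y) of the output.
            System.arraycopy(pixels, y * width, out, (height - 1 - y) * width, width);
        }
        return out;
    }
}
```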
I am making an Android app which uses the Google face API to detect faces in all the images in the gallery. Processing them all takes a long time, so the app freezes for a long while. Any workaround?
I tried reducing the size of the images before processing, but that gives faulty results.
If you look at the documentation for FaceDetector.Builder you will see that you can set some properties that will increase the speed.
E.g.:
public FaceDetector.Builder setProminentFaceOnly (boolean prominentFaceOnly)
Disable the image tracking :
FaceDetector detector = new FaceDetector.Builder(context)
.setTrackingEnabled(false)
.build();
It's true by default, and it may slow down detection if you don't need this feature.
2 minutes for 715 images is a really good time.
Steps that can be taken:
enable fast mode in FaceDetector
set setTrackingEnabled to false if you don't want to track
set minimum face size to an appropriate size according to your dataset
Load the bitmaps using the Universal Image Loader or Glide library for Android; I used the UIL library.
640x480 is an optimal size for face detection and classification; scaling down to it takes less time and gives almost the same result.
Set setLandmarkType and setClassificationType according to your needs, and disable them if not required.
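Beyond the detector settings, the freeze itself usually means the detection runs on the UI thread. A generic sketch of pushing the per-image work onto a background pool (detectFaces here is a hypothetical stand-in for the real FaceDetector call):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: run detection off the main thread so the UI doesn't freeze.
// detectFaces() is a hypothetical stand-in for the real per-image
// FaceDetector call; the threading pattern is the point.
public class BackgroundDetection {
    static List<String> processAll(List<String> imagePaths,
                                   java.util.function.Function<String, String> detectFaces) {
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());
        try {
            List<Future<String>> futures = new ArrayList<>();
            for (String path : imagePaths) {
                // Each image is detected on a worker thread.
                futures.add(pool.submit(() -> detectFaces.apply(path)));
            }
            List<String> results = new ArrayList<>();
            for (Future<String> f : futures) {
                results.add(f.get()); // gather in submission order
            }
            return results;
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }
}
```

On Android the results would be posted back to the UI thread (e.g. via a Handler) as they arrive, so the gallery stays responsive while detection proceeds.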
I'm trying to do an image capture on a high end Nokia phone (N95). The phone's internal camera is very good (4 megapixels) but in j2me I only seem to be able to get a maximum of 1360x1020 image out. I drew largely from this example http://developers.sun.com/mobility/midp/articles/picture/
What I did was start with 640x480 and increase the width and height by 80 and 60, respectively until it failed. The line of code is:
jpg = mVideoControl.getSnapshot("encoding=jpeg&quality=100&width=" + width + "&height=" + height);
So the two issues are:
1. The phone throws an exception when getting an image larger than 1360x1020.
2. The higher resolution images appear to be just smoothed versions of the smaller ones. E.g. When I take a 640x480 image and increase it in photoshop I can't tell the difference between this and one that's supposedly 1360x1020.
Is this a limitation of j2me on the phone? If so does anyone know of a way to get a higher resolution from within a j2me application and/or how to access the native camera from within another application?
This explanation on Nokia forum may help you.
It says that "The maximum image size that can be captured depends on selected image format, encoding options and free heap memory available."
and
"It is thus strongly adviced that at least larger images (larger than 1mpix) are captured as JPEG images and in a common image size (e.g. 1600x1200 for 2mpix an so on). Supported common image sizes are dependent on product and platform version."
So I suggest taking a few tries:
1. with 1600x1200, 1024x768, and whatever image resolutions your N95 guide mentions
2. with BMP and PNG as well.
Anyway, based on my earlier experience (which could be outdated), J2ME implementations are full of bugs, so there may not be a working solution to your problem.
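If you want to automate the trial-and-error, one sketch is to try sizes from largest to smallest and keep the first one that doesn't throw. The Capture interface below is a stand-in for VideoControl.getSnapshot(), which isn't available off-device:

```java
// Sketch: try snapshot sizes from largest to smallest and keep the first
// one the device accepts. Capture stands in for VideoControl.getSnapshot(),
// which throws MediaException for unsupported sizes.
public class SnapshotFallback {
    interface Capture { byte[] snap(int w, int h) throws Exception; }

    static int[] firstSupported(int[][] sizesLargestFirst, Capture capture) {
        for (int[] s : sizesLargestFirst) {
            try {
                capture.snap(s[0], s[1]);
                return s; // device accepted this size
            } catch (Exception e) {
                // unsupported size: fall through to the next, smaller one
            }
        }
        return null; // nothing worked
    }
}
```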
Your camera's native resolution is 2592x1944. Try capturing at that size to see how it goes.
This article:
http://developers.sun.com/mobility/midp/articles/picture/index.html
Mentions the use of:
byte[] raw = mVideoControl.getSnapshot(null);
Image image = Image.createImage(raw, 0, raw.length);
The use of raw seems interesting, as a way to get the native resolution.
The 'quality' of a JPEG (as interpreted by the code) has nothing to do with the resolution; rather, it controls how heavily the image is compressed. A 640x480 image at quality 100 will look noticeably better than a 640x480 image at quality 50, but it will use more storage space.
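This quality-versus-size trade-off is easy to demonstrate off-device with desktop Java's javax.imageio (MIDP itself only exposes quality through the locator string): the same 640x480 image encoded at quality 1.0 and 0.3 keeps identical pixel dimensions but produces very different byte counts.

```java
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.imageio.IIOImage;
import javax.imageio.ImageIO;
import javax.imageio.ImageWriteParam;
import javax.imageio.ImageWriter;
import javax.imageio.stream.MemoryCacheImageOutputStream;

// Sketch: JPEG "quality" controls compression, not resolution. Encoding the
// same noisy image at two quality settings keeps the pixel dimensions but
// changes the encoded size.
public class JpegQualityDemo {
    static int encodedSize(BufferedImage img, float quality) {
        try {
            ImageWriter writer = ImageIO.getImageWritersByFormatName("jpeg").next();
            ImageWriteParam param = writer.getDefaultWriteParam();
            param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
            param.setCompressionQuality(quality); // 0.0 (smallest) .. 1.0 (best)
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            MemoryCacheImageOutputStream out = new MemoryCacheImageOutputStream(bytes);
            writer.setOutput(out);
            writer.write(null, new IIOImage(img, null, null), param);
            writer.dispose();
            out.close(); // flush cached bytes into the buffer
            return bytes.size();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    static BufferedImage noisyImage(int w, int h) {
        // Random noise compresses poorly, making the size gap obvious.
        BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        java.util.Random rnd = new java.util.Random(42);
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                img.setRGB(x, y, rnd.nextInt(0x1000000));
        return img;
    }
}
```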
Try this instead:
jpg = mVideoControl.getSnapshot("encoding=jpeg&quality=100&width=2048&height=1536");