I'm coding my first Android app, which has a video recording feature. I did some research to get everything up and running and ended up studying the Google sample on GitHub:
https://github.com/googlesamples/android-Camera2Video
So I took a deep dive into the code, and wow, that is quite a lot just to record a video.
My camera feature has one hard requirement: it must record smoothly at a stable, constant 25 frames per second.
From what I've seen so far, the MediaRecorder class provides some setters (see https://developer.android.com/reference/android/media/MediaRecorder.html):
mMediaRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
mMediaRecorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
mMediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
mMediaRecorder.setOutputFile(mNextVideoAbsolutePath);
mMediaRecorder.setVideoEncodingBitRate(10000000);
mMediaRecorder.setCaptureRate(25);
mMediaRecorder.setVideoFrameRate(25);
mMediaRecorder.setVideoSize(mVideoSize.getWidth(), mVideoSize.getHeight());
mMediaRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
mMediaRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
to set the frame rate, but those setters apparently don't care about my 25 fps: the output generally comes out at something like 29.36 fps or 29.54 fps.
I did some research and looked into my devices' supported fps ranges: the P30 Pro does support 25 fps, while the Pixel 3 XL does not. I don't even understand why there are such device differences (maybe someone can explain?).
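For reference, this is roughly how the supported ranges can be queried (a minimal sketch; logSupportedFpsRanges is my own helper name):

import android.content.Context;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCharacteristics;
import android.hardware.camera2.CameraManager;
import android.util.Log;
import android.util.Range;
import java.util.Arrays;

// Logs the AE target fps ranges advertised by each camera on the device.
void logSupportedFpsRanges(Context context) throws CameraAccessException {
    CameraManager manager = (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
    for (String cameraId : manager.getCameraIdList()) {
        CameraCharacteristics chars = manager.getCameraCharacteristics(cameraId);
        Range<Integer>[] fpsRanges =
                chars.get(CameraCharacteristics.CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES);
        Log.d("FpsCheck", "Camera " + cameraId + ": " + Arrays.toString(fpsRanges));
        // A locked 25 fps is only realistic if a (25, 25) range shows up here.
    }
}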
The Camera2 API provides a lot, including a CaptureRequest.Builder, so after some googling I set up my capture request builder with these additional lines:
closePreviewSession();
setUpMediaRecorder();
SurfaceTexture texture = mTextureView.getSurfaceTexture();
assert texture != null;
texture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());
mPreviewBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_RECORD);
mPreviewBuilder.set(CaptureRequest.CONTROL_AE_TARGET_FPS_RANGE, new Range<Integer>(25,25));
mPreviewBuilder.set(
CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON);
mPreviewBuilder.set(CaptureRequest.CONTROL_AE_LOCK, false);
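For completeness, the builder only takes effect once it's submitted to the capture session; in the sample that happens like this (mPreviewSession and mBackgroundHandler are the sample's own fields):

mPreviewSession.setRepeatingRequest(mPreviewBuilder.build(), null, mBackgroundHandler);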
But nothing has changed... Has anybody out there managed to get a stable, constant frame rate with the Camera2 API?
I hope there is! Thank you if you've read this far!
EDIT Possible duplicates:
Camera2 MediaRecorder changes Frame Rate on Galaxy S9
How to use android camera2 api to record 60 fps video with fixed exposure time
EDIT For the sake of completeness:
Ranges from P30 Pro:
[screenshot: supported fps ranges reported by the P30 Pro]
Ranges from Pixel 3 XL:
[screenshot: supported fps ranges reported by the Pixel 3 XL]
I am using a quickstart Firebase ML Kit implementation from this link:
https://github.com/ankitjamuar/android-firebase-mlkit
But when I get an image from the ML Kit camera, its resolution is 768x1024.
When I take a picture with my device's native camera, however, I get an 8 MP image, i.e. 3264x2448. I want to use that higher-resolution image, which my device can capture at full capacity, in ML Kit. So how can I increase the ML Kit camera picture quality?
When I extract the face from the ML Kit camera picture, it also has low quality, and I lose accuracy. One more thing: I am using the front camera throughout.
Please help, I'm stuck. If possible, I'd like to make the face detection accuracy more reliable.
Check StillImageActivity.java, lines 352 and 353:
targetWidth = isLandScape ? 1024 : 768;
targetHeight = isLandScape ? 768 : 1024;
Change the resolution as per your requirement.
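For example, a sketch of the change if you want the camera's full 8 MP size, assuming your device actually reports 3264x2448 for still captures:

// hypothetical values for an 8 MP (3264x2448) front-camera capture
targetWidth = isLandScape ? 3264 : 2448;
targetHeight = isLandScape ? 2448 : 3264;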
Let me know if this solves your issue.
Please check the official ML Kit Quickstart. The quickstart you link to is a pretty old fork.
I am new to computer vision, but I am trying to code an Android app that does the following:
Get the live camera preview and try to detect one logo in it (I have the logo in my resources), in real time. Draw a rect around the logo if found. If there is no match, don't draw the rectangle.
I already tried a couple of things including template-matching and feature detection using ORB.
Why that didn't work:
Template-matching:
Issues with scaling and rotation. I tried a multi-scale variant, but (a) the performance was really bad and (b) the rectangle was always drawn, since the search always returns a best match. There was no way to confirm in the code whether the logo had actually been found or not.
ORB feature detection:
Also pretty slow (5-6 fps), but it worked OK-ish. The other problem was, again, that I could never be sure whether the logo was in the picture or not: ORB found random matches even when the logo was absent.
Like I said, I am very new to this. I would appreciate help on the best way to achieve the following:
Confirm whether a picture A (around 200x200 pixels) is in the ROI of the camera picture (around 600x600 pixels).
This shouldn't take longer than 50 ms per frame. I don't know whether that's even possible, though. If a correct approach takes a bit longer than that, I would just do the work in a separate thread and only analyze every fifth camera frame or so.
Would appreciate any hints or code examples on how to achieve that. Thank you!
For logo detection, I would highly recommend an OpenCV Haar cascade classifier. It is easy to generate training samples from a collection of images of the logo, or from one logo image with many distortions.
If you can apply a few constraints, such as the minimum and maximum size of the logo and the regions of the image where it can appear, you can run the detector at a better speed than the one you mention for ORB.
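A minimal sketch of running such a detector with OpenCV's Java bindings (logo_cascade.xml is a placeholder for a cascade you train yourself, and the size limits are values to tune):

import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.core.Size;
import org.opencv.objdetect.CascadeClassifier;

// grayFrame: the current camera frame converted to a grayscale Mat.
// detector: e.g. new CascadeClassifier("logo_cascade.xml")
Rect[] detectLogo(Mat grayFrame, CascadeClassifier detector) {
    MatOfRect hits = new MatOfRect();
    detector.detectMultiScale(grayFrame, hits, 1.1, 3, 0,
            new Size(50, 50),    // minimum expected logo size
            new Size(300, 300)); // maximum expected logo size
    return hits.toArray();       // an empty array means "logo not found"
}

The empty-result case directly answers the "how do I confirm a match" problem that template matching and ORB left open.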
I have a game developed natively for Android, and now my users also want an iOS version. I figured libGDX would be the best choice because it lets me reuse the game's Java code, and I already have some experience with it.
In my game I have different image sizes for different device densities (in drawable-hdpi, drawable-xhdpi and so on).
So, my question is: how can I achieve the same, but using LibGDX (also taking care of the new densities required by iOS device resolutions, if any change is required)?
Thank you.
Yes, you can achieve the same thing, but it won't be automatic as on Android unless you write some native code as well. I have found that the best way to manage it is simply to do it yourself:
1) When your app starts, get the screen size and density using Gdx.graphics.getWidth(), Gdx.graphics.getHeight() and Gdx.graphics.getDensity().
2) Depending on the size and density, change the path to the folder your assets should be loaded from.
3) Make sure all asset-loading code uses the path chosen in the previous step, so you get the correct assets for that display size/density (see the sketch below).
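A minimal sketch of those three steps (the folder names and density thresholds are my own convention, not a libGDX API):

import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.Texture;

float density = Gdx.graphics.getDensity(); // 1.0 ~ mdpi, 1.5 ~ hdpi, 2.0 ~ xhdpi
String assetPath;
if (density >= 2.0f) {
    assetPath = "xhdpi/";
} else if (density >= 1.5f) {
    assetPath = "hdpi/";
} else {
    assetPath = "mdpi/";
}
// Every later load goes through the pre-set path:
Texture logo = new Texture(Gdx.files.internal(assetPath + "logo.png"));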
Most of the time you can use the largest image and let Viewports handle resolution and aspect ratio for you. The larger images will be scaled down, which of course costs some detail.
A viewport automatically scales the part of your game world you want to show onto the screen that displays it. For example, FitViewport(100, 100) creates a viewport that shows 100 x 100 "game units". If you play this on a 1920 x 1080 device, it scales that 100 x 100 game world to a 1080 x 1080 area and leaves an empty bar of 840 x 1080.
The size of the game world has nothing to do with pixels. You could create an enemy with a size of 0.5 x 0.5 world units and give it a 256 x 256 pixel texture. Your viewport scales this to the correct size for you.
Unless you want a pixel-perfect game, this should be good enough. On some bigger screens with low resolutions you might get minor artefacts due to filtering; setting the filtering on your textures with texture.setFilter(TextureFilter.Nearest, TextureFilter.Linear) may fix some of them.
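A short sketch of this approach, using the 100 x 100 world from the example above (texture stands for any Texture you have loaded):

import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.utils.viewport.FitViewport;

OrthographicCamera camera = new OrthographicCamera();
FitViewport viewport = new FitViewport(100, 100, camera); // 100 x 100 game units

// in resize(int width, int height):
viewport.update(width, height, true); // true re-centers the camera

// the filtering tweak mentioned above, applied per texture:
texture.setFilter(Texture.TextureFilter.Nearest, Texture.TextureFilter.Linear);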
All I ever think about when designing graphics is that a pixel in my art should represent roughly, or at least, one screen pixel. Usually I just draw pixel-perfect for HD and it looks fine on an 800 x 480 screen. If you want to squeeze out a bit more performance you could use mipmaps; I think TexturePacker generates them automatically with the right filter settings, but I have no experience with them.
This can be done using
com.badlogic.gdx.assets.loaders.resolvers.ResolutionFileResolver.
Here is the javadoc for it.
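A minimal sketch following the javadoc's example (the suffix folder names like ".480800" are the javadoc's convention):

import com.badlogic.gdx.assets.AssetManager;
import com.badlogic.gdx.assets.loaders.resolvers.InternalFileHandleResolver;
import com.badlogic.gdx.assets.loaders.resolvers.ResolutionFileResolver;
import com.badlogic.gdx.assets.loaders.resolvers.ResolutionFileResolver.Resolution;
import com.badlogic.gdx.graphics.Texture;

Resolution[] resolutions = {
        new Resolution(480, 800, ".480800"),
        new Resolution(1280, 800, ".1280800")
};
ResolutionFileResolver resolver =
        new ResolutionFileResolver(new InternalFileHandleResolver(), resolutions);
AssetManager manager = new AssetManager(resolver);
manager.load("logo.png", Texture.class); // resolved against the best-matching folder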
I’m very new to Android programming and the one thing that really has me confused relates to screen density and screen dimensions. I’ve read plenty of replies to other questions on here and I’ve read the Google docs on how to program for multiple screen sizes. None have really helped address either the problem or my own general ignorance. I hope it is okay to ask this here so somebody might finally explain it simply enough so that I’ll be able to wrap my brain around this problem.
First of all, I've been working with SurfaceViews onto which I'm drawing bitmaps. I've been primarily programming for the Samsung Note 10.1 (2014) edition. The screen is 2560x1600 and returns a screen density of 2.0 when I query the display. My approach has been to make graphics that work at those dimensions, but within the code I've used the oft-quoted formula to convert floating-point dp coordinates into pixels, ready for the moment I move to other devices.
px = (dp * density) + 0.5f
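As far as I can tell, the platform's own TypedValue helper performs the same conversion, so the manual formula could also be written as:

import android.util.TypedValue;

// inside an Activity or View, where getResources() is available:
float px = TypedValue.applyDimension(TypedValue.COMPLEX_UNIT_DIP,
        100f, getResources().getDisplayMetrics()); // 100 dp -> pixels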
I’ve now been trying to get the app working on a Samsung S2. The screen is 480 by 800.On the phone, the app is (I assume correctly) loading graphics from the HDPI folder because the pixel density is 1.5.
My first problem was that the graphics in the HDPI were originally far too big. I’d used the Resize program to quickly resize my original XHDPI folder. Perhaps I simply didn’t select the correct source setting but the resulting graphics where far bigger than the actual 480x800 graphic I finally found filled the screen.
However, that was only a symptom of my larger confusion.
When developing an app using bitmaps, is there some magic formula I've missed that translates dp values to pixels, or should I be doing calculations based on the actual screen dimensions? By the formula, 100dp is approximately 150px on the (1.5 density) 800px-wide screen, but 200px on the bigger (2.0 density) 2560px display. That's 18% of the width of the S2's screen but only 8% of the wider screen on the Note 10.1.
I naively assumed that a dp value would translate across all devices and simply put things in the right place, or do I have that wrong? Just writing this up makes me even more convinced that I've misunderstood what dp values are. I was confused by the suggestion of working to a theoretical Google device with a pixel density of 1 and then adapting everything based on other pixel densities or screen sizes.
Simply being told, as I keep hearing, to work in dp units so everything is uniform hasn't quite worked for me, so I'm now seeking wiser counsel. In other words: please help!
Thanks.
I have a problem in Android development that is wearing me out: screen sizes and how to deal with them, especially with images. For example, I want to use a background image for my activity that I created in Photoshop, and it contains the word "HELLO". But when I put it in the drawable-xhdpi folder, it looks blurry and not sharp! My phone is a Nexus 4, and according to the Google documentation I created the background image at 640 x 480.
When I create the background image at 960 x 720 it looks better, but still not perfect, and in that case the image file is very large!
What is the standard way to handle this? Please help me solve this problem once and for all. I've read the Google documentation, but it doesn't solve my problem:
http://developer.android.com/guide/practices/screens_support.html
You should usually avoid creating images for specific screen sizes to use as backgrounds, because there are thousands of different devices and you would have to create dozens of such images.
The first thing you need to be aware of is screen density.
Generally you create 3 to 5 images without even looking at screen size: low (120 dpi), medium (160 dpi), high (240 dpi), extra high (320 dpi) and 2x extra high (480 dpi). These go into drawable-Xdpi folders, where X is one of l, m, h, xh, xxh.
Next, when you want bigger images on bigger screens (bigger phones, small and big tablets), you may want to put images into folders like drawable-sw600dp-Xdpi. This is not the case for your phone.
The Nexus 4 is an xhdpi 640x384 dp device, but you should not treat it differently from a Samsung Galaxy S2 (hdpi, 533x320 dp).
Create an image of a smaller size for both phones and center it horizontally, e.g. 320x100 px for mdpi, 480x150 px for hdpi and 640x200 px for xhdpi (your phone).
The screen resolution of the Nexus 4 is 1280x768 (http://www.google.com/nexus/4/specs/); resize the image to that resolution. Bear in mind that some images don't cope well with the resolution change and end up looking disproportionate.
For reference, a resolution calculator:
http://members.ping.de/~sven/dpi.html
This is the Android fragmentation problem, and you simply cannot deal with it perfectly, since there are several hundred different devices. As a colleague wrote above, the Nexus 4 has a resolution of 1280 x 768, so an image of 960 x 720 is certainly a good choice. I'm even surprised that Google suggests 640 x 480 for xhdpi; that's definitely too little.
So, as I said, you cannot make perfect-looking graphics for all existing devices. You should pick the most popular devices from every screen category (xhdpi, mdpi, ldpi, etc.) to cover the most important share of the market.
With 1600+ Android models, even after they are grouped into a few screen sizes and a few DPIs, it is very difficult to manage layouts. I suggest you concentrate on designing layouts with respect to screen size, and then create your views as resizable views to neutralize density effects.
Once you have created your layouts, resize the views. You can create a custom View and resize it in its onMeasure(), as in the sketch below.
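A minimal sketch of that idea (HalfScreenView is a hypothetical example that always takes half the screen width, regardless of density):

import android.content.Context;
import android.util.AttributeSet;
import android.view.View;

public class HalfScreenView extends View {
    public HalfScreenView(Context context, AttributeSet attrs) {
        super(context, attrs);
    }

    @Override
    protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
        int screenWidth = getResources().getDisplayMetrics().widthPixels;
        int size = screenWidth / 2; // half the screen width, whatever the density
        setMeasuredDimension(size, size); // a square view, sized by screen, not dp
    }
}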