How to get high quality image from arcore - java

**I need a high quality image with ARCore. Currently I can extract the image, but the AR model does not show. I have tried to get the image from the draw frame.**

First, get the supported camera configs and select one that fits your needs, for example the one with the highest width × height:
val bestConfig = session.getSupportedCameraConfigs(CameraConfigFilter(session)).maxByOrNull {
    it.imageSize.width * it.imageSize.height
}
Then reconfigure the ARCore session to use this camera configuration:
session.cameraConfig = bestConfig
Docs here: https://developers.google.com/ar/reference/java/com/google/ar/core/Session#setCameraConfig-cameraConfig
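After reconfiguring, the CPU-side image can be grabbed once per frame. A minimal Java sketch, assuming a Frame obtained in your render loop; acquireCameraImage() is the documented ARCore call and returns the image at the configured CPU image size:
try (Image image = frame.acquireCameraImage()) {
    // Format is ImageFormat.YUV_420_888; dimensions follow the selected CameraConfig
    int width = image.getWidth();
    int height = image.getHeight();
    // Convert the Y/U/V planes to NV21/JPEG here as needed
} catch (NotYetAvailableException e) {
    // The image may not be ready for the first few frames; skip and retry next frame
}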

Related

Kivy Android Camera API 2 - Camera Rotation

I am using the camera feature defined in the following repo, but with some changes. I just want to rotate the SurfaceTexture defined in camera2.py so that the camera also works in portrait mode.
I tried the PushMatrix/PopMatrix solution, but it shadows the buttons in the camera view. Hence, I want this solved on the Java side rather than the Kivy side.
This is the link to the repo:
https://github.com/inclement/colour-blind-camera
This is where the main problem lies in:
https://github.com/inclement/colour-blind-camera/blob/master/camera2/camera2.py
I am not adding the whole snippet here since it is too long, but the problem lies basically somewhere around the following snippet (line 263):
self.preview_resolution = resolution
self._prepare_preview_fbo(resolution)
self.preview_texture = Texture(
    width=resolution[0], height=resolution[1], target=GL_TEXTURE_EXTERNAL_OES, colorfmt="rgba")
logger.info("Texture id is {}".format(self.preview_texture.id))
self.java_preview_surface_texture = SurfaceTexture(int(self.preview_texture.id))
self.java_preview_surface_texture.setDefaultBufferSize(*resolution)
self.java_preview_surface = Surface(self.java_preview_surface_texture)
Any help appreciated a lot!
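Not a full answer, but on the Java side the usual starting point is to compute how far the sensor output must be rotated for the current display orientation. A hedged Java sketch using the documented SENSOR_ORIENTATION characteristic (the CameraManager, camera id and Activity are assumed to be available from the wrapper):
int getPreviewRotationDegrees(CameraManager manager, String cameraId, Activity activity)
        throws CameraAccessException {
    CameraCharacteristics chars = manager.getCameraCharacteristics(cameraId);
    int sensorOrientation = chars.get(CameraCharacteristics.SENSOR_ORIENTATION); // 0/90/180/270
    // Surface.ROTATION_0..ROTATION_270 map to 0..270 degrees
    int displayDegrees = activity.getWindowManager().getDefaultDisplay().getRotation() * 90;
    // Back-facing camera; a front-facing camera needs the sign flipped
    return (sensorOrientation - displayDegrees + 360) % 360;
}
Applying the resulting angle when the external OES texture is drawn (i.e. rotating the texture coordinates) avoids touching Kivy's canvas matrix, so the buttons should stay unaffected.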

Video Transcode with Android MediaCodec

Struggling with Android MediaCodec, I'm looking for a straightforward way to change the resolution of a video file on Android.
For now I'm trying a single-threaded transcoding method that does all the work step by step so I can understand it well. At a high level it looks as follows:
public void TranscodeVideo()
{
    // Extract
    MediaTrack[] tracks = ExtractTracks(InputPath);
    // Decode
    MediaTrack videoTrack = tracks.Where(o => o.IsVideo).FirstOrDefault();
    MediaTrack rawVideoTrack = DecodeTrack(videoTrack);
    // Edit?
    // ResizeVideoTrack(rawVideoTrack);
    // Encode
    MediaFormat newFormat = MediaHelper.CreateVideoOutputFormat(videoTrack.Format);
    MediaTrack encodedVideoTrack = EncodeTrack(rawVideoTrack, newFormat);
    // Mux
    encodedVideoTrack.Index = videoTrack.Index;
    tracks[Array.IndexOf(tracks, videoTrack)] = encodedVideoTrack;
    MuxeTracks(OutputPath, tracks);
}
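For context, the target resolution of the transcode is pinned down by the encoder's MediaFormat. MediaHelper.CreateVideoOutputFormat is the asker's own helper; a hedged Java sketch of what such a helper typically builds (all values illustrative):
MediaFormat format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, 640, 360);
format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Flexible);
format.setInteger(MediaFormat.KEY_BIT_RATE, 2_000_000); // target bitrate in bps
format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1); // one keyframe per second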
Extraction works fine, returning a track with audio only and a track with video only. Muxing works fine, combining the two previous tracks again. Decoding works, but I don't know how to verify it; the raw frames on the track weigh much more than the originals, so I assume it's right.
Problem
The encoder's input buffer size is smaller than the raw frame size (it is also tied to the configured encoding format), so I assume that I need to resize the frames in some way, but I can't find anything useful. Am I correct about this? Am I missing something? What is the way to go about resizing raw video frames? Any help? :S
PD
You may notice that I'm using C# (Xamarin.Android) for more fun, but the underlying API is of course Java.
I'm using ByteBuffers, not Surfaces, because it seems easier. Using Surfaces will be my next step; any advice is welcome.
I know that the single-threaded process is highly inefficient, but it keeps things simple. Connecting the decoder's output buffer to the encoder's input buffer will be another later step.
I dug through the PhilLab, Grafika and Bigflake examples, but nothing seemed very useful for me.
I'm avoiding ffmpeg on Android.
Thank you everyone for your time.
Going off of the comment above, to implement libVLC:
Add this to your project root's build.gradle:
allprojects {
    repositories {
        ...
        maven {
            url 'https://jitpack.io'
        }
    }
}
Add this to your app module's build.gradle:
dependencies {
    ...
    implementation 'com.github.masterwok:libvlc-android-sdk:3.0.13'
}
Here is an example of loading an RTSP stream in an activity:
@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.camera_stream_layout);
    // Get URL
    this.rtspUrl = getIntent().getExtras().getString(RTSP_URL);
    Log.d(TAG, "Playing back " + rtspUrl);
    this.mSurface = findViewById(R.id.camera_surface);
    this.holder = this.mSurface.getHolder();
    ArrayList<String> options = new ArrayList<>();
    options.add("-vvv"); // verbosity
    // Add VLC transcoder options here
    this.libvlc = new LibVLC(getApplicationContext(), options);
    this.holder.setKeepScreenOn(true);
    //this.holder.setFixedSize();
    // Create media player
    this.mMediaPlayer = new MediaPlayer(this.libvlc);
    this.mMediaPlayer.setEventListener(this.mPlayerListener);
    // Set up video output
    final IVLCVout vout = this.mMediaPlayer.getVLCVout();
    vout.setVideoView(this.mSurface);
    // Set size of video to fit app screen
    DisplayMetrics displayMetrics = new DisplayMetrics();
    getWindowManager().getDefaultDisplay().getMetrics(displayMetrics);
    ViewGroup.LayoutParams videoParams = this.mSurface.getLayoutParams();
    videoParams.width = displayMetrics.widthPixels;
    videoParams.height = displayMetrics.heightPixels;
    vout.setWindowSize(videoParams.width, videoParams.height);
    vout.addCallback(this);
    vout.attachViews();
    final Media m = new Media(this.libvlc, Uri.parse(this.rtspUrl));
    // Use this to add transcoder options: m.addOption("vlc transcode options here");
    this.mMediaPlayer.setMedia(m);
    this.mMediaPlayer.play();
}
Here is the documentation of the VLC transcoder options:
https://wiki.videolan.org/Documentation:Streaming_HowTo_New/
You are right: the input buffer size of the encoder is smaller because it expects input of the specified dimensions. The encoder only encodes, like the name suggests.
I read your question as more of a "why" than a "how" question, so I'll only point you to where you'll find the "why"s.
The decoded frame is a YUV image (I suggest quickly skimming the Wikipedia article), usually NV21 if I'm not mistaken, but it might differ from device to device. To resize it, I suggest you use a library, since every plane of the image needs to be scaled down differently and a library usually takes care of filtering. Check out libYUV. If you are interested in the actual resizing algorithms, check out this, and for implementations, this.
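To make the per-plane point concrete, here is a minimal nearest-neighbor sketch in Java (a hypothetical helper, not libYUV's API; libYUV's I420Scale additionally does proper filtering):
// Scales one plane independently; for I420, call with the full size for Y
// and half the width/height for U and V.
static byte[] scalePlane(byte[] src, int srcW, int srcH, int dstW, int dstH) {
    byte[] dst = new byte[dstW * dstH];
    for (int y = 0; y < dstH; y++) {
        int srcY = y * srcH / dstH; // nearest source row
        for (int x = 0; x < dstW; x++) {
            dst[y * dstW + x] = src[srcY * srcW + x * srcW / dstW];
        }
    }
    return dst;
}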
If you are not required to handle the decoding and encoding with ByteBuffers, I suggest using a Surface, as you already mentioned. It has multiple benefits over decoding to ByteBuffers:
It is more memory-efficient, as there is no copy between the native buffer and an app-allocated buffer; the native buffers are simply swapped from and to the Surface.
If you plan to render the frame, be it for resizing or displaying, it can be done by the device's graphics processor. For how to do that, check out Bigflake's DecodeEditEncode test; see the sketch below.
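A hedged sketch of that Surface wiring (Java; the MIME type and formats are illustrative). Note that the direct decoder-to-encoder connection below only works when no resizing is needed; to resize, Bigflake's sample renders through an intermediate GL texture instead:
MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
encoder.configure(encoderFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
Surface inputSurface = encoder.createInputSurface(); // after configure(), before start()
MediaCodec decoder = MediaCodec.createDecoderByType("video/avc");
decoder.configure(decoderFormat, inputSurface, null, 0); // decoder renders into the encoder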
I hope this answers some of your questions.

Android copy built-in video recording quality and framerate using camera2

The image quality and framerate I get when using the camera2 API do not match what I get when I manually record a video to a file using the camera app.
I am trying to do real-time image processing using OpenCV on Android. I manually recorded a video using the built-in camera application and everything worked perfectly: the image quality was good and the framerate was a stable 30 FPS.
My min SDK version is 22, so I am using the camera2 API's repeating requests. I have set it up together with an ImageReader and the YUV_420_888 format. I have tried both the PREVIEW and the RECORD capture request templates, and tried manually setting 18 capture request parameters in the builder (e.g. disabling auto-white-balance, setting the color correction mode to fast), but the FPS was still around 8-9 and the image quality was poor as well. Another phone yielded the same results, despite its max FPS being 16.67 (instead of 30).
The culprit is not my image processing (which happens on another thread, except for reading the image's buffer): I checked the FPS when not doing anything with the frame (not even displaying the image), and it was still around 8-9.
You can see the relevant code for that here:
//constructor:
HandlerThread thread = new HandlerThread("MyApp:CameraCallbacks", Process.THREAD_PRIORITY_MORE_FAVORABLE);
thread.start();
captureCallbackHandler = new Handler(thread.getLooper());
//some UI event:
cameraManager.openCamera(cameraId, new CameraStateCallback(), null);
//CameraStateCallback#onOpened:
//size is 1280x720, same as the manually captured video's
imageReader = ImageReader.newInstance(size.getWidth(), size.getHeight(), ImageFormat.YUV_420_888, 1);
imageReader.setOnImageAvailableListener(new ImageAvailableListener(), captureCallbackHandler);
camera.createCaptureSession(Collections.singletonList(imageReader.getSurface()), new CaptureStateCallback(), captureCallbackHandler);
//CaptureStateCallback#onConfigured:
CaptureRequest.Builder builder = activeCamera.createCaptureRequest(CameraDevice.TEMPLATE_RECORD);
builder.addTarget(imageReader.getSurface());
//setting the FPS range has no effect: this phone only has one option
session.setRepeatingRequest(builder.build(), null, captureCallbackHandler);
//ImageAvailableListener#onImageAvailable:
long current = System.nanoTime();
deltaTime += (current - last - deltaTime) * 0.1;
Log.d("MyApp", "onImageAvailable FPS: " + (1000000000 / deltaTime));
//prints around 8.7
last = current;
try (Image image = reader.acquireLatestImage()) { }
On a Samsung Galaxy J3 (2016), calling Camera.Parameters#setRecordingHint(true) (while using the deprecated camera API) achieves exactly what I wanted: the video quality and framerate become the same as the built-in video recorder's. Unfortunately, it also means that I was unable to modify the resolution, and setting that hint did not achieve the same effect on a Doogee X5 MAX.
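For reference, the deprecated-API workaround mentioned above looks roughly like this (android.hardware.Camera; a sketch):
Camera camera = Camera.open();
Camera.Parameters params = camera.getParameters();
params.setRecordingHint(true); // hint the HAL to use its video-recording pipeline
camera.setParameters(params);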

is it right to use batcher.drawSprite to draw all images in game?

I'm developing a game and use the batcher.drawSprite method to draw all images in the game (background and all characters).
In assets.java:
charAtlas = new Texture(game, "charAtlas.png");
charEnemy = new TextureRegion(charAtlas, 0,0,250,300);
In worldGame.java:
batcher.beginBatch(Assets.charAtlas); // set atlas
batcher.drawSprite(130, 628, 120,140, Assets.charEnemy);
//assets.charEnemy
Is it right to use this method in all conditions?
I have 3 atlases in the game; I even use a 2048x2048 atlas size so I can fit all my images in there.
However, the images look blurry in-game (tested on a Galaxy Note, Tab, and Galaxy Young). Looking at the code above, the enemy character even takes up 250x300 of my atlas; it makes no sense that it looks blurry when I only draw it at 120x140.
Note: I use no layout (I mean no layout file in the res folder). I use drawSprite to draw all images (characters, menu, buttons, etc.).
Update:
I tried using character image files unzipped from another game; when I run the app, they also look blurry and jagged, while in the original game they are smooth and sharp. Why is that?
Check your code: you might be using a scaled bitmap. See if you can set it like this:
Options options = new BitmapFactory.Options();
options.inScaled = false; // keep the resource at its original pixel size
Bitmap source = BitmapFactory.decodeResource(a.getResources(), resId, options); // resId is the drawable's resource id

Is it possible to get a higher resolution from getDrawingCache?

I am using SubsamplingScaleImageView by Dave Morrissey (https://github.com/davemorrissey/subsampling-scale-image-view) to allow users to crop and pan a photo with gestures.
I modified the library to add a tint and a logo to the photo. Now I need to upload the photo to the server. In order to do that I need to somehow extract the photo from the SubsamplingScaleImageView.
I added the following method to the SubsamplingScaleImageView class:
/**
 * Capture a photo of the image view
 */
public Bitmap getOutput() {
    buildDrawingCache();
    Bitmap b1 = getDrawingCache();
    Bitmap b = b1.copy(Config.ARGB_8888, true);
    destroyDrawingCache();
    return b;
}
I am using this method to get the image, resize it to a specific resolution (800x800), and save it to my app's folder.
The problem I noticed is that the resolution of the photo extracted from the drawing cache depends on the device. For example, on my Full HD device I get a 1080x1080 photo, which is enough, but on some lower-res devices I get resolutions like 480x480, and that is not enough, since the image needs to be bigger than that; the photo gets blurry when I resize it to 800x800.
Is there a way to get the same resolution photo from that image view on all devices?
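One device-independent alternative (an untested sketch) is to skip the drawing cache and draw the view into an offscreen Bitmap of the required size. Whether SubsamplingScaleImageView re-decodes its tiles at the larger scale or merely upscales the ones already loaded depends on the library's draw path:
public Bitmap getOutput(int outWidth, int outHeight) {
    Bitmap b = Bitmap.createBitmap(outWidth, outHeight, Bitmap.Config.ARGB_8888);
    Canvas canvas = new Canvas(b);
    // Scale the canvas so the view's draw calls fill the requested size
    canvas.scale((float) outWidth / getWidth(), (float) outHeight / getHeight());
    draw(canvas); // re-runs the draw pass instead of copying cached screen pixels
    return b;
}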
