Kivy Android Camera API 2 - Camera Rotation - java

I am using the camera feature defined in the following repo but with some changes. I just want to rotate the SurfaceTexture defined in camera2.py so that the camera can also work in portrait mode.
I tried the PushMatrix and PopMatrix solution, but it shadows the buttons in the camera UI. Hence, I want this to be solved on the Java side rather than the Kivy side.
This is the link to the repo:
https://github.com/inclement/colour-blind-camera
This is where the main problem lies:
https://github.com/inclement/colour-blind-camera/blob/master/camera2/camera2.py
I am not adding the whole snippet here since it is too long, but the relevant part is roughly the following (Line 263):
self.preview_resolution = resolution
self._prepare_preview_fbo(resolution)
self.preview_texture = Texture(
    width=resolution[0], height=resolution[1],
    target=GL_TEXTURE_EXTERNAL_OES, colorfmt="rgba")
logger.info("Texture id is {}".format(self.preview_texture.id))
self.java_preview_surface_texture = SurfaceTexture(int(self.preview_texture.id))
self.java_preview_surface_texture.setDefaultBufferSize(*resolution)
self.java_preview_surface = Surface(self.java_preview_surface_texture)
Any help appreciated a lot!

Related

How to get high quality image from arcore

I need a high quality image with ARCore. Currently I can extract the image, but the AR model does not show. I have tried to get the image from the draw frame.
First, get the supported camera configs and select one that fits your needs, for example the one with the highest width * height:
val bestConfig = session.getSupportedCameraConfigs(CameraConfigFilter(session)).maxByOrNull {
    it.imageSize.width * it.imageSize.height
}
Then reconfigure the ARCore session to use this camera configuration:
session.cameraConfig = bestConfig
Docs here: https://developers.google.com/ar/reference/java/com/google/ar/core/Session#setCameraConfig-cameraConfig
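If the rest of your project is in Java rather than Kotlin, the same selection could be sketched roughly as follows; this is only an illustrative helper (the method name is made up), assuming you already have an ARCore Session instance:
import com.google.ar.core.CameraConfig;
import com.google.ar.core.CameraConfigFilter;
import com.google.ar.core.Session;
import java.util.List;

void useHighestResolutionConfig(Session session) {
    // Query the camera configs supported on this device
    List<CameraConfig> configs =
            session.getSupportedCameraConfigs(new CameraConfigFilter(session));

    // Pick the config with the largest image size (width * height)
    CameraConfig bestConfig = configs.get(0);
    for (CameraConfig config : configs) {
        long pixels = (long) config.getImageSize().getWidth() * config.getImageSize().getHeight();
        long bestPixels = (long) bestConfig.getImageSize().getWidth() * bestConfig.getImageSize().getHeight();
        if (pixels > bestPixels) {
            bestConfig = config;
        }
    }

    // Apply it; per the ARCore docs the new config takes effect once the session is resumed
    session.setCameraConfig(bestConfig);
}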

How to properly render external texture in Sceneform 1.16.0?

Previously there was a nice article for rendering an external texture. None of that code works with Sceneform 1.16.0, as there are no .sfb, .sfm or .sfa formats anymore.
The new material format seems to be .matc, which is not human readable. How do you create or modify a material in this version of Sceneform?
Using sceneform_camera_material.matc it is possible to render the camera to the background of Sceneform, but it is very pixelated, irrespective of which camera preview resolution is chosen. glTF models look great when loaded; the issue is specific to the external texture.
Is this an issue related to linear filtering of textures, or something to do with the material settings of Google Filament?
If you use Sceneform 1.16 and want to create a sceneform.rendering.Material:
(1)
You need to create your own matc file with the Filament matc tool. You can download the Filament tools at https://github.com/google/filament/releases.
(2)
After you create your own matc file, put it in the Android res/raw directory and call:
com.google.ar.sceneform.rendering.Material.builder()
    .setSource(context, R.raw.YOUR_MATC_FILE)
    .build()
    .thenAccept { material ->
        // Do something with the created Sceneform Material
    }
    .exceptionally { throwable: Throwable? ->
        // Handle the failure; a value must be returned from exceptionally
        null
    }

Get the latest image from IP Camera in Java

I'm working on a Java app that will process the stream from an IP camera (Milesight MS-C2682-P) located on the local network. It will detect objects and trigger actions depending on what is in the image (let's say it will start an alarm when a person is detected); for that I need minimal delay.
I have an RTSP link "rtsp://username:password#ip_addr:rtsp_port/main" to access the stream from my IP camera, but in my Java app there is a 12-second delay (and it keeps increasing). This happens when images are not handled fast enough, so they get buffered. There are "hacks" and "workarounds" (OpenCV VideoCapture lag due to the capture buffer), but I believe there has to be a prettier solution.
The other link I was able to get is an HTTP one that also uses the H.264 codec (it can be used with MJPEG and MPEG4, if there is a way to use them effectively). "http://username:password#ip_addr:http_port/ipcam/mjpeg.cgi" works like a charm in Python and in the browser. However, it doesn't work in Java; an error is thrown:
OpenCV(4.2.0) C:\build\master_winpack-bindings-win64-vc14-static\opencv\modules\videoio\src\cap_images.cpp:253: error: (-5:Bad argument) CAP_IMAGES: can't find starting number (in the name of file): HTTP_URL in function 'cv::icvExtractPattern'
Both links work smoothly in VLC.
So the network is not a problem (because VLC handles the stream with minimal delay), and Python with OpenCV is also doing a good job. It all comes down to the Java implementation of OpenCV, I guess.
Here is the Java code:
VideoPlayer videoPlayer = new VideoPlayer(); // My class; it just creates and updates a JFrame. Works like a charm with the laptop's webcam, so certainly no issues there.
Mat image = new Mat();
VideoCapture ipCamera = new VideoCapture(RTSP_URL);
// or the HTTP link
// VideoCapture ipCamera = new VideoCapture(HTTP_URL);

// verify we got access to the camera
if (!ipCamera.isOpened()) {
    System.out.println("ERROR: Camera isn't working !!!");
    return;
}
System.out.println("OK: Connected to camera.");

while (true) {
    ipCamera.read(image);
    videoPlayer.updateVideo_MatImage(image);
}
And this is the Python code I'm using:
import cv2

cap = cv2.VideoCapture(RTSP_URL)
# or the HTTP link
# cap = cv2.VideoCapture(HTTP_URL)

while True:
    ret, image = cap.read()
    cv2.imshow("Test", image)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cv2.destroyAllWindows()
I just need to get the latest image when a request is made, so I need to avoid any kind of buffering. It has to be implemented in Java, since that is a requirement for this project.
So, is there a way to get only the latest image from the camera?
What could cause the error mentioned above?
Thank you guys for any advice.
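For reference, the kind of "keep only the latest frame" workaround hinted at above (draining the capture buffer on a background thread) could be sketched in Java roughly like this; the class name and structure are made up for illustration, not a library API:
import org.opencv.core.Mat;
import org.opencv.videoio.VideoCapture;

// Hypothetical helper: reads frames as fast as they arrive so OpenCV's internal
// buffer never fills up, and hands out only the most recent frame on request.
public class LatestFrameGrabber implements Runnable {
    private final VideoCapture capture;
    private final Mat latest = new Mat();
    private volatile boolean running = true;

    public LatestFrameGrabber(String url) {
        this.capture = new VideoCapture(url);
    }

    @Override
    public void run() {
        Mat frame = new Mat();
        while (running && capture.isOpened()) {
            // read() blocks until the next frame arrives; reading continuously
            // keeps the buffer drained so 'latest' stays close to real time
            if (capture.read(frame)) {
                synchronized (latest) {
                    frame.copyTo(latest);
                }
            }
        }
        capture.release();
    }

    // Returns a copy of the newest frame seen so far
    public Mat getLatestFrame() {
        synchronized (latest) {
            return latest.clone();
        }
    }

    public void stop() {
        running = false;
    }
}
Started on its own thread (new Thread(grabber).start()), the detection code can then call getLatestFrame() whenever it needs the current image, instead of reading (and therefore buffering) the stream on the processing thread.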

Adding EnvironmentalReverb and PresetReverb to MediaPlayer (wav and m4a formats) but the reverb doesn't apply

I am working on a simple music app on Android and I have tried adding EnvironmentalReverb and PresetReverb to a MediaPlayer (wav and m4a formats), but the reverb doesn't apply. There is no change when the audio plays. I have checked whether my device supports the reverb using the code below, and it does. I have looked at similar questions on Stack Overflow, but there isn't an answer that works.
final AudioEffect.Descriptor[] effects = AudioEffect.queryEffects();
// Determine available/supported effects
for (final AudioEffect.Descriptor effect : effects) {
    Log.d("Effects", effect.name.toString() + ", type: " + effect.type.toString());
}
The code used for EnvironmentalReverb and PresetReverb is below.
First try
EnvironmentalReverb eReverb = new EnvironmentalReverb(1,0);
eReverb.setReverbDelay(85);
eReverb.setEnabled(true);
mMediaPlayer.attachAuxEffect(eReverb.getId());
mMediaPlayer.setAuxEffectSendLevel(1.0f);
Second try
PresetReverb mReverb = new PresetReverb(1, 0);
mReverb.setPreset(PresetReverb.PRESET_LARGEROOM);
mReverb.setEnabled(true);
mMediaPlayer.attachAuxEffect(mReverb.getId());
mMediaPlayer.setAuxEffectSendLevel(1.0f);
Both return 0 for setEnabled(true), but neither has any effect on the audio. Can someone please point me in the right direction? I am not sure what is wrong with the implementation.
Answering my own question so it can be helpful for someone else.
I wasn't able to get the PresetReverb to work. The EnvironmentalReverb, however, was working, but to find out whether it was working I had to add seek bars for the room level and reverb level so I could alter them in real time.
EnvironmentalReverb eReverb = new EnvironmentalReverb(0,0);
mMediaPlayer.attachAuxEffect(eReverb.getId());
mMediaPlayer.setAuxEffectSendLevel(1.0f);
I enabled the reverb on click of a button and then used seek bars to change the room level and reverb level.
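A minimal sketch of that wiring might look like the following; the SeekBar and Button variables are hypothetical placeholders (not from the original code), and the value ranges are the documented millibel ranges for EnvironmentalReverb (room level roughly -9000..0 mB, reverb level roughly -9000..2000 mB):
// Assumes eReverb is the EnvironmentalReverb created above and already
// attached to the MediaPlayer as an aux effect.
enableReverbButton.setOnClickListener(v -> eReverb.setEnabled(true));

// SeekBar with max = 9000, mapped onto the room level range of -9000..0 mB
roomLevelBar.setOnSeekBarChangeListener(new SeekBar.OnSeekBarChangeListener() {
    @Override
    public void onProgressChanged(SeekBar seekBar, int progress, boolean fromUser) {
        eReverb.setRoomLevel((short) (progress - 9000));
    }

    @Override public void onStartTrackingTouch(SeekBar seekBar) { }
    @Override public void onStopTrackingTouch(SeekBar seekBar) { }
});

// SeekBar with max = 11000, mapped onto the reverb level range of -9000..2000 mB
reverbLevelBar.setOnSeekBarChangeListener(new SeekBar.OnSeekBarChangeListener() {
    @Override
    public void onProgressChanged(SeekBar seekBar, int progress, boolean fromUser) {
        eReverb.setReverbLevel((short) (progress - 9000));
    }

    @Override public void onStartTrackingTouch(SeekBar seekBar) { }
    @Override public void onStopTrackingTouch(SeekBar seekBar) { }
});
Changing the values while the audio is playing makes it much easier to hear whether the effect is actually being applied.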

Take Screenshot of Android screen and save to SD card

There are a few questions here on SO about capturing screenshots of an Android application. However, I haven't found a solid solution for taking a screenshot programmatically using the Android SDK or any other method.
So I thought I would ask this question again in the hope of finding a good solution, ideally one that allows capturing full-length images that I can save to the SD card or somewhere similar.
I appreciate any help.
This is not possible directly on the device/emulator, unless it is rooted.
To be honest, all I need it for is the emulator, as this is for a testing application on a PC.
This sounds like a job for monkeyrunner.
The monkeyrunner tool can do the job for you with a bit of adb, using a Python (Jython) script:
from com.android.monkeyrunner import MonkeyRunner, MonkeyDevice
import subprocess

# wait for a device connection
device = MonkeyRunner.waitForConnection()
# take the current snapshot (returns a MonkeyImage)
snapshot = device.takeSnapshot()
# store the snapshot in the current dir on the PC
snapshot.writeToFile('current.png', 'png')
# copy it to the sd card of the device
subprocess.call(['adb', 'push', 'current.png', '/sdcard/android/com.test.myapp/current.png'])
Note: run this Jython script with:
monkeyrunner.bat <file name>
You will most likely not be happy with this answer, but the only ones that I have seen involve using native code, or executing native commands.
Edit:
I hadn't seen this one before. Have you tried it?:
http://code.google.com/p/android-screenshot-library/
Edit2: I checked that library, and it is also a bad solution. It requires that you start the service from a PC, so my initial answer still holds :)
Edit3: You should be able to save a view as an image by doing something similar to this. You might need to tweak it a bit so that you get the width/height of the view. (I'm inflating layouts and specifying the width/height when I lay out the views.)
View content = getView();
// Draw the view into an offscreen bitmap
Bitmap bitmap = Bitmap.createBitmap(width, height, Config.ARGB_8888);
Canvas canvas = new Canvas(bitmap);
content.draw(canvas);
// Write the bitmap out as a PNG file
File file = new File(pathAndFilename);
file.createNewFile();
FileOutputStream ostream = new FileOutputStream(file);
bitmap.compress(CompressFormat.PNG, 100, ostream);
ostream.close();
You can look at http://codaset.com/jens-riboe/droidatscreen/wiki (with a write-up at http://blog.ribomation.com/2010/01/droidscreen/): this is a Java library that uses adb to capture screenshots. I've been able (with a lot of elbow grease) to modify the source to let me automatically capture a timed series of screenshots (which I use for demo videos).
You can see the class structure at http://pastebin.com/hX5rQsSR
EDIT: You'd invoke it (after bundling all the requirements) like this:
java -cp DroidScreen.jar --adb "" --device "" --prefix "" --interval
