How to connect FlutterFlow with Java code? - java

I want to create a face-recognition mobile app. I built the interface in FlutterFlow and found face-recognition code written in Java, but I don't know how to connect the two. I have never written Java before and have never used Android Studio. I hope you can help me, thanks!
I tried to find an answer everywhere, but I didn't find what I needed.
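For reference: FlutterFlow exports a plain Flutter project, so the usual way to reach Java is to download the code, open its android/ folder in Android Studio, and bridge the two sides with a platform channel. Below is a minimal sketch of the Java (host) side only; the channel name com.example/face_recognition, the method name recognizeFace, and the recognizeFace() helper are placeholders, not part of FlutterFlow or of the face-recognition code you found. The Dart side would call the same channel with MethodChannel('com.example/face_recognition').invokeMethod('recognizeFace', ...).

    import androidx.annotation.NonNull;
    import io.flutter.embedding.android.FlutterActivity;
    import io.flutter.embedding.engine.FlutterEngine;
    import io.flutter.plugin.common.MethodChannel;

    public class MainActivity extends FlutterActivity {
        // The channel name is arbitrary; the Dart side must use the same string.
        private static final String CHANNEL = "com.example/face_recognition";

        @Override
        public void configureFlutterEngine(@NonNull FlutterEngine flutterEngine) {
            super.configureFlutterEngine(flutterEngine);
            new MethodChannel(flutterEngine.getDartExecutor().getBinaryMessenger(), CHANNEL)
                .setMethodCallHandler((call, result) -> {
                    if (call.method.equals("recognizeFace")) {
                        // A Dart Uint8List arrives here as byte[].
                        byte[] imageBytes = call.argument("image");
                        String personName = recognizeFace(imageBytes);
                        result.success(personName);
                    } else {
                        result.notImplemented();
                    }
                });
        }

        private String recognizeFace(byte[] imageBytes) {
            // Placeholder: wire the Java face-recognition code you found in here.
            return "unknown";
        }
    }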

Related

How do I connect my Unity3d model to my WPILib code for FRC Java?

I want to simulate my FRC robot code in Unity3d. For this, I watched the video by Bill Kinahan on YouTube and was able to make the Unity3d model, but I am having trouble running my Java code and connecting it to Unity.
Link to the YouTube video: https://www.youtube.com/watch?v=Qryc2ck-AgY&t=2514s
I have been trying everything. Since I have no prior experience with WebSockets, I don't know how to use them or where to include them in the code. Any kind of help will be much appreciated. Thank you.
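One way to get started, sketched under the assumption that the Unity side opens a WebSocket server on ws://localhost:8080 (the URL, port, and JSON shape here are made up for illustration, not taken from the video): Java 11 ships a WebSocket client in java.net.http, so the robot code can push its simulated pose to Unity without any extra library.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.WebSocket;
    import java.util.Locale;
    import java.util.concurrent.CompletionStage;

    public class UnityBridge implements WebSocket.Listener {
        private WebSocket socket;

        // Connect to the (assumed) Unity-hosted WebSocket server.
        public void connect() {
            socket = HttpClient.newHttpClient()
                .newWebSocketBuilder()
                .buildAsync(URI.create("ws://localhost:8080"), this)
                .join();
        }

        // Push the simulated robot pose to Unity as a small JSON message.
        public void sendPose(double x, double y, double headingDeg) {
            String json = String.format(Locale.US,
                "{\"x\":%.3f,\"y\":%.3f,\"heading\":%.3f}", x, y, headingDeg);
            socket.sendText(json, true);
        }

        @Override
        public CompletionStage<?> onText(WebSocket webSocket, CharSequence data, boolean last) {
            System.out.println("From Unity: " + data);
            return WebSocket.Listener.super.onText(webSocket, data, last);
        }
    }

On the Unity side you would parse the JSON and move the model; calling sendPose() from a periodic method in the robot code keeps the model in sync with the simulation.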

How to store recognized faces so that they can be used when the app is closed and on different devices

Not sure if this is the right place for this, but I haven't been able to find anything useful online, so any advice would be appreciated. I'm working on an Android facial-recognition app that uses Firebase ML to detect faces and TensorFlow to recognize them. That all works fine, but when I register a user I don't know how to store the face on a server so that it can be checked from other devices, or how to save it on the original device so it doesn't need to be re-registered after the app is closed.
I've looked for a while online and surprisingly couldn't find anything related to this. My rough idea is to save a bitmap of the picture and store it both locally and remotely in a SQL database, but if people with experience or a better understanding of this area could point me in the right direction, that would be great!
This is my first project in the ML world, and I am working off of this GitHub repository: https://github.com/estebanuri/face_recognition
It's late here, so I will answer any and all questions in the morning!
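One direction worth considering, sketched with heavy assumptions: recognizers like the one in that repository typically reduce each face to an embedding vector (a float[]), and storing that vector is both smaller and more privacy-friendly than storing bitmaps. Since the app already uses Firebase, Cloud Firestore is a natural remote store; the "faces" collection, the field names, and the encoding scheme below are arbitrary choices for illustration. Matching would then happen on-device by comparing a freshly computed embedding against the downloaded ones (e.g. by Euclidean distance).

    import android.util.Base64;
    import com.google.firebase.firestore.FirebaseFirestore;
    import java.nio.ByteBuffer;
    import java.util.HashMap;
    import java.util.Map;

    public class FaceStore {
        // Pack the recognizer's float[] embedding into a compact string field.
        private static String encode(float[] embedding) {
            ByteBuffer buf = ByteBuffer.allocate(embedding.length * 4);
            for (float v : embedding) buf.putFloat(v);
            return Base64.encodeToString(buf.array(), Base64.NO_WRAP);
        }

        // Reverse of encode(), for embeddings fetched on another device.
        private static float[] decode(String encoded) {
            ByteBuffer buf = ByteBuffer.wrap(Base64.decode(encoded, Base64.NO_WRAP));
            float[] embedding = new float[buf.remaining() / 4];
            for (int i = 0; i < embedding.length; i++) embedding[i] = buf.getFloat();
            return embedding;
        }

        // Upload one registered face under the user's id.
        public static void register(String userId, float[] embedding) {
            Map<String, Object> doc = new HashMap<>();
            doc.put("embedding", encode(embedding));
            FirebaseFirestore.getInstance()
                .collection("faces")
                .document(userId)
                .set(doc);
        }
    }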

Sceneform: Render WebRTC media stream in to ExternalTexture

I am trying to develop an AR app to help visually impaired people improve how they access a computer.
I am investigating how AR can help HCI for visual disabilities, so the application uses WebRTC to stream the computer desktop and magnify it in the AR environment using Sceneform.
I have successfully used the Sceneform example https://github.com/google-ar/sceneform-android-sdk/tree/master/samples/chromakeyvideo, but I have no idea how to render the WebRTC stream directly to an ExternalTexture. -> https://github.com/judicapo/MUITSS-ARBE/tree/master/SampleApps/ARCK
I have already tried some Stack Overflow answers but have not found the clue.
Thank you all for your replies; I hope someone has an idea.
Honestly, I have not worked with this, but I can think of one way to approach it.
Instead of chroma-key rendering onto a texture, why don't you try a ViewRenderable? With it you can attach any Android View to a node; you just need to place a VideoView and do your WebRTC magic. Let me know if this works. A rough sketch follows after the links below.
https://developers.google.com/ar/reference/java/sceneform/reference/com/google/ar/sceneform/rendering/ViewRenderable
https://developers.google.com/ar/develop/java/sceneform/create-renderables
AugmentedImage example - https://proandroiddev.com/arcore-sceneform-simple-video-playback-3fe2f909bfbc
ViewRenderable example - https://github.com/Hariofspades/ARExperiments/blob/master/AugmentedImages/app/src/main/java/com/hariofspades/augmentedimages/common/AugmentedImageNode.java
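An untested sketch of that idea, using a SurfaceViewRenderer instead of a plain VideoView because modern WebRTC tracks render into a VideoSink; the ArSceneView, the R.layout.remote_video layout, and the remote VideoTrack are assumptions for illustration (R is the app's generated resource class):

    import android.content.Context;
    import com.google.ar.sceneform.ArSceneView;
    import com.google.ar.sceneform.Node;
    import com.google.ar.sceneform.rendering.ViewRenderable;
    import org.webrtc.EglBase;
    import org.webrtc.SurfaceViewRenderer;
    import org.webrtc.VideoTrack;

    public class RemoteDesktopNode {
        // R.layout.remote_video is a hypothetical layout whose root view is a
        // single SurfaceViewRenderer.
        public static void attach(Context context, ArSceneView sceneView,
                                  VideoTrack remoteTrack, EglBase eglBase) {
            ViewRenderable.builder()
                .setView(context, R.layout.remote_video)
                .build()
                .thenAccept(renderable -> {
                    // Put the inflated view on a node in the scene.
                    Node node = new Node();
                    node.setParent(sceneView.getScene());
                    node.setRenderable(renderable);
                    // Hand the view to WebRTC so frames draw into it.
                    SurfaceViewRenderer renderer =
                            (SurfaceViewRenderer) renderable.getView();
                    renderer.init(eglBase.getEglBaseContext(), null); // required once
                    remoteTrack.addSink(renderer);
                });
        }
    }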

Android webRTC VideoTrack.addRenderer api change

In all the tutorials I have encountered so far, the code looks like someVideoTrack.addRenderer(new VideoRenderer(someSurfaceViewRenderer));.
Yet in the latest version of WebRTC for Android, the VideoRenderer constructor looks like this:
public VideoRenderer(VideoRenderer.Callbacks callbacks);
Thus, there is no SurfaceViewRenderer parameter in sight.
Can anyone explain how to connect a SurfaceViewRenderer to a VideoTrack?
Thanks in advance.
I'm not knowledgeable enough to explain why this works, but I managed to get my application working by replacing
someVideoTrack.addRenderer(new VideoRenderer(someSurfaceViewRenderer));
with
someVideoTrack.addSink(someSurfaceViewRenderer);
I managed to find my answer in this thread:
Local Video Renderer in Android WebRTC
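For context, a minimal sketch of the modern rendering path (the class and method names wrapping it are illustrative; the one real requirement is that init() runs once before the first frame arrives):

    import org.webrtc.EglBase;
    import org.webrtc.SurfaceViewRenderer;
    import org.webrtc.VideoTrack;

    public class RemoteRenderer {
        private final EglBase eglBase = EglBase.create();

        // renderer comes from the layout; track is the video track
        // received from (or added to) the peer connection.
        public void show(SurfaceViewRenderer renderer, VideoTrack track) {
            renderer.init(eglBase.getEglBaseContext(), null); // set up EGL once
            track.addSink(renderer); // replaces addRenderer(new VideoRenderer(...))
        }
    }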

Bluetooth Low Energy discovery in Java SE

I'm trying to discover a BLE device from Java SE (J2SE). I searched a lot on Google and couldn't find any libraries or ideas about how to discover such a device.
Android has BluetoothGatt, so I assume there must be a way to do this on the desktop as well.
Do you have any ideas for getting around this problem?
I hope you will be able to help me!
Found a solution in Python. - Joseph Gremaud
One quick Google search got me here:
https://github.com/movisens/SmartGattLib
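As far as I can tell, SmartGattLib mainly provides the well-known GATT service and characteristic UUIDs; it does not scan for devices by itself. Desktop Java has no standard BLE API, so one pragmatic workaround on Linux is to shell out to BlueZ's hcitool and parse its output. A minimal sketch, assuming Linux with BlueZ installed and sufficient privileges (this is a workaround, not a SmartGattLib feature):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    public class BleScan {
        // "hcitool lescan" usually needs root or the cap_net_raw capability,
        // and it keeps scanning until killed. Each result line looks like
        // "AA:BB:CC:DD:EE:FF DeviceName".
        public static void main(String[] args) throws Exception {
            Process scan = new ProcessBuilder("hcitool", "lescan").start();
            try (BufferedReader out = new BufferedReader(
                    new InputStreamReader(scan.getInputStream()))) {
                String line;
                while ((line = out.readLine()) != null) {
                    System.out.println("Advertisement: " + line);
                }
            }
        }
    }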
