DJI Phantom 3 Custom Mission App, Delay Between Mission Steps - java

I am developing an app in Android Studio that pilots the DJI Phantom 3 drone in a certain pattern, taking pictures at certain waypoints. I imported the DJI sample code into Android Studio, entered an app key in AndroidManifest.xml, and modified the "CustomMissionView" code in the "MissionManager" directory to program the drone to fly in a specified pattern. However, when I run this project on the DJI simulator, there is a delay between each of the "steps" of the custom mission; sometimes the drone idles and hovers for a few seconds without doing anything. I want to know if there is any way to minimize the delay between steps of the custom mission without setting the flight speed. I suspect it has something to do with DJICommonCallbacks.DJICompletionCallback(), but I am not sure. I am new to Android Studio, so any advice would be helpful.
Here is some of the code inside the protected DJIMission method in the "CustomMissionView" Java file:
LinkedList<DJIMissionStep> steps = new LinkedList<DJIMissionStep>();

// Step 1: take off from the ground
steps.add(new DJITakeoffStep(new DJICommonCallbacks.DJICompletionCallback() {
    @Override
    public void onResult(DJIError error) {
        Utils.setResultToToast(mContext, "Takeoff step: " + (error == null ? "Success" : error.getDescription()));
    }
}));

// Step 2: reset the gimbal to the desired angle
steps.add(new DJIGimbalAttitudeStep(
        DJIGimbalRotateAngleMode.AbsoluteAngle,
        new DJIGimbalAngleRotation(true, -30f, DJIGimbalRotateDirection.Clockwise),
        null,
        null,
        new DJICommonCallbacks.DJICompletionCallback() {
            @Override
            public void onResult(DJIError error) {
                Utils.setResultToToast(mContext, "Set gimbal attitude step: " + (error == null ? "Success" : error.getDescription()));
            }
        }));

// Step 3: go 3 meters from the home point
steps.add(new DJIGoToStep(mHomeLatitude, mHomeLongitude, 3, new DJICommonCallbacks.DJICompletionCallback() {
    @Override
    public void onResult(DJIError error) {
        Utils.setResultToToast(mContext, "Goto step: " + (error == null ? "Success" : error.getDescription()));
    }
}));

The pause between each step is due to how DJI implemented custom missions. When you prepare a custom mission, no mission information is sent to the aircraft itself; the custom mission is built on the device running the app. During execution, one step at a time is sent to the aircraft, and only when that step has completed successfully is the next step sent. That hand-off is what causes the pause between steps. If the signal from the remote controller to the aircraft becomes weak, the mission can also fail by timing out.
Waypoint missions do not have this pause because the entire mission is uploaded to the aircraft when it is prepared.
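If the per-step pause matters more than the flexibility of custom mission steps, a waypoint mission is the usual workaround. A rough sketch, assuming the SDK 3.x-era DJIWaypointMission/DJIWaypoint classes that match the rest of this question (lat1/lon1 and lat2/lon2 are placeholder coordinates; verify the class and method names against your SDK version):

// The full route is uploaded to the aircraft when the mission is prepared,
// so there is no per-step round trip during flight.
DJIWaypointMission mission = new DJIWaypointMission();
mission.addWaypoint(new DJIWaypoint(lat1, lon1, 3f)); // latitude, longitude, altitude in meters
mission.addWaypoint(new DJIWaypoint(lat2, lon2, 3f));
// Prepare and start it through the mission manager the same way the sample
// handles the custom mission; per-waypoint actions (e.g. take photo) can be
// attached to each DJIWaypoint to cover the picture-at-each-waypoint case.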

Related

Minestom (Minecraft) water bucket placing

I am creating a Minecraft server using Minestom, which is a server-building library that comes with essentially no game code, so you have to make everything yourself. I'm trying to make it so players can place water, but it doesn't always work. If I'm falling and place it, it sometimes gets placed client-side but not server-side; when it is placed server-side, the confirmation message shows up in the chat.
globalEventHandler.addListener(PlayerUseItemOnBlockEvent.class, event -> {
    final Player player = event.getPlayer();
    if (event.getItemStack().getMaterial() != Material.WATER_BUCKET) {
        return;
    }
    if (player.getInstance().getBlock(new Vec(event.getPosition().x(),
                                              event.getPosition().y(),
                                              event.getPosition().z())) == Block.IRON_BLOCK
            && event.getBlockFace().normalY() == 1) {
        Point placedPos = event.getPosition();
        placedPos.withX(placedPos.x() + event.getBlockFace().normalX());
        placedPos.withY(placedPos.y() + event.getBlockFace().normalY());
        placedPos.withZ(placedPos.z() + event.getBlockFace().normalZ());
        player.getInstance().setBlock(placedPos, Block.WATER);
        player.sendMessage("placed water");
    }
    player.getInventory().update();
});
Video (ignore the platform disappearing; that is a separate bug I know how to fix but haven't fixed yet, and it also only happens when the water is placed server-side):
https://youtu.be/njH58gbXPlE
I believe the look vector used for placing the water is the next tick's look vector, but the server hasn't received that new look vector yet, so it uses the old one.
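Separately from the look-vector timing, one thing stands out in the posted handler: Minestom's Point is immutable, so the withX/withY/withZ calls return new instances whose results are discarded. A minimal sketch of the offset computed with the same event methods used above:

// Offset the clicked position by the block face normal and keep the result.
Point placedPos = event.getPosition().add(
        event.getBlockFace().normalX(),
        event.getBlockFace().normalY(),
        event.getBlockFace().normalZ());
player.getInstance().setBlock(placedPos, Block.WATER);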

Android sampling rates variation of hardware Sensors on Nexus 6P

I'm developing an Android app for research, and I'm reading data from several sensors (accelerometer, gyroscope, barometer, etc.).
I have four Nexus 6P devices, all with the newest factory image and freshly set up, with no apps installed other than the pre-installed standard ones.
The problem is that one of the phones is constantly lagging behind. For example, I record the accelerometer for half an hour at 105 Hz (the maximum possible rate for the accelerometer is 400 Hz), just to make sure I get at least roughly the number of samples I would expect at 100 Hz. The results are the following:
Sampling for half an hour at 100 Hz -> 180000 samples
Sampling for half an hour at 105 Hz -> 189000 samples
(This is just an example for the accelerometer, but it is the same for every other sensor on each device: devices 1, 3 and 4 get about the same good results for the other sensors, while device 2 gets the same bad results on all of its other sensors.)
Device 1: 180000 samples
Device 2: 177273 samples <- the phone that is lagging behind
Device 3: 181800 samples
Device 4: 179412 samples
So the problem is device number 2, where I'm missing almost 3000 samples (I know this is complaining at a high level), and my guess is that the problem is probably hardware related. A performance issue I can probably rule out, since it does not matter how many sensors I'm reading, and reading them at 400 Hz works as expected (if wanted, I can also provide the samples for that). I also tried setting the sampling rate to 400 Hz, i.e. the fastest, and then selecting recordings according to the timestamp, which led to the same result.
Just in case, here is how I register the sensor listener:
protected void onCreate(Bundle savedInstanceState) {
    sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
    unaccDataSensor = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER_UNCALIBRATED);
}
....
private void Start() {
    // 10000 microseconds between samples = 100 Hz
    sensorManager.registerListener(unaccDataListener, unaccDataSensor, 10000);
}
What I want is to get at least roughly the number of samples I should expect; more than that is no problem, and slightly fewer is also acceptable.
If anyone has an idea what else I can try or what could be causing the problem, I would be really thankful.
This is my first post, so if anything is missing or poorly explained, I'm sorry and I'll do my best to fix it.
I work with Android sensors a lot, and I can tell you the hardware is of variable quality. I usually use a filter if I need the results to be consistent across phones:
// Filter to remove readings that come too often
if (TS < LAST_TS_ACC + 100) {
    //Log.d(TAG, "onSensorChanged: skipping");
    return;
}
However, this means you can only set the phones to match the lowest common denominator. If it helps, I find that anything above 25 Hz is overkill for most applications, even medical ones.
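For reference, 25 Hz corresponds to a 40,000 µs sampling period when registering the listener (reusing the listener and sensor names from the question):

// 40,000 microseconds between events ≈ 25 Hz
sensorManager.registerListener(unaccDataListener, unaccDataSensor, 40000);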
It also helps to make sure any file writes you are doing are done off the main thread, and in batches, as writing to a file is an expensive operation.
accelBuffer = new StringBuilder();
accelBuffer.append(LAST_TS_ACC + "," + event.values[0] + "," + event.values[1] + "," + event.values[2] + "\n");

if ((accelBuffer.length() > 500000) && (writingAccelToFile == false)) {
    writingAccelToFile = true;
    AccelFile = new File(path2 + "/Acc/" + LAST_TS_ACC + "_Service.txt");
    Log.d(TAG, "onSensorChanged: accelfile created at : " + AccelFile.getPath());
    File parent = AccelFile.getParentFile();
    if (!parent.exists() && !parent.mkdirs()) {
        throw new IllegalStateException("Couldn't create directory: " + parent);
    }
    // Use a thread to take the write off the UI thread
    new Thread(new Runnable() {
        @Override
        public void run() {
            //Log.d(TAG, "onSensorChanged: in accelbuffer");
            //Log.d(TAG, "run: in runnable");
            //writeToStream(accelBuffer);
            writeStringBuilderToFile(AccelFile, accelBuffer);
            accelBuffer.setLength(0);
            writingAccelToFile = false;
        }
    }).start();
}
Doing all of the above has got me reasonably good results, but it will never be perfect due to differences in the hardware.
Good luck!
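If spawning a new Thread for every flush ever becomes a concern, a single long-lived HandlerThread can serve the same purpose; a minimal sketch reusing the writeStringBuilderToFile helper from above (the surrounding fields are assumed to match that snippet):

// One background thread that receives write jobs in order.
HandlerThread writerThread = new HandlerThread("sensor-file-writer");
writerThread.start();
final Handler writeHandler = new Handler(writerThread.getLooper());

// When the buffer is full, hand a snapshot of it to the writer thread.
final String chunk = accelBuffer.toString();
accelBuffer.setLength(0);
writeHandler.post(new Runnable() {
    @Override
    public void run() {
        writeStringBuilderToFile(AccelFile, new StringBuilder(chunk));
    }
});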

Detecting the device presence in a Pocket

My application needs to know whether the phone is in a pocket or in the hand; based on that, a few user-specific parameters are set before moving on to the next tasks.
I have read various blogs and also the SensorManager page on the Android developer site, but none of them helped me. The only related question I found on Stack Overflow is this one, with no solution, though one comment on that question suggests using the Awareness API. I am going through it; my understanding is that the detected user activity is the context to use for this, but I may be wrong. If anyone has worked on this, or is doing R&D on it, please share your observations; they may help me find a way forward.
Is there any way to find out whether the phone is in a pocket or not? If yes, can somebody tell me how to do it?
Any guidance/links to the relevant concepts are helpful.
Thanks.
I implemented this in my project. I take readings from the light sensor, the accelerometer and the proximity sensor. Keep in mind that it only approximately detects whether the device is in a pocket.
Getting the current parameters from the sensors (accelerometer, proximity and light sensors):
@Override
public void onSensorChanged(SensorEvent event) {
    if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
        g = new float[3];
        g = event.values.clone();
        double norm_Of_g = Math.sqrt(g[0] * g[0] + g[1] * g[1] + g[2] * g[2]);
        g[0] = (float) (g[0] / norm_Of_g);
        g[1] = (float) (g[1] / norm_Of_g);
        g[2] = (float) (g[2] / norm_Of_g);
        inclination = (int) Math.round(Math.toDegrees(Math.acos(g[2])));
        accReading.setText("XYZ: " + round(g[0]) + ", " + round(g[1]) + ", " + round(g[2]) + " inc: " + inclination);
    }
    if (event.sensor.getType() == Sensor.TYPE_PROXIMITY) {
        proximityReading.setText("Proximity Sensor Reading:" + String.valueOf(event.values[0]));
        rp = event.values[0];
    }
    if (event.sensor.getType() == Sensor.TYPE_LIGHT) {
        lightReading.setText("LIGHT: " + event.values[0]);
        rl = event.values[0];
    }
    if ((rp != -1) && (rl != -1) && (inclination != -1)) {
        main.detect(rp, rl, g, inclination);
    }
}
Then based on this data I decide whether or not the device is in a pocket:
public void detect(float prox, float light, float g[], int inc) {
    // proximity sensor covered, very little light, top of the phone pointing down,
    // device roughly vertical (inclination between 75 and 100 degrees)
    if ((prox < 1) && (light < 2) && (g[1] < -0.6) && ((inc > 75) && (inc < 100))) {
        pocket = 1;
        // IN POCKET
    }
    if ((prox >= 1) && (light >= 2) && (g[1] >= -0.7)) {
        if (pocket == 1) {
            playSound();
            pocket = 0;
        }
        // OUT OF POCKET
    }
}
Keep in mind that it's not fully accurate.
Code: https://github.com/IvanLudvig/PocketSword
Blog post: https://ivanludvig.github.io/blog/2019/06/21/detecting-device-in-a-pocket-android.html
The closest we can get to a solution is by using the two sensors below; the Google Awareness API won't solve the problem, as it has an entirely different purpose.
Light sensor (environment sensor)
Proximity sensor (position sensor)
The Android platform provides four sensors that let you monitor various environmental properties. You can use these sensors to monitor:
relative ambient humidity
illuminance
ambient pressure
ambient temperature
All four environment sensors are hardware-based and are available only if a device manufacturer has built them into a device. With the exception of the light sensor, which most device manufacturers use to control screen brightness, environment sensors are not always available on devices. Because of this, it's particularly important that you verify at run time whether an environment sensor exists before you attempt to acquire data from it.
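For example, the runtime check for the light sensor is just a null test:

// getDefaultSensor() returns null if the device has no sensor of that type
SensorManager sm = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
if (sm.getDefaultSensor(Sensor.TYPE_LIGHT) == null) {
    // no light sensor on this device; fall back to other signals
}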
The light sensor can be used to measure light intensity. For example, many mobile phones have an auto-brightness mode; this function uses the light sensor to adjust screen brightness according to the light intensity.
There are several units, such as lux, candela and lumen, for measuring light intensity.
Given this, there will be a considerable difference in light intensity between the phone being in a pocket and outside it.
However, the same readings will occur when you are operating the phone in a dark room, or anywhere the light intensity is quite low; distinguishing between such cases is the real challenge. You can use the other environment sensors in combination with the light sensor to reach a more reliable outcome, but I assume a fully accurate solution is unlikely.
To learn more about these sensors, refer to the following links:
https://developer.android.com/guide/topics/sensors/sensors_environment.html
https://developer.android.com/guide/topics/sensors/sensors_position.html
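As a rough illustration of combining the light and proximity sensors described above, inside an Activity (the thresholds are arbitrary assumptions, not calibrated values):

// Register the light and proximity sensors and apply naive thresholds.
SensorManager sm = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
Sensor light = sm.getDefaultSensor(Sensor.TYPE_LIGHT);
Sensor proximity = sm.getDefaultSensor(Sensor.TYPE_PROXIMITY);

SensorEventListener listener = new SensorEventListener() {
    float lux = -1, prox = -1;

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() == Sensor.TYPE_LIGHT) lux = event.values[0];
        if (event.sensor.getType() == Sensor.TYPE_PROXIMITY) prox = event.values[0];
        // Both sensors dark/covered at the same time -> probably in a pocket (or a dark bag).
        if (lux >= 0 && prox >= 0) {
            boolean probablyInPocket = lux < 2 && prox < 1;
            Log.d("PocketCheck", "in pocket? " + probablyInPocket);
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
};

sm.registerListener(listener, light, SensorManager.SENSOR_DELAY_NORMAL);
sm.registerListener(listener, proximity, SensorManager.SENSOR_DELAY_NORMAL);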
The Google Awareness API won't work for this case, as it provides an entirely different kind of solution.
It provides two APIs:
Fence API
Snapshot API
You can use the Snapshot API to get information about the user's current environment. Using the Snapshot API, you can access a variety of context signals:
Detected user activity, such as walking or driving.
Nearby beacons that you have registered.
Headphone state (plugged in or not)
Location, including latitude and longitude.
Place where the user is currently located.
Weather conditions in the user's current location.
Using the Fence API, you can define fences based on context signals such as:
The user's current location (lat/lng)
The user's current activity (walking, driving, etc.).
Device-specific conditions, such as whether the headphones are plugged in.
Proximity to nearby beacons.
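For completeness, querying one of these Snapshot signals looks roughly like this (this assumes the play-services-awareness dependency and an Awareness API key in the manifest, and it still tells you nothing about pockets):

// Ask the Snapshot API for the user's most probable current activity.
Awareness.getSnapshotClient(this)
        .getDetectedActivity()
        .addOnSuccessListener(response -> {
            DetectedActivity probable =
                    response.getActivityRecognitionResult().getMostProbableActivity();
            Log.d("Awareness", "activity=" + probable.getType()
                    + " confidence=" + probable.getConfidence());
        });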
For a cross-platform solution, you can now use the NumberEight SDK for this task.
It performs a wide variety of context recognition tasks on both iOS and Android including:
Real-time physical activity detection
Device position detection (e.g. presence in a pocket)
Motion detection
Reachability
Local weather
It can also record user context for reports and analysis via the online portal.
How to detect whether a phone is in a pocket:
For example, to subscribe to device position updates in Kotlin, you would do:
val ne = NumberEight()
ne.onDevicePositionUpdated { glimpse ->
    if (glimpse.mostProbable.state == State.InPocket) {
        Log.d("MyApp", "Phone is in a pocket!")
    }
}
or in Java:
NumberEight ne = new NumberEight();
ne.onDevicePositionUpdated(
    new NumberEight.SubscriptionCallback<NEDevicePosition>() {
        @Override
        public void onUpdated(@NonNull Glimpse<NEDevicePosition> glimpse) {
            if (glimpse.mostProbable.state == State.InPocket) {
                Log.d("MyApp", "Phone is in a pocket!");
            }
        }
    });
Here are some iOS and Android example projects.
Disclosure: I'm one of the developers.

Android4OpenCV: setting resolution at startup

I'm using Android4OpenCV to do some live image processing, and I'd like to use the smallest resolution the camera can offer. The default resolution is the largest the camera can offer.
I'm looking at the 3rd example, which allows the user to change resolutions via a menu. I'd like to modify that example to change the resolution at startup instead of requiring the user to go through the menu. To do that, I simply add two lines to the otherwise empty onCameraViewStarted() function:
public void onCameraViewStarted(int width, int height) {
    android.hardware.Camera.Size res = mOpenCvCameraView.getResolutionList().get(mOpenCvCameraView.getResolutionList().size() - 1);
    mOpenCvCameraView.setResolution(res);
}
And the thing is, this works perfectly fine on my Galaxy Nexus, running Android 4.2.2. The app starts up, and the resolution is set correctly.
However, when I run the exact same app on a Nexus 7 tablet running Android 5.1, the app hangs on the call to setResolution(). Actually, it works okay one time, but then hangs the second time you try to run it, even if you completely exit the app, remove it from the running apps, or restart the device. Other users are reporting the same error as well, so it's not just the Nexus 7 device; in fact, my Galaxy Nexus seems to be the only device where this works.
Specifically, the application goes into the setResolution() function, which then calls org.opencv.android.JavaCameraView.disconnectCamera(), which looks like this:
(Note: this code is internal to the OpenCV4Android library, this is not my code)
protected void disconnectCamera() {
    /* 1. We need to stop thread which updating the frames
     * 2. Stop camera and release it
     */
    Log.d(TAG, "Disconnecting from camera");
    try {
        mStopThread = true;
        Log.d(TAG, "Notify thread");
        synchronized (this) {
            this.notify();
        }
        Log.d(TAG, "Wating for thread");
        if (mThread != null)
            mThread.join();
    } catch (InterruptedException e) {
        e.printStackTrace();
    } finally {
        mThread = null;
    }

    /* Now release camera */
    releaseCamera();
}
Looking at the logs, I can see that the thread gets stuck on the synchronized(this) line. The only other thing that synchronizes on that Object is the inner JavaCameraView.CameraWorker class, which is the mThread variable in the above code, started by the JavaCameraView class:
(Note: this code is internal to the OpenCV4Android library, this is not my code)
private class CameraWorker implements Runnable {
    public void run() {
        do {
            synchronized (JavaCameraView.this) {
                try {
                    JavaCameraView.this.wait();
                } catch (InterruptedException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
            }
            if (!mStopThread) {
                if (!mFrameChain[mChainIdx].empty())
                    deliverAndDrawFrame(mCameraFrame[mChainIdx]);
                mChainIdx = 1 - mChainIdx;
            }
        } while (!mStopThread);
        Log.d(TAG, "Finish processing thread");
    }
}
I've tried futzing with that code, changing the notify() to notifyAll(), and maintaining a List of CameraWorker threads and joining each one. But no matter what, the app still hangs at the disconnectCamera() call.
My questions are:
How can I modify the third OpenCV4Android example so that its resolution is set at startup?
What is causing the app to hang?
Why does this work on some devices but not others?
Edit: I haven't received any comments or answers, so I've crossposted to the OpenCV forums here.
Edit 2: As per cyriel's suggestion, I've tried setting the resolution after several frames have gone by:
int frames = 0;

public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
    frames++;
    if (frames == 6) {
        android.hardware.Camera.Size res = mOpenCvCameraView.getResolutionList().get(mOpenCvCameraView.getResolutionList().size() - 1);
        mOpenCvCameraView.setResolution(res);
    }
    return inputFrame.rgba();
}
However, now this gets stuck in the same exact place, even on my Galaxy Nexus, which works if I set the resolution in the onCameraViewStarted() function. I've tried increasing the frame count to 7 and even 100, but I always get stuck in the same place.
The most important question in your situation is whether it works if you don't modify the code at all: are you able to change the resolution (via the menu) without crashing the app?
If yes, then the answer is most likely simple; it's the same bug in OpenCV as in the Windows version: before changing the camera resolution or fps (and most likely any property), you need to grab at least one frame (use 3-5 to be sure) before changing that property.
If no, then most likely there is nothing you can do on your own; file a bug report and wait for comments. The only alternative is to use another library to grab frames from the camera and then convert them to OpenCV objects.

Capturing video from multiple usb cams and showing in a UI side by side using JAVA + Java applet

I have to make 2 applets which will run in a Tomcat-like server. When I access the web page (an HTML page) on the client side, there are 2 cameras attached to that client PC, and I want to show the videos from both cameras on the 2 web pages at the client side at the same time.
I have tried using JMF. The output is:
It doesn't work simultaneously for both cameras on most machines; it works for one camera capture at a time.
It works on some machines, but you have to select the cameras every time you open the web pages: select camera 1 for the first applet and camera 2 for the second applet.
Is there a way, with or without JMF, that I can open 2 web pages on one client PC, with 2 applets for the same running on a remote server, and show the video from each USB cam on each page?
I have used this while working with JMF.
private void StartStreaming()
{
    String mediaFile = "vfw:Micrsoft WDM Image Capture (Win32):0";
    try
    {
        MediaLocator mlr = new MediaLocator(mediaFile);
        _player = Manager.createRealizedPlayer(mlr);
        if (_player.getVisualComponent() != null)
        {
            setSize(480, 320);
            jpnVideoStream.add("South", _player.getVisualComponent());
        }
    }
    catch (Exception e)
    {
        System.err.println("Got exception " + e);
    }
    _player.start();
}
This is what is present in both of my applets. But as I said, most of the time it starts one cam and then gives a "device is in use and cannot capture" message for the other.
Please suggest any solution.
The problem is that you are trying to use the same webcam in both applets.
Instead use:
String mediaFile = "webcam 1" in applet 1
String mediaFile = "webcam 2" in applet 2
Your first webcam is: vfw:Micrsoft WDM Image Capture (Win32):0
You can check your second webcam by using JMStudio:
select File -> Preferences -> Capture Devices and then click on Detect Capture Devices.
This can also be done in code, but the above is simpler. Still, I'm listing the code:
Vector list = CaptureDeviceManager.getDeviceList(null);
int i;
CaptureDeviceInfo tempDevice;
// List all the devices ...
if (list != null) {
    if (list.size() == 0) {
        System.out.println("the device list is zero : ");
        System.exit(1);
    }
    System.out.println("The devices are : ");
    for (i = 0; i < list.size(); i++) {
        tempDevice = (CaptureDeviceInfo) list.elementAt(i);
        System.out.println(tempDevice.getName());
    }
}
NOTE: Try running the code as admin if it doesn't work.
If I recall correctly, in your code (the JMF implementation) there should be a list/array of devices (resources) that Java is trying to read the data (the webcam stream) from. My guess is that you need to change the code so that if resource one is busy, it tries to read from resource two; essentially you go over the entire list of resources and read whatever is available to you.
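A minimal sketch of that idea with plain JMF (in practice you would also filter the device list down to video formats):

// Walk the capture-device list and return a player for the first device
// that is not already in use by the other applet.
private Player firstAvailablePlayer() {
    Vector devices = CaptureDeviceManager.getDeviceList(null);
    for (int i = 0; devices != null && i < devices.size(); i++) {
        CaptureDeviceInfo info = (CaptureDeviceInfo) devices.elementAt(i);
        try {
            return Manager.createRealizedPlayer(info.getLocator());
        } catch (Exception inUseOrUnsupported) {
            // Device busy (grabbed by the other applet) or not a video source; try the next one.
        }
    }
    return null; // no free capture device found
}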
Hope that helps.
It may work with JavaCV http://code.google.com/p/javacv/
