I would like to know if it's possible to run a Python script containing matplotlib graphs (e.g. 3D surfaces) in Android Studio.
If not, what would be another way to do that?
Well, I don't know if you can run a Python script inside Android Studio, but you can create a Python API that listens on a specific endpoint and make a POST request to that endpoint from your code in Android Studio. If you want to know how to create a service using Python, you can have a look at this link. You can also create an API using Flask/Django and deploy it as a service by writing a systemd unit file (as mentioned here for CentOS).
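For illustration, here is a minimal sketch of the Android side in Java, assuming a hypothetical Python service reachable from the emulator at http://10.0.2.2:5000/plot that renders the matplotlib figure and sends it back as a PNG (the endpoint, URL, and class name are made up for this example):

import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

public class PlotClient {
    // Posts the plot parameters and decodes the PNG the service returns.
    // Call this off the UI thread, e.g. from an AsyncTask or a Thread.
    public static Bitmap fetchPlot(String jsonParams) throws Exception {
        URL url = new URL("http://10.0.2.2:5000/plot"); // hypothetical endpoint
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/json");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(jsonParams.getBytes("UTF-8")); // send plot parameters
        }
        try (InputStream in = conn.getInputStream()) {
            return BitmapFactory.decodeStream(in); // PNG bytes -> Bitmap
        } finally {
            conn.disconnect();
        }
    }
}

The returned Bitmap can then be shown in an ImageView, so the graphs are rendered by matplotlib on the server and only displayed by the app.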
I was looking for a way to develop iOS apps with Java, specifically Java because I want to be able to use Processing as a Java library.
First I found RoboVM, only to find out that Microsoft shut it down after buying Xamarin.
Then I found Intel's Multi-OS Engine, which is a technical preview right now. It looks like you develop an Android app just as you're used to, with Java and Android Studio, then rewrite the UI (and probably some iOS-specific API calls) and build it for iOS, either on a Mac with Xcode or in Intel's build cloud (which seems to be free).
Using Processing in Android apps is not a new thing (even if it would be new to me), but it looks like things are different with iOS apps.
Since you have to rewrite the UI for iOS, I'm not sure whether it's still possible to use Processing the same way.
If that's not possible, I wonder whether it would be possible (or a good idea) to call loadPixels() at the end of the draw() function, then read all the pixel values and write them to an iOS UI element.
Would doing that every single frame use too much CPU, or could this be a solution if there's no other way?
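Something like this rough sketch is what I have in mind; the FrameListener interface is something I would define myself so the native layer can consume each frame:

import processing.core.PApplet;

public class PixelBridgeSketch extends PApplet {
    // Made-up interface: hands a finished frame to native UI code.
    public interface FrameListener { void onFrame(int[] argbPixels, int w, int h); }
    private FrameListener listener;

    public void setFrameListener(FrameListener l) { listener = l; }

    public void settings() { size(320, 480); }

    public void draw() {
        background(0);
        ellipse(mouseX, mouseY, 40, 40);
        loadPixels(); // fills the pixels[] array with the finished frame
        if (listener != null) {
            int[] copy = new int[pixels.length];
            System.arraycopy(pixels, 0, copy, 0, pixels.length);
            listener.onFrame(copy, width, height); // one ARGB int per pixel
        }
    }
}

That copies width * height ints on every frame, which is exactly what makes me worry about the CPU cost.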
Of course, that would only give me UI output from Processing. I would still have to get touch events into Processing somehow if I want to handle them there.
In jQuery I can not only register a callback for an event with $("#myButton").click(myFunction); but also simulate the event with $("#myButton").click();. When you call click() without any arguments, the event is triggered on that DOM element instead of a callback being registered for that DOM element and that event.
Is there a way in Processing to do something like that?
If so, I could get touch events from Multi-OS Engine and then pass them to Processing.
You can think of Processing as actually being two things: it's a library, and it's a set of tools that handle exporting for you.
If you're using the Processing editor, then you're using the tools that handle exporting for you. You can deploy as a Java application, or as an Android app, or even as JavaScript through Processing.js. These tools take your Processing code and convert it into the format needed to deploy your code.
However, you can also use Processing as a Java library, just like any other Java library. You do this by simply adding Processing's jars to your classpath, and then you can call Processing functions exactly as you would with any other library. If you do this, then you're in charge of writing your code and deploying it yourself. But it's certainly possible to use Processing as a Java library to draw to an image, and then draw that image to a native component.
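To make that concrete, here is a minimal sketch, assuming Processing 3's core.jar is on the classpath. Since your code owns the PApplet instance, it can also call plain methods on it to feed in input, which is the closest analogue I know of to jQuery's argument-less .click(); the injectTouch() method is a made-up name for this example:

import processing.core.PApplet;

public class LibrarySketch extends PApplet {
    private float touchX = -1, touchY = -1;

    public void settings() { size(400, 400); }

    public void draw() {
        background(32);
        if (touchX >= 0) ellipse(touchX, touchY, 50, 50);
    }

    // Plain Java method, not a Processing callback: native UI code
    // (e.g. a Multi-OS Engine touch handler) calls this to "trigger"
    // a touch inside the sketch.
    public void injectTouch(float x, float y) {
        touchX = x;
        touchY = y;
    }

    public static void main(String[] args) {
        LibrarySketch sketch = new LibrarySketch();
        PApplet.runSketch(new String[]{"LibrarySketch"}, sketch);
        // keep the sketch reference around so other code can call injectTouch()
    }
}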
Where it gets tricky is that you can't simply run Java on iOS, so you can't directly write code that uses Processing as a Java library there. That's what RoboVM helped with. You might want to check out one of the alternatives mentioned in RoboVM's closing announcement:
Depending on where you are in the development of your apps, there are several options available to move forward, including tools that will help you port to Xamarin, and alternative Java SDKs which target iOS. In particular, libGDX has just announced their support for Intel’s Multi-OS Engine, which means there is an alternative for the majority of RoboVM’s active developers.
Another option you might consider is using Processing.js or p5.js to deploy as HTML and JavaScript. Then you could just visit your page in your phone's browser.
I am trying to develop a Java app that will run on a Raspberry Pi. The Raspberry Pi will be mounted on a vehicle, and I will know my position through a GPS device. To solve this, I've been thinking of a solution like this:
Use a WebView in my JavaFX app and use your JavaScript API to build a real-time turn-by-turn navigation app. However, I've seen that your web API is not as complete as the mobile platform APIs. My question is: is what I am trying to do feasible using your APIs? If so, could you please give me a brief description of how to do it?
Thanks!
The JavaScript API is not a turn-by-turn API; that is currently a bit too heavy for JavaScript to handle (it could be feasible, but it's not commercially attractive right now).
In theory you could integrate directly with the C++ code of the SDK, as that should be able to run on Linux (it depends on the gcc version used and the OpenGL support offered; send an email to dev#telenav.com with your scenario and they will advise you).
Or, if you can run Android on the device, you can use the Android SDK directly.
Is it possible to run a Python script inside Java in an Android app?
The main app will be in Java, but some of the cryptography should be done in Python.
Is it possible to do this?
Running a Python script inside an Android app is not practical at the moment, but what you can do is create an HTTP web service that interprets the Python and sends the results back to the Android application.
Then it's just an Android app communicating with an HTTP web service, which is simpler than packing an interpreter into the app.
This keeps the app lighter, too.
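As a sketch of the Android side, assuming a hypothetical service exposing POST /encrypt at https://example.com that runs the Python cryptography code and returns the result as plain text:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class CryptoClient {
    public static String encrypt(String plaintext) throws Exception {
        URL url = new URL("https://example.com/encrypt"); // hypothetical endpoint
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(plaintext.getBytes("UTF-8")); // send the data to encrypt
        }
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            StringBuilder result = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) result.append(line);
            return result.toString(); // the result computed in Python
        } finally {
            conn.disconnect();
        }
    }
}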
I am creating an app for Android using Kivy (Python) that will send my location to an email address. But Kivy is not able to do this by itself, as GPS is not directly supported. This is what I have thought of as my next step, but I need help:
Is there any CLI command that can give me my location? I would run that command using os.system(command), the basic way of running commands from Python.
Is there any executable available, like a jar file or a JavaScript file, that I can run from my code and that gives me my current position using GPS?
Any ready-made Kivy or Java code that I can test on my phone?
Kivy uses python-for-android to compile APKs for you. It also provides pyjnius to wrap Android/Java API calls, which lets you access things like the Location API. The Plyer project is a cross-platform layer that uses pyjnius for its Android part. There is a GPS example in Plyer which you can compile and use on Android if you have Kivy and Buildozer set up properly, or you can read the source code of Plyer's GPS-for-Android implementation to make your own calls via pyjnius.
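For orientation, the plain-Java Location API that pyjnius and Plyer wrap looks roughly like this sketch (the real Plyer code is structured differently, and the app needs the ACCESS_FINE_LOCATION permission in its manifest):

import android.content.Context;
import android.location.Location;
import android.location.LocationListener;
import android.location.LocationManager;
import android.os.Bundle;

public class GpsHelper {
    public static void startUpdates(Context context) {
        LocationManager lm =
                (LocationManager) context.getSystemService(Context.LOCATION_SERVICE);
        lm.requestLocationUpdates(LocationManager.GPS_PROVIDER,
                1000, // minimum time between updates, in ms
                1,    // minimum distance between updates, in meters
                new LocationListener() {
                    @Override public void onLocationChanged(Location loc) {
                        double lat = loc.getLatitude();  // your position
                        double lon = loc.getLongitude(); // arrives here
                    }
                    @Override public void onStatusChanged(String p, int s, Bundle b) {}
                    @Override public void onProviderEnabled(String p) {}
                    @Override public void onProviderDisabled(String p) {}
                });
    }
}

Plyer's GPS facade makes these same calls for you through pyjnius, so you can stay in Python.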
My problem is that I'm using parse.com to upload files, and I can't simply create a plain Java project to do so, because it requires an Android context:
Parse.initialize(this, "", "");
I was wondering if there is a workaround to access my computer's files through an Android emulator. I know the actual app technically wouldn't have any idea about my files, but the emulator is still running on my computer. Or is the emulator completely independent, so it near-perfectly imitates the real thing, and this would basically be impossible?
If the answer is no, what can I do aside from getting a phone and putting the files onto the phone?
You can use the Parse.com REST API to upload from anywhere.
The Parse API for Android is just a wrapper for this REST API.
There are also some existing pure-Java wrappers for the REST API by third parties; see the Java section of the Parse API Library page. Almonds in particular looks like it gives you what you want.
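For example, uploading a file from plain Java with nothing but the REST API looks roughly like this (the app ID and REST key are placeholders for your own credentials, and hello.txt stands in for one of your files):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ParseUpload {
    public static void main(String[] args) throws Exception {
        byte[] data = Files.readAllBytes(Paths.get("hello.txt"));

        // POST /1/files/<name> is the documented REST upload endpoint.
        URL url = new URL("https://api.parse.com/1/files/hello.txt");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("X-Parse-Application-Id", "YOUR_APP_ID");
        conn.setRequestProperty("X-Parse-REST-API-Key", "YOUR_REST_KEY");
        conn.setRequestProperty("Content-Type", "text/plain");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(data);
        }
        // A successful upload returns 201 plus JSON with the file's name and URL.
        System.out.println("HTTP " + conn.getResponseCode());
        conn.disconnect();
    }
}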