I was looking for a way to develop iOS apps with Java - specifically Java, because I want to be able to use Processing as a Java library.
First I found RoboVM, only to find out that Microsoft shut it down after they bought Xamarin.
Then I found Intel's Multi-OS Engine, which is a technical preview right now. It looks like you can develop an Android app just like you used to do with Java and Android Studio. Then you rewrite the UI (and probably some iOS specific API calls) and build it for iOS. Either on a Mac with Xcode or in Intel's build cloud (which seems to be free).
Using Processing in Android apps is not a new thing (even if it would be new to me). But it looks like with iOS apps it's different.
Since you have to rewrite the UI for iOS, I'm not sure if it's still possible to use Processing the same way.
If that's not possible, I wonder whether it would be possible/a good idea to call loadPixels() at the end of the draw() function, then read all the pixel values and write them to an iOS UI element.
Would it use up too much CPU power to do that every single frame, or could this be a solution if there's no other way?
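To make the idea concrete, here is a rough sketch of what I mean. loadPixels() and pixels[] are standard Processing API; copyToNativeView() is a made-up placeholder for whatever bridge would hand the frame to an iOS view:

    void setup() {
      size(400, 400);
    }

    void draw() {
      background(0);
      ellipse(mouseX, mouseY, 50, 50);

      loadPixels();  // copies the current frame into the pixels[] array
      int[] frame = new int[pixels.length];
      System.arraycopy(pixels, 0, frame, 0, pixels.length);
      // copyToNativeView(frame, width, height);  // hypothetical call handing the frame to an iOS UI element
    }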
Of course that would only give me UI output from Processing. Somehow I still have to get touch events into Processing if I want to handle those events there.
In jQuery I can not only register a callback for an event with $("#myButton").click(myFunction); but also simulate an event with $("#myButton").click();. When you call the click function without any arguments, the event is triggered on that DOM element instead of a callback being registered for it.
Is there a way in Processing to do something like that?
If so, I could get touch events from the Multi-OS Engine and then pass them to Processing.
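For example, instead of simulating Processing's own mouse events, I imagine the host side could just call a plain method on the sketch - something like this, where onNativeTouch() is a method name I invented, not a Processing API:

    import processing.core.PApplet;

    // The Multi-OS Engine / native UI layer would call onNativeTouch() with the
    // touch coordinates, and the sketch reacts to them in draw().
    public class TouchBridgeSketch extends PApplet {
      float touchX, touchY;
      boolean touched;

      public void settings() {
        size(400, 400);
      }

      // Called from the host UI code whenever a native touch event arrives.
      public void onNativeTouch(float x, float y) {
        touchX = x;
        touchY = y;
        touched = true;
      }

      public void draw() {
        background(0);
        if (touched) {
          ellipse(touchX, touchY, 40, 40);
        }
      }
    }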
You can think of Processing as actually being two things: it's a library, and it's a set of tools that handle exporting for you.
If you're using the Processing editor, then you're using the tools that handle exporting for you. You can deploy as a Java application, as an Android app, or even as JavaScript through Processing.js. These tools take your Processing code and convert it into the format needed to deploy your code.
However, you can also use Processing as a Java library, just like you would any other Java library. You do this by simply adding Processing's jars to your classpath, and then you can call Processing functions exactly like you can call any other library. If you do this, then you're in charge of writing your code and then deploying it. But it's certainly possible to use Processing as a Java library to draw to an image, and then draw that image to a native component.
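For example, with Processing 3 you can add core.jar to your classpath, extend PApplet, and launch the sketch yourself. A minimal setup looks roughly like this:

    import processing.core.PApplet;

    // Minimal Processing-as-a-library setup (Processing 3 style): core.jar is on the
    // classpath, and you launch and deploy the program yourself.
    public class LibrarySketch extends PApplet {

      public void settings() {
        size(400, 400);
      }

      public void draw() {
        background(32);
        ellipse(mouseX, mouseY, 50, 50);
      }

      public static void main(String[] args) {
        PApplet.main("LibrarySketch");
      }
    }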
Where it gets tricky is that you can't just write Java for iOS, so you can't just write code that uses Processing as a Java library. That's what RoboVM helped with. You might want to check out one of the alternatives mentioned in RoboVM's closing announcement:
Depending on where you are in the development of your apps, there are several options available to move forward, including tools that will help you port to Xamarin, and alternative Java SDKs which target iOS. In particular, libGDX has just announced their support for Intel’s Multi-OS Engine, which means there is an alternative for the majority of RoboVM’s active developers.
Another option you might consider is using Processing.js or p5.js to deploy as html and JavaScript. Then you could just visit your webpage on your phone's browser.
Related
I am trying to develop a Java app that will run on a Raspberry Pi. The Raspberry Pi will be mounted on a vehicle, and I will know my position through a GPS device. To solve this, I've been thinking of a solution like this:
Use a WebView in my JavaFX app and use your JavaScript API to build a real-time turn-by-turn navigation app. However, I've seen that your web API is not as complete as the mobile platform APIs. My question is: Is what I am trying to do feasible using your APIs? If so, could you please give me a brief description of how to do it?
Thanks!
The JavaScript API is not a turn-by-turn API - that is currently a bit too heavy for JavaScript to handle (it could be feasible, but it's not commercially attractive right now).
In theory you could integrate directly with the C++ code of the SDK, as that should be able to run on Linux (it depends on the gcc version used and the OpenGL support offered - send an email to dev#telenav.com with your scenario and they will advise you).
Or if you can run Android on the device then you can use directly the Android SDK.
Today I was speaking with my PM. He said that the best way to solve the problem of "the same app working on iOS and Android" is to write Objective-C code for iOS and then use the same code in the Android app (https://developer.android.com/tools/sdk/ndk/index.html). That approach (in his opinion) will give us the DRY effect (one codebase to maintain).
I was so shocked that I almost didn't say anything. But after some time I thought about it and found some problems:
C++ != Objective-C. Is it possible to add Objective-C code as C++ code in Android?
(Let's say that #1 is possible.) How can I do layouts, activities, etc. in C++ for Android?
Also, when should / shouldn't we use the Android NDK?
The short answer is no, it's not possible. However, http://www.apportable.com claims to enable you to compile your iOS app for Android - thus enabling you to use all of the code in your iPhone app.
However, it doesn't work with every framework, but it does have hooks into the Android SDK so you can still access those components. Worth looking at and having a play with. I have, but only half-heartedly, and you'd have to build the iOS app from the outset with the plan to use http://www.apportable.com, as like I said it doesn't currently support all iOS SDKs and you'd need to work around that.
But that should answer your question.
As of 2016, Apportable is no longer an option; it appears Google killed the dream.
Objective-C code will not compile with the NDK. But check out http://www.apportable.com/ - it's a library that allows you to write code for Android in Objective-C. That could be what your boss was talking about.
C++ and Objective-C can coexist: Objective-C is built on top of a C compiler, so you can just name the files *.mm and write C/C++ code. Basically, what you could do is write the functions you want to share across platforms in C++ and use them in an Android project via JNI wrappers.
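A sketch of what the Java side of such a JNI wrapper might look like (the library name and method are illustrative, not a real API):

    // Java side of the JNI-wrapper approach: the shared logic is compiled with the
    // NDK into libsharedcore.so, and the Android code only sees these native methods.
    public class SharedCore {

      static {
        System.loadLibrary("sharedcore");
      }

      // Implemented in the shared C++ code (the same sources the iOS build uses).
      public static native double computeSomething(double input);
    }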
You cannot directly share code which uses iOS system frameworks (UIKit, CFNetwork, ...).
If you want to write code for whole apps once, you could give it a shot with apportable.com, like others have pointed out.
You can use the GNUstep Android toolchain to use model code based on Foundation and CoreFoundation in an Android app, and then write a new UI layer on top of that which interacts with the Objective-C model via native NDK calls (e.g. in Android Studio).
I've worked with Android in the past, but I haven't done anything super-advanced or anything like what I'm about to describe, so I need some guidelines as to the best approach/method before I proceed.
I'm not entirely sure how to google this, so it's best to explain.
I want to build an Android library project, preferably with the source undisclosed. I read this can be done as follows: create another jar that the Android library project references. However, I'm not sure if all of the source code can be kept private. If anyone can point me somewhere, that would be great.
Aside from that, the library needs to expose an API for any Android app to use, and some sort of event mechanism to broadcast an event when certain things happen (e.g. when the app is in the foreground, etc.).
A scenario would be:
1) User loads the app which has the library embedded
2) The embedded library detects that the app has loaded and 'sends an event' to the app
3) The app captures the event and does some stuff specific to the app + an API call to the library
I guess what I'm mostly interested in is figuring out the best way for the app to capture the callbacks once the library has sent an event, and how to reduce the burden on the developer of implementing what needs to be done when certain events are captured.
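To make the callback part concrete, something along these lines is roughly what I have in mind (all names here are made up):

    // Hypothetical shape of the library's public API and its event callbacks.
    public class MyLibrary {

      // The app implements this to receive events from the library.
      public interface EventListener {
        void onAppForeground();
        void onLibraryEvent(String eventName);
      }

      private EventListener listener;

      public void setEventListener(EventListener listener) {
        this.listener = listener;
      }

      // Part of the public API the app can call.
      public void doSomething() {
        // ...
      }

      // Called internally by the library when it detects e.g. the app coming to the foreground.
      void notifyForeground() {
        if (listener != null) {
          listener.onAppForeground();
        }
      }
    }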
Hope this makes sense.
The Three20 project is really nice for building iPhone apps quickly using common libraries:
https://github.com/facebook/three20
Is there anything like this for Android?
Not exactly, but having worked at a company with a partially Three20-based iPhone app developed in parallel with the Android version, I think you get about 50% of what 320 does right out of the platform on Android, minus a little polish. For example, 320's Navigator and TextEditor are basically baked in on Android - the platform's native text editing components can stretch dynamically on their own, and task navigation and back-button history are handled automatically on Android, with URL handling baked into the intent filter and resolution system.
You can get much of the rest of 320's functionality out of reusable libraries like ignition or GreenDroid (at least with regard to caching and image loading in lists), without the weight and lock-in a fairly monolithic framework like 320 can add to your app. There are a few bits that these solutions miss (Three20's zoomable photo viewer, for instance), but there are usually acceptable hackarounds for quick usage (an Android WebView makes a pretty decent image viewer substitute, for instance).
Once upon a time there was an SO wiki page gathering a bunch of those resources, but alas, that's gone away. You can get a pretty good set by looking for popular Android projects on GitHub or Google Code, though.
See Do android developers commonly use 3rd-party UI/networking libraries like Three20 on iPhone?
Processing has Android support, and it seems to be pretty awesome from my 10 minutes of playing with it. But I would like to make a regular (non-graphics) application like a Twitter feed reader or something. So is there something like Processing that can do regular apps? Besides Titanium...
Basically I am looking for anything that will make coding for Android easier. Processing was so easy to get working that I was very happy with it, but it is for graphics only. Titanium didn't give me the same wow factor, and it isn't open, so that kind of takes away from it. What other tools are out there?
I'm going to give you the answer you are looking for and some advice.
Processing can do ANY of the things you are thinking about doing. If you want textboxes etc., you can use the ControlP5 library. It's great. If you are an expert at Processing and just want to port your Processing code over to Android, Processing for Android is great.
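From memory, adding a text input with ControlP5 looks roughly like this (check the ControlP5 docs for the exact names):

    import controlP5.*;

    ControlP5 cp5;

    void setup() {
      size(400, 400);
      cp5 = new ControlP5(this);
      // a simple text input box
      cp5.addTextfield("username")
         .setPosition(20, 100)
         .setSize(200, 40);
    }

    void draw() {
      background(0);
    }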
But that's not what you want to do. You want to write an application. And you want to write it on Android. There are frameworks designed to give you a leg up in writing cross-platform mobile apps, but nothing is going to make writing an Android application easier than learning Java and learning how the Android stack works. It's actually really well designed and easy to follow once you start grokking "intents" and "bundles".
At the end of the day, you might even want to scale back a little further. Are you trying to write an application that needs to be used without internet access, or one that uses super special phone APIs? If you aren't, maybe you should try just writing your app as an HTML5/CSS3 website.
You can do plenty of input-based stuff with Processing. The original mouse events work as specified, except that they are triggered by touches, and you can also access things like pressure and multiple fingers down. The hardware keys are also supported.
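For example, a minimal sketch using the regular mouse callbacks responds to single-finger touches in Android mode without any changes (this uses current Processing 3 / Android mode syntax; older versions set the size differently):

    // The regular mouse callbacks are driven by touches in Android mode,
    // so this sketch draws wherever you drag a finger.
    void setup() {
      fullScreen();
      background(0);
    }

    void draw() {
    }

    void mouseDragged() {
      ellipse(mouseX, mouseY, 60, 60);
    }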