How does Codename One work for device-specific features?

Maybe I'm just having a hard time wrapping my head around it. I kind of understand how it works from the research I've done, especially this question: How does Codename One work?
But what if I want to intercept incoming texts in Android? How does that affect the iOS app? If I want to use Vimeo's API to upload videos (I have an Android app that does it), will I have to get the source code and add that separately?

Incoming texts can't be intercepted in iOS as far as I know.
For Android you can use intents to intercept incoming texts, but that's a bit of a pain: you would need to write Android native code for it, which you can do with native interfaces in Codename One.
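For illustration, the Codename One side of such a native interface might look like this (the interface and method names here are hypothetical; the actual SMS interception logic would live in the generated Android native stub):

    import com.codename1.system.NativeInterface;

    // Hypothetical interface: the Android implementation would register a
    // BroadcastReceiver for incoming SMS; the iOS stub would be a no-op.
    public interface SmsInterceptor extends NativeInterface {
        void startListening();
    }

You would then obtain an instance with NativeLookup.create(SmsInterceptor.class) and check isSupported() before calling it, so the iOS build degrades gracefully.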
I'm not very familiar with the Vimeo API, but if it's a REST API then you can pretty much map to it with Codename One's networking API using NetworkManager, ConnectionRequest, etc.
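As a rough sketch of that mapping (the endpoint and token handling are assumptions for illustration, not the actual Vimeo upload flow):

    import com.codename1.io.ConnectionRequest;
    import com.codename1.io.NetworkManager;
    import com.codename1.io.Util;
    import java.io.IOException;
    import java.io.InputStream;

    public class VimeoClient {
        // accessToken and the URL are placeholders for illustration.
        public static void fetchMyVideos(String accessToken) {
            ConnectionRequest req = new ConnectionRequest() {
                @Override
                protected void readResponse(InputStream input) throws IOException {
                    String json = Util.readToString(input);
                    // parse the JSON response here
                }
            };
            req.setUrl("https://api.vimeo.com/me/videos");
            req.addRequestHeader("Authorization", "Bearer " + accessToken);
            NetworkManager.getInstance().addToQueueAndWait(req);
        }
    }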

Related

Codename One: How to stream live video to YouTube Live

I'm developing an app that needs to stream video captured from the smartphone's camera (on iPhones and Android phones) directly to YouTube Live.
I looked into Codename One's Capture.captureVideo(ActionListener response) method, which must wait for the video to be stopped and the file to be saved before the ActionListener is called. Obviously, this can't work, because the video has to be streamed to an output stream (to a URL given by the YouTube Live API) on a continuous basis. Is there any other way to accomplish this? (What about an unofficial API, like a method to override, to get the input stream from the camera?) If not, would Codename One consider providing this feature in a future version, since the market trend seems to be moving toward live video streaming apps?
If it cannot be done with Codename One's API, then the only way is to write native code for Android and iOS. I've read the article on integrating native APIs, which uses the Freshdesk API as an example, so any pointers on how to integrate the YouTube API for the purpose of streaming live video?
https://developers.google.com/youtube/v3/live/getting-started
https://developers.google.com/youtube/v3/live/libraries
https://developers.google.com/api-client-library/java/apis/youtube/v3
https://developers.google.com/youtube/v3/live/code_samples/
I don't see a REST API within the list of APIs, although there is a JavaScript API which you might use to implement this. Alternatively, you can use something like what was done with the Freshdesk API. You will need to embed the native view from the live broadcast; you can look at the implementation of Google Maps to see how we embedded that native widget.
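A rough sketch of what the Codename One side of that could look like (all names hypothetical; the per-platform implementations would create and return the actual broadcast/preview view):

    import com.codename1.system.NativeInterface;
    import com.codename1.ui.PeerComponent;

    // Hypothetical interface: the Android/iOS implementations would wrap
    // the platform's live-broadcast view in a PeerComponent.
    public interface LiveBroadcastNative extends NativeInterface {
        PeerComponent createBroadcastView(String streamUrl);
    }

The returned PeerComponent can then be placed in a Codename One layout like any other component, which is essentially the pattern the Google Maps integration uses.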

Processing on iOS with Intel's Multi-OS Engine

I was looking for a way to develop iOS apps with Java, specifically Java because I want to be able to use Processing as a Java library.
First I found RoboVM, only to find out that Microsoft shut it down after they bought Xamarin.
Then I found Intel's Multi-OS Engine, which is a technical preview right now. It looks like you can develop an Android app just like you used to do with Java and Android Studio. Then you rewrite the UI (and probably some iOS specific API calls) and build it for iOS. Either on a Mac with Xcode or in Intel's build cloud (which seems to be free).
Using Processing in Android apps is not a new thing (even if it would be new to me). But it looks like with iOS apps it's different.
Since you have to rewrite the UI for iOS, I'm not sure whether it's still possible to use Processing the same way.
If that's not possible, I wonder whether it would be possible/a good idea to call loadPixels() at the end of the draw function, then read all the pixel values and write them to an iOS UI element.
Would doing that every single frame use up too much CPU power, or could this be a solution if there's no other way?
Of course, that would only give me UI output for Processing. Somehow I still have to get touch events into Processing if I want to handle those events there.
In jQuery I can not only register a callback for an event with $("#myButton").click(myFunction); but also simulate an event with $("#myButton").click();. When you call the click function without any arguments the event is triggered on that DOM element instead of registering a callback for that DOM element and that event.
Is there a way in Processing to do something like that?
If so, I could get touch events from Multi-OS Engine and then pass them to Processing.
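Something like this is what I have in mind (untested; I'm assuming Processing 3's PApplet.postEvent and processing.event.MouseEvent can be used this way):

    import processing.core.PApplet;
    import processing.core.PConstants;
    import processing.event.MouseEvent;

    public class TouchBridge {
        // Forward a native touch from Multi-OS Engine to a running PApplet
        // by posting synthetic press/release events.
        public static void forwardTap(PApplet sketch, int x, int y) {
            long now = System.currentTimeMillis();
            sketch.postEvent(new MouseEvent(null, now, MouseEvent.PRESS,
                    0, x, y, PConstants.LEFT, 1));
            sketch.postEvent(new MouseEvent(null, now, MouseEvent.RELEASE,
                    0, x, y, PConstants.LEFT, 1));
        }
    }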
You can think of Processing as actually being two things: it's a library, and it's a set of tools that handle exporting for you.
If you're using the Processing editor, then you're using the tools that handle exporting for you. You can deploy as a Java application, as an Android app, or even as JavaScript through Processing.js. These tools take your Processing code and convert it into the format needed to deploy your code.
However, you can also use Processing as a Java library, just like you would any other Java library. You do this by simply adding Processing's jars to your classpath, and then you can call Processing functions exactly like you can call any other library. If you do this, then you're in charge of writing your code and then deploying it. But it's certainly possible to use Processing as a Java library to draw to an image, and then draw that image to a native component.
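For instance, a bare-bones Processing 3 sketch run as a plain Java program could look like this (the class name is made up; the loadPixels() call just marks where you would grab the frame for a native component):

    import processing.core.PApplet;

    public class EmbeddedSketch extends PApplet {
        @Override
        public void settings() {
            size(400, 400);
        }

        @Override
        public void draw() {
            background(0);
            ellipse(mouseX, mouseY, 40, 40);
            loadPixels();
            // pixels[] now holds the rendered frame; copy it into a
            // native image/UI element here if you're embedding.
        }

        public static void main(String[] args) {
            PApplet.main("EmbeddedSketch");
        }
    }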
Where it gets tricky is that you can't just write Java for iOS, so you can't just write code that uses Processing as a Java library. That's what RoboVM helped with. You might want to check out one of the alternatives mentioned in RoboVM's closing announcement:
Depending on where you are in the development of your apps, there are several options available to move forward, including tools that will help you port to Xamarin, and alternative Java SDKs which target iOS. In particular, libGDX has just announced their support for Intel’s Multi-OS Engine, which means there is an alternative for the majority of RoboVM’s active developers.
Another option you might consider is using Processing.js or p5.js to deploy as html and JavaScript. Then you could just visit your webpage on your phone's browser.

Is it possible to write an app for iOS and move Objective-C code into Android?

Today I was speaking with a PM. He said that the best way to solve the problem of "the same app working on iOS and Android" is to write Objective-C code for iOS and then use the same code in the Android app (https://developer.android.com/tools/sdk/ndk/index.html). That approach (in his opinion) will give us a DRY effect (one codebase to maintain).
I was so shocked that I almost didn't say anything. But after thinking about it for a while, I found some problems:
C++ != Objective-C. Is it possible to add Objective-C code as C++ code in Android?
(Let's say #1 is possible.) How can I do layouts, activities, etc. in C++ for Android?
Also, when should / shouldn't we use the Android NDK?
The short answer is no, it's not possible. However, http://www.apportable.com claims to enable you to compile your iOS app for Android, thus letting you reuse all of the code in your iPhone app.
It doesn't work with every framework, but it does have hooks into the Android SDK so you can still access those components. Worth looking at and having a play with. I have, but only half-heartedly, and you'd have to build the iOS app from the outset with the plan to use http://www.apportable.com, since, as I said, it doesn't currently support all iOS SDKs and you'd need to work around that.
But that should answer your question.
As of 2016, Apportable is no longer an option (more info); it appears Google killed the dream.
Objective-C code will not compile with the NDK. But check out http://www.apportable.com/: it's a library that allows you to write code for Android in Objective-C. That could be what your boss was talking about.
C++ and Objective-C can mix: Objective-C is built on top of a C compiler, so just name the files *.mm and write C/C++ code. Basically, what you could do is write the functions you want to share across platforms in C++ and use them in an Android project via JNI wrappers.
You cannot directly share code that uses iOS system frameworks (UIKit, CFNetwork, ...).
If you want to write the code for whole apps once, you could give apportable.com a shot, as others have pointed out.
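For the JNI-wrapper route, the Java side might look like this (the library and method names are made up; the C++ implementation would be compiled with the NDK into libsharedcore.so):

    public final class SharedCore {
        static {
            System.loadLibrary("sharedcore"); // loads libsharedcore.so
        }

        // Implemented in shared C++ code, exported as
        // Java_SharedCore_computeChecksum via a JNI wrapper.
        public static native int computeChecksum(byte[] data);
    }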
You can use the GNUstep Android toolchain to reuse model code based on Foundation and CoreFoundation in an Android app, and then write a new UI layer on top that interacts with the Objective-C model via native NDK calls (e.g. in Android Studio).

Android - library/app communication

I've worked with Android in the past, but I haven't done anything super-advanced or like what I'm about to describe, so I need some guidelines as to the best approach/method before I proceed.
I'm not entirely sure how to google this, so it's best to explain.
I want to build an Android library project, preferably with the source undisclosed. I read this can be done as follows: create another jar that the Android library project references. However, I'm not sure whether all of the source code can be kept private. If anyone can point me somewhere, that would be great.
Aside from that, the library needs to expose an API for any Android app to use, and some sort of event mechanism to broadcast an event when certain things happen (e.g. when the app is in the foreground).
A scenario would be:
1) User loads the app which has the library embedded
2) The embedded library detects that the app has loaded and 'sends an event' to the app
3) The app captures the event and does some stuff specific to the app + an API call to the library
I guess what I'm mostly interested in is figuring out the best ways for the app to capture the callbacks once the library has sent an event, while reducing the burden on the developer, who shouldn't have to spend too much time implementing what needs to happen when certain events are captured.
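To illustrate what I mean, something along these lines (all names made up):

    // Hypothetical shape of the library's public surface.
    public final class MyLib {

        public interface EventListener {
            void onAppForeground();
            void onAppBackground();
        }

        private static EventListener listener;

        // The app registers its callback once at startup; internally the
        // library could use Application.ActivityLifecycleCallbacks to
        // detect foreground/background transitions and notify `listener`.
        public static void init(android.content.Context context, EventListener l) {
            listener = l;
        }

        // Example of an API call the app can make back into the library.
        public static void trackEvent(String name) {
            // ...
        }
    }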
Hope this makes sense.

Processing for Android and regular input apps

Processing has Android support, and it seems pretty awesome from my 10 minutes of playing with it. But I would like to make a regular (non-graphics) application, like a Twitter feed reader or something. So is there something like Processing that can do regular apps? Besides Titanium...
Basically I am looking for anything that will make coding for Android easier. Processing was so easy to get working that I was very happy with it, but it is for graphics only. Titanium didn't give me the same wow factor, and it isn't open, so that kind of takes away from it. What other tools are out there?
I'm going to give you the answer you are looking for and some advice.
Processing can do ANY of the things you are thinking about doing. If you want textboxes etc., you can use the ControlP5 library. It's great. If you are an expert at Processing and just want to port your Processing code over to Android, Processing for Android is great.
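For instance, a minimal sketch following ControlP5's documented pattern (I haven't tested this exact snippet):

    import controlP5.*;

    ControlP5 cp5;

    void setup() {
      size(300, 200);
      cp5 = new ControlP5(this);
      cp5.addTextfield("username"); // a text input box
      cp5.addButton("submit");      // a clickable button
    }

    void draw() {
      background(40);
    }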
But that's not what you want to do. You want to write an application, and you want to write it on Android. There are frameworks designed to give you a leg up in writing cross-platform mobile apps, but nothing is going to make writing an Android application easier than learning Java and learning how the Android stack works. It's actually really well designed and easy to follow once you start grokking "intents" and "bundles".
At the end of the day, you might even want to scale back a little further. Are you trying to write an application that needs to work without internet access, or that uses special phone APIs? If you aren't, maybe you should try just writing your app as an HTML5/CSS3 website.
You can do plenty of input-based stuff with Processing. The original mouse events work as specified, except that they are delivered as touches, and you can also access things like pressure and multiple fingers down. The hardware keys are also supported.
