Unity: use native Objective-C and Java - java

I am working on a Unity project that will be exported to iOS and Android.
I have native Objective-C and Android Java code consisting of 2 views (basically a login flow). Both have the same logic. What I need to do is the following:
When the app runs, it will run the native views and code (Objective-C or Java, depending on the platform).
Then the user will go through the flow, and once it is completed, I will need to report back to the Unity code and open a scene from it.
Any indication of how that can be achieved without having to add the native code after each export?
Because otherwise I would have to put my native code in every time I generate a new project.
I did some research and came across the "extern" capability, but can it be used to open views from storyboards or Android XML layouts, and to trigger logic?
Thank you
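For the "report back to Unity" half of this, one widely used mechanism on Android is UnityPlayer.UnitySendMessage; the sketch below is hedged, and the GameObject name, method name, and layout are hypothetical, not from the question (the Objective-C side has an equivalent C function, UnitySendMessage). Native sources placed under Assets/Plugins/Android and Assets/Plugins/iOS are typically copied into every exported project, which avoids re-adding them after each export.

```java
// Hedged sketch: the Java side of reporting a finished native login flow back
// to Unity. "LoginBridge" and "OnLoginComplete" are hypothetical names for a
// GameObject and a C# method in the Unity scene; R.layout.native_login is a
// hypothetical XML layout bundled with the plugin.
import android.app.Activity;
import android.os.Bundle;
import com.unity3d.player.UnityPlayer;

public class NativeLoginActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.native_login);
    }

    // Call this once the user completes the login flow.
    private void notifyUnity(String token) {
        // Unity routes this to OnLoginComplete(string) on the GameObject named
        // "LoginBridge"; that C# method can then load the next scene.
        UnityPlayer.UnitySendMessage("LoginBridge", "OnLoginComplete", token);
        finish(); // return control to the Unity activity
    }
}
```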

Related

What options are there to call Lua code within an Android project?

I want to take this open-source project, which is a build calculator for the online game Path of Exile, and port it to an Android app. The project, called Path of Building (PoB), is written fully in Lua and is released as a Windows application. I'm not sure to what extent it's possible to simply wrap their Lua code and just show it in an activity, but since the project gets updated often (just as much as the game, which is once every three months), I'd like to touch as little of their code as possible and hopefully just have to set it up on Android. Any help or input is appreciated.
You can run Lua code on Android using a library called luaj.
Take a look at the luaj site: http://www.luaj.org/luaj/3.0/README.html
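As a minimal illustration, here is a hedged sketch of basic luaj embedding; the script and chunk name are placeholders, and porting a full project like PoB would need considerably more glue.

```java
// Minimal luaj embedding (assumes the luaj-jse jar is on the classpath).
import org.luaj.vm2.Globals;
import org.luaj.vm2.LuaValue;
import org.luaj.vm2.lib.jse.JsePlatform;

public class LuaRunner {
    public static void main(String[] args) {
        // Standard globals expose the usual Lua standard libraries to scripts.
        Globals globals = JsePlatform.standardGlobals();
        // Compile a chunk from source; globals.loadfile("script.lua") works the same way.
        LuaValue chunk = globals.load("return 1 + 2", "calc");
        LuaValue result = chunk.call();
        System.out.println(result.toint()); // prints 3
    }
}
```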

Run Java Android application within Xamarin.Android app

I'm working on a Xamarin.Forms app in C# for iOS and Android. I also have the source code for another app that's written for Android devices in Java. This other app is a fully developed app on its own and has its own pages and UI designed already. Would it be possible to run this app within my project, at least just for Android?
For example, let's say I have a button in my app that, when clicked, leads to a new page within the app that starts running the Java app. From there, it's as if users are simply using the Java app, only that there's a back button on the top of the page that leads the users back to the main page of the Xamarin.Android app.
I've looked into Binding a Java Library but it seems like that's primarily for Java libraries rather than full-on applications.
From what I could tell, you can look into creating an AAR file of the existing Java code:
https://developer.android.com/studio/projects/android-library
Then add the new AAR (with the Java code & resources) to your existing Xamarin project:
https://learn.microsoft.com/en-us/xamarin/android/platform/binding-java-library/binding-an-aar
Using Intents on both sides:
https://developer.android.com/guide/components/intents-filters
https://developer.xamarin.com/api/type/Android.Content.Intent/
I would start with a simple proof of concept:
- Write a simple Xamarin app with a button to launch the Java Android Activity via an intent.
- Write an Android app with a button to launch the Xamarin activity via an intent (the back stack for the integrated app may be tricky).
- Build an AAR out of the Java Android side.
- Integrate the AAR into the Xamarin app.
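A hedged sketch of what the Java entry Activity inside the AAR might look like (package, class, layout, and extra names are hypothetical): the Xamarin side would start it with StartActivityForResult and read the returned extras.

```java
// Hedged sketch of the Java-side entry Activity that ships inside the AAR.
package com.example.legacy;

import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;

public class LegacyEntryActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.legacy_main); // hypothetical layout bundled in the AAR
    }

    // Call this when the embedded app's flow is finished.
    private void reportDoneToHost(String payload) {
        Intent data = new Intent();
        data.putExtra("legacy_result", payload); // hypothetical extra key
        setResult(RESULT_OK, data);
        finish(); // pops back to the Xamarin activity on the back stack
    }
}
```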

React Native - Navigating from a React Native screen to a native Java (Android Studio) screen

I have a React Native project integrated with an Android Studio Java project (meaning part of my application works with Java and part with React Native), and I need to be able to navigate from one of my React Native screens to one of the Java screens. I've got the navigation inside the React Native part covered (I'm using the react-navigation package), but I can't get my head around navigating to a Java screen.
So far, I've looked everywhere on the Internet and in the package's documentation, but it seems there's simply no information about this, or maybe I missed something. Either way, can anyone point me in the right direction on how to navigate between React Native and native screens?
React Native allows communication between Java and JS. To achieve what you want, you need to call a Java method from JS, and in that method you can use an intent to open a new screen. It's too much to explain here and I probably wouldn't do it justice, but you can go through the docs:
Native Modules (Android)
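A hedged sketch of such a native module follows; the module name, method, and Activity class are hypothetical. The module must also be registered in a ReactPackage, and JS would then call NativeModules.ScreenNavigator.openNativeScreen().

```java
// Hedged sketch: a React Native native module that opens a native Java screen.
package com.example.app;

import android.content.Intent;
import com.facebook.react.bridge.ReactApplicationContext;
import com.facebook.react.bridge.ReactContextBaseJavaModule;
import com.facebook.react.bridge.ReactMethod;

public class ScreenNavigatorModule extends ReactContextBaseJavaModule {
    public ScreenNavigatorModule(ReactApplicationContext reactContext) {
        super(reactContext);
    }

    @Override
    public String getName() {
        return "ScreenNavigator"; // exposed to JS as NativeModules.ScreenNavigator
    }

    @ReactMethod
    public void openNativeScreen() {
        // NativeScreenActivity is a hypothetical Activity from the Java part of the app.
        Intent intent = new Intent(getReactApplicationContext(), NativeScreenActivity.class);
        intent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK); // needed when starting from a non-Activity context
        getReactApplicationContext().startActivity(intent);
    }
}
```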

Processing on iOS with Intel's Multi-OS Engine

I was looking for a way to develop iOS apps with Java, specifically Java because I want to be able to use Processing as a Java library.
First I found RoboVM, only to find out that Microsoft shut it down after buying Xamarin.
Then I found Intel's Multi-OS Engine, which is a technical preview right now. It looks like you can develop an Android app just like you used to do with Java and Android Studio. Then you rewrite the UI (and probably some iOS specific API calls) and build it for iOS. Either on a Mac with Xcode or in Intel's build cloud (which seems to be free).
Using Processing in Android apps is not a new thing (even if it would be new to me). But it looks like with iOS apps it's different.
Since you have to rewrite the UI for iOS, I'm not sure whether it's still possible to use Processing the same way.
If that's not possible, I wonder whether it would be possible (or a good idea) to call loadPixels() at the end of the draw function, then read all the pixel values and write them to an iOS UI element.
Would it use up too much CPU power to do that every single frame, or could this be a solution if there's no other way?
Of course, that would only give me UI output from Processing. Somehow I still have to get touch events into Processing if I want to handle those events there.
In jQuery I can not only register a callback for an event with $("#myButton").click(myFunction); but also simulate an event with $("#myButton").click();. When you call the click function without any arguments, the event is triggered on that DOM element instead of a callback being registered for that element and event.
Is there a way in Processing to do something like that?
If so, I could get touch events from the Multi-OS Engine and then pass them to Processing.
You can think of Processing as actually being two things: it's a library, and it's a set of tools that handle exporting for you.
If you're using the Processing editor, then you're using the tools that handle exporting for you. You can deploy as a Java application, as an Android app, or even as JavaScript through Processing.js. These tools take your Processing code and convert it into the format needed to deploy it.
However, you can also use Processing as a Java library, just like you would any other Java library. You do this by simply adding Processing's jars to your classpath, and then you can call Processing functions exactly like you can call any other library. If you do this, then you're in charge of writing your code and then deploying it. But it's certainly possible to use Processing as a Java library to draw to an image, and then draw that image to a native component.
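As a rough illustration of that library usage, here is a hedged sketch assuming Processing 3 with core.jar on the classpath; the sketch class and its contents are mine, not the answer's.

```java
// Hedged sketch: using Processing as a plain Java library by extending PApplet
// and launching it yourself, instead of relying on the editor's export tools.
import processing.core.PApplet;

public class MySketch extends PApplet {
    @Override
    public void settings() {
        size(400, 400);
    }

    @Override
    public void draw() {
        background(0);
        ellipse(mouseX, mouseY, 40, 40);
        // loadPixels() / pixels[] would be available here if you wanted to copy
        // the frame into another UI toolkit's image, as discussed in the question.
    }

    public static void main(String[] args) {
        PApplet.main("MySketch"); // fully qualified class name of the sketch
    }
}
```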
Where it gets tricky is that you can't just write Java for iOS, so you can't just write code that uses Processing as a Java library. That's what RoboVM helped with. You might want to check out one of the alternatives mentioned in RoboVM's closing announcement:
Depending on where you are in the development of your apps, there are several options available to move forward, including tools that will help you port to Xamarin, and alternative Java SDKs which target iOS. In particular, libGDX has just announced their support for Intel’s Multi-OS Engine, which means there is an alternative for the majority of RoboVM’s active developers.
Another option you might consider is using Processing.js or p5.js to deploy as html and JavaScript. Then you could just visit your webpage on your phone's browser.

How do "add-ons" work for native apps?

I am designing an app for Android, iPhone/iPad and Windows Phone using GWT and PhoneGap. GWT will allow me to write the entire app in Java (my strong suit, unlike JavaScript or CSS) and will translate it into cross-browser JavaScript/AJAX. PhoneGap will then wrap that resultant JavaScript and turn it into a native app for each of the three platforms mentioned above.
I would like to have a "plugin-oriented architecture", whereby users can optionally purchase (or qualify for through other means) "add-ons" (plugins/extensions) that will enhance the functionality of the app.
Normally I would accomplish this by using the Java Simple Plugin Framework (JSPF) and allow users to download plugin JARs as they paid for them. These plugin JARs would then be added to the classpath of the main app so that the next time it starts and scans the classpath for plugins, it finds them and loads them.
Is this possible with native apps? I don't believe I can deploy anything other than APK, IPA and XAP files (Android, iPhone and WinPhone respectively) to these marketplaces.
In Java-land, this would be like having to download a "base" app in the form of an executable JAR (containing its own main method), and then having to download a "plugin" app that is also an executable JAR, and somehow getting the two to behave like a normal plugin architecture (which would be one executable JAR base app plus one or more non-executable plugin JAR libs).
So I ask: how do add-ons work for native apps from a deployment/download perspective? How do you get 2 or more APKs/IPAs/XAPs to communicate with each other on the client-side? If not possible, how do native app developers handle add-ons (I know they exist, I've seen them!)? Thanks in advance!
As for iPhone: the only way is to implement the additional features and keep them disabled until the user makes an in-app purchase to unlock them.
On Android you can have apps communicate with each other, so the user can just install add-on apps that provide only the add-on functionality. For more detail on this, see: android communication between two applications
WP8: I don't have experience with it yet.
Although I am not a core Android developer, here are some suggestions from my experience with Android so far:
One possible solution for Android is to use updates to the application. One way to achieve this is a background Service that checks for updates whenever the app starts; another is to use GCM (Google Cloud Messaging) to push update messages to the app. The second option is much better, as you can provide an instant update to the user on a per-device basis (a user can have multiple devices) as soon as he/she has bought your add-on feature. Once the user has received the update message, you can download the whole app with the new feature and update it on the device. Of course, in this case you need to back up the app's existing data and restore it after the updated app is installed.
I don't know whether this can be done, but in my opinion it would also be a good option if you could provide a legitimate user who has bought a particular add-on with an updated app through the Android Market.
Another way is to create each add-on as a Service and let the main app detect (or bind to) those services at startup; if they are available, your app can communicate with them easily. You can even launch the UI portion of a newly deployed add-on from your main app by using Intents and BroadcastReceivers.
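A hedged sketch of that discovery step (the action string is hypothetical; each add-on APK would declare a Service with a matching intent filter in its manifest):

```java
// Hedged sketch: discovering separately installed add-on APKs from the main app.
import android.content.Context;
import android.content.Intent;
import android.content.pm.PackageManager;
import android.content.pm.ResolveInfo;
import java.util.List;

public class AddOnLocator {
    // Hypothetical action string that add-on services filter on.
    public static final String ADDON_ACTION = "com.example.myapp.ADDON";

    public static List<ResolveInfo> findInstalledAddOns(Context context) {
        PackageManager pm = context.getPackageManager();
        // Every installed APK whose service declares ADDON_ACTION shows up here;
        // the main app can then bindService() to each one it finds.
        return pm.queryIntentServices(new Intent(ADDON_ACTION), 0);
    }
}
```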
You may also find this useful as far as Android is concerned:
However, there are ways for an application to share data with other applications and for an application to access system services: It's possible to arrange for two applications to share the same Linux user ID, in which case they are able to access each other's files. To conserve system resources, applications with the same user ID can also arrange to run in the same Linux process and share the same VM (the applications must also be signed with the same certificate).
Hope this gives some useful information to you.
