The Three20 project is really nice for building iPhone apps quickly using common libraries:
https://github.com/facebook/three20
Is there anything like this for Android?
Not exactly, but having worked at a company with a partially Three20-based iPhone app developed in parallel with the Android version, I'd say you get about 50% of what 320 does right out of the platform on Android, minus a little polish. For example, 320's Navigator and TextEditor are basically baked in on Android - the platform's native text editing components can stretch dynamically on their own, and task navigation and back-button history are handled automatically, with URL handling baked into the intent filter and resolution system.
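For illustration, here's a minimal sketch of that URL-style navigation on Android. The "myapp://profiles/42" scheme is made up for the example; any Activity whose manifest declares a matching intent filter gets launched, and the platform manages the back stack for you:

    import android.content.Context;
    import android.content.Intent;
    import android.net.Uri;

    public class UrlNavigation {
        // Hypothetical example: opens whatever screen is registered
        // for this URI via an intent filter in the manifest.
        public static void openProfile(Context context) {
            Intent intent = new Intent(Intent.ACTION_VIEW, Uri.parse("myapp://profiles/42"));
            context.startActivity(intent); // back-button history handled by the platform
        }
    }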
You can get much of the rest of 320's functionality out of reusable libraries like ignition or GreenDroid (at least with regard to caching and image loading in lists), without the weight and lock-in a fairly monolithic framework like 320 can add to your app. There are a few bits that these solutions miss (Three20's zoomable photo viewer, for instance), but there are usually acceptable workarounds for quick usage (an Android WebView makes a pretty decent image viewer substitute, for instance - see the sketch below).
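A rough sketch of that WebView workaround, assuming you just need quick pinch-zoom viewing of a remote image (the URL parameter is a placeholder):

    import android.content.Context;
    import android.webkit.WebView;

    public class ImageViewerWorkaround {
        // Returns a WebView configured as a simple zoomable image viewer.
        public static WebView createImageViewer(Context context, String imageUrl) {
            WebView webView = new WebView(context);
            webView.getSettings().setBuiltInZoomControls(true);
            webView.getSettings().setUseWideViewPort(true);
            webView.getSettings().setLoadWithOverviewMode(true);
            webView.loadUrl(imageUrl);
            return webView;
        }
    }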
Once upon a time there was an SO wiki page gathering a bunch of those resources, but alas, that's gone away. You can get a pretty good set by looking for popular Android projects on GitHub or Google Code, though.
See Do android developers commonly use 3rd-party UI/networking libraries like Three20 on iPhone?
The reason most people would ever build their Android project in JavaFX is to have the same codebase across different platforms (such as iOS, desktop, Android, maybe even the web using Bck2Brwsr/teavm/doppio).
But my question is: is there any advantage in the JavaFX UI framework itself when compared to the Android UI framework?
I have never written even a hello-world application for Android, but I intend to do it now. So I am wondering whether having the code in JavaFX is worth the effort when I can develop directly on Android, apart from the benefit of portability.
This type of question might result in a subjective, opinionated answer, but I think it is a good question, so I will provide my assessment.
Having the same codebase across all those platforms is huge. Do not dismiss this. I'm using Gluon Mobile to port aspects of the Deep Space Trajectory Explorer (DSTE) to Android and iOS. As you can see from the video, it's an extremely complex application. There's no way I would rewrite that in native Android... it would be a no-go from a cost perspective.
Starting development from JavaFX makes it easier to build complex visuals, and I don't just mean traditional 2D GUI forms. Again looking at the DSTE, you will see we use Canvas to do dense renderings and JavaFX 3D along with the FXyz library to do 3D renders. These things are easy in JavaFX and, via Gluon, simply "just work" on Android/iOS. In fact it only took about a day to get those aspects of the DSTE code base working on a Pixel C tablet, most of which was spent getting the Gradle build set up properly. Now imagine having to port 3D code from JavaFX to a native framework. I'm a 3D guy and I still wouldn't try it.
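To give a flavor of what immediate-mode Canvas rendering looks like, here is a minimal self-contained sketch (not DSTE code, just an illustration of the kind of drawing that carries over to a Gluon-packaged mobile build):

    import javafx.application.Application;
    import javafx.scene.Scene;
    import javafx.scene.canvas.Canvas;
    import javafx.scene.canvas.GraphicsContext;
    import javafx.scene.layout.StackPane;
    import javafx.scene.paint.Color;
    import javafx.stage.Stage;

    public class CanvasSketch extends Application {
        @Override
        public void start(Stage stage) {
            Canvas canvas = new Canvas(400, 400);
            GraphicsContext gc = canvas.getGraphicsContext2D();
            // Draw a dense scatter of points - a stand-in for the kind of
            // dense rendering mentioned above.
            gc.setFill(Color.STEELBLUE);
            for (int i = 0; i < 10_000; i++) {
                gc.fillOval(Math.random() * 400, Math.random() * 400, 2, 2);
            }
            stage.setScene(new Scene(new StackPane(canvas)));
            stage.show();
        }

        public static void main(String[] args) {
            launch(args);
        }
    }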
Testing is so much easier on the desktop than on a mobile device. This doesn't mean testing is 100% on the desktop; sometimes something that works on the desktop "doesn't work" on the mobile platform and you have to tweak accordingly. However, you can save a LOT of time standing up the application in JavaFX knowing that 90% of it will work the same on your mobile device.
A word of advice though... remember that a desktop application is NOT a mobile application. You will be tempted to just "port" your desktop app to your device; I was, my first time. You can get into situations where the interfaces and layouts you designed for a desktop "work" on the mobile device but are not appropriate there, so usability goes down. Start slow when you port. Think about which aspects of your desktop workflow should be mobilized, and only port the things that absolutely belong in a mobile workflow. Save yourself some headaches.
I was looking for a way to develop iOS apps with Java. Especially Java because I want to be able to use Processing as a Java library.
First I found RoboVM, only to find out that Microsoft shut it down after buying Xamarin.
Then I found Intel's Multi-OS Engine, which is a technical preview right now. It looks like you can develop an Android app just like you used to do with Java and Android Studio. Then you rewrite the UI (and probably some iOS specific API calls) and build it for iOS. Either on a Mac with Xcode or in Intel's build cloud (which seems to be free).
Using Processing in Android apps is not a new thing (even if it would be new to me). But it looks like with iOS apps it's different.
Since you have to rewrite the UI for iOS, I'm not sure whether it's still possible to use Processing the same way.
If that's not possible, I wonder whether it would be possible (or a good idea) to call loadPixels() at the end of the draw() function, then read all the pixel values and write them to an iOS UI element.
Would it use too much CPU power to do that every single frame, or could this be a solution if there's no other way?
Of course that would only give me UI output from Processing. Somehow I still have to get touch events into Processing if I want to handle those events there.
In jQuery I can not only register a callback for an event with $("#myButton").click(myFunction); but also simulate an event with $("#myButton").click();. When you call the click function without any arguments the event is triggered on that DOM element instead of registering a callback for that DOM element and that event.
Is there a way in Processing to do something like that?
If so, I could get touch events from the Multi-OS Engine and then pass them to Processing.
You can think of Processing as actually being two things: it's a library, and it's a set of tools that handle exporting for you.
If you're using the Processing editor, then you're using the tools that handle exporting for you. You can deploy as a Java application, or as an Android app, or even as JavaScript through Processing.js. These tools take your Processing code and convert it into the format needed to deploy it.
However, you can also use Processing as a Java library, just like you would any other Java library. You do this by simply adding Processing's jars to your classpath, and then you can call Processing functions exactly like you can call any other library. If you do this, then you're in charge of writing your code and then deploying it. But it's certainly possible to use Processing as a Java library to draw to an image, and then draw that image to a native component.
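As a rough sketch of that idea, here is a sketch class that uses Processing's core library directly and reads back its pixels each frame, along the lines of the loadPixels() approach from the question. The class and the commented-out bridge call are hypothetical; only the Processing API calls are real:

    import processing.core.PApplet;

    public class EmbeddedSketch extends PApplet {

        @Override
        public void settings() {
            size(320, 240);
        }

        @Override
        public void draw() {
            background(0);
            ellipse(frameCount % width, height / 2f, 40, 40);

            // Grab the frame's pixels and hand them to whatever native
            // UI element hosts the output (bridge method not shown).
            loadPixels();
            int[] frame = pixels; // ARGB values, width * height of them
            // pushFrameToNativeView(frame);  // hypothetical bridge
        }

        public static void main(String[] args) {
            PApplet.main(EmbeddedSketch.class.getName());
        }
    }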
Where it gets tricky is that you can't just write Java for iOS, so you can't just write code that uses Processing as a Java library. That's what RoboVM helped with. You might want to check out one of the alternatives mentioned in RoboVM's closing announcement:
Depending on where you are in the development of your apps, there are several options available to move forward, including tools that will help you port to Xamarin, and alternative Java SDKs which target iOS. In particular, libGDX has just announced their support for Intel’s Multi-OS Engine, which means there is an alternative for the majority of RoboVM’s active developers.
Another option you might consider is using Processing.js or p5.js to deploy as html and JavaScript. Then you could just visit your webpage on your phone's browser.
We have spent many days of research but can't find a solution for the following project. We need to convert a Flex project to native iOS and Android apps. Because Flex supports Flash scripting, it easily implements some 3D effects like shadow, emboss, gradient, etc. Please check the SWF file we have: http://projects.zoondia.org/signfabcreator/signCreator.swf. We need to convert all these features into native iOS and Android apps. We have researched this area and found that we can implement most of the items except one icon. The fourth icon has some 3D effects, shadow effects, border, emboss, contour, gradient, etc. Can anybody check this and guide us on whether it can be implemented in iOS and Android? If yes, it would be helpful if anybody could give me a clue about implementing those in both Android and iOS.
Very interesting! But I'm afraid you have to reimplement all this functionality yourself. Don't be upset; there is good news for you: OpenGL ES and GLSL are extremely portable, so you can reuse 100% of your shaders. What is even better, now you can share the rest of the code too and stay native. Not long ago Intel announced the Multi-OS Engine. It enables you to develop native mobile applications for iOS and Android with Java. There are a bunch of tutorials inside the installation package, one of which is specifically dedicated to cross-platform OpenGL capabilities. Please check out my OpenGLBox sample.
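To hint at what "reuse 100% of your shaders" means in practice, here is a small sketch of compiling a GLSL shader from Java on the Android side. The helper class is mine; the GLSL source string you pass in is the part that can stay identical across the iOS and Android builds:

    import android.opengl.GLES20;

    public class ShaderUtil {
        // Compiles a GLSL shader (GLES20.GL_VERTEX_SHADER or GL_FRAGMENT_SHADER)
        // and returns its handle, or 0 on failure.
        public static int compileShader(int type, String glslSource) {
            int shader = GLES20.glCreateShader(type);
            GLES20.glShaderSource(shader, glslSource);
            GLES20.glCompileShader(shader);

            int[] compiled = new int[1];
            GLES20.glGetShaderiv(shader, GLES20.GL_COMPILE_STATUS, compiled, 0);
            if (compiled[0] == 0) {
                GLES20.glDeleteShader(shader);
                return 0;
            }
            return shader;
        }
    }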
I'm building a Java application that is a sort of analyzer for Android applications (APK files).
One of the main features that the app will offer is a "preview" of an Android layout, hence I need an API that receives an Android layout XML and a few configuration arguments such as screen resolution and theme, and returns the rendered layout as it would appear on a device running the application (graphical consistency with the real Android platform is important) along with position data of the View objects (in order to allow the user to select a view by clicking it). At the first stage, I don't expect the feature to reflect layout changes that are made programmatically, but only the View objects and resource graphics defined in the XML.
The idea I have in mind is to use the source code of a layout editor, such as ADT's editor or DroidDraw, and integrate it into my framework. But then I was wondering: maybe a better way would be to use the Android API itself to render the layout for me (this is better mainly because I won't need to rewrite my code for later versions of the OS).
So my question is: does the API allow such operations? Or is there an even better way?
Any suggestions and insights are welcomed :)
does the API allow such operations?
If by "java application" you mean an app that runs on your PC, then no. There's no straightforward way to even call anything in the Android API. I'd recommend you go with the first approach of integrating some existing source code.
That said, this is not a straightforward task either. Also, if you're analyzing an APK, you'll be working with binary XML files, not the easy-to-read plain text ones that you see when developing (which assumedly are what ADT/DroidDraw use). There may be source code out there to deal with that too.
You could also consider looking at the source for Android itself, but I imagine you'd have to re-implement a bunch of rendering code, so that's no easy way out either.
At the first stage, I don't expect the feature to reflect layout changes that are made programmatically, but only the View objects and resource graphics defined in the XML.
Reflecting the layout changes made programmatically will be virtually impossible to do in a reasonable way.
This task is definitely possible; however, it's not straightforward at all. I would suggest taking a look at Android Studio's source code; more specifically, there is a tool called LayoutLib.
This is the tool that the IDE's layout preview/editor uses to render layouts. You can use it to render layouts and views that you have the source code for. Unfortunately, it's not very well documented, so you have to figure out the usage from the IDE's sources.
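For comparison, rendering an inflated layout to a bitmap inside a running Android app is straightforward (the layout resource id below is a placeholder); the hard part of your problem is doing the equivalent off-device, which is exactly the gap LayoutLib fills:

    import android.content.Context;
    import android.graphics.Bitmap;
    import android.graphics.Canvas;
    import android.view.LayoutInflater;
    import android.view.View;

    public class LayoutRenderer {
        // Inflates an XML layout and draws it into a Bitmap at the given size.
        public static Bitmap render(Context context, int layoutResId, int width, int height) {
            View view = LayoutInflater.from(context).inflate(layoutResId, null);
            view.measure(View.MeasureSpec.makeMeasureSpec(width, View.MeasureSpec.EXACTLY),
                         View.MeasureSpec.makeMeasureSpec(height, View.MeasureSpec.EXACTLY));
            view.layout(0, 0, view.getMeasuredWidth(), view.getMeasuredHeight());

            Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
            view.draw(new Canvas(bitmap));
            return bitmap;
        }
    }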
The open source ItsNat has a way to render loaded XML Android layout files directly. It has a sample app that compares the standard (binary compiled) versions with the dynamic ones. My work with it shows that it does a good job reproducing all the quirks of LinearLayout, RelativeLayout, etc. https://github.com/jmarranz/itsnat
How do I create a J2ME app for cellphones with a GUI similar to the menus you see in Java games? I've tried MIDlets with NetBeans, but they only show you one GUI element at a time (textbox, choice, login, etc.).
And which Java IDE would you typically design these GUIs in, NetBeans or Eclipse? And is IntelliJ IDEA usable for this as well?
Do I have to write or get a library that draws GUI controls to the screen via bitmap functions... and keeps track of the keys pressed for focus?
Try LWUIT, a nice UI toolkit for J2ME:
https://lwuit.dev.java.net/
http://lwuit.blogspot.com/
You can also use minime: http://code.google.com/p/minime/
It's an open source GUI library for J2ME. miniME works at the canvas level (the lowest level in J2ME) to draw every control, so your UI will look exactly the same whatever handset it runs on. Other advantages are:
- miniME uses its own event loop to manage user-triggered events (button pressed, softkeys, ...), so your application will "behave" the same whatever the handset.
- miniME supports the concept of views and a stack of views, in order to make navigation between different views/screens very easy.
Here is an example: a view is what you have on the screen at a given moment (for example the main menu screen). To go to a sub-menu, you create a new view and, by calling a simple API, push it onto the stack of views. The previous view (the main menu) still exists, but is inactive. When the sub-menu view completes its work (for example, the user presses back or makes a selection), you can go back to the previous view by calling a pop API.
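The view-stack idea itself is simple. A generic sketch in plain Java of the concept (this is not the actual miniME API, just an illustration of push/pop navigation):

    import java.util.ArrayDeque;
    import java.util.Deque;

    public class ViewStack<V> {
        private final Deque<V> stack = new ArrayDeque<V>();

        // Push a new view (e.g. a sub-menu); it becomes the active screen.
        public void push(V view) {
            stack.push(view);
        }

        // Pop the current view (e.g. the user pressed back); the previous
        // view, left intact underneath, becomes active again.
        public V pop() {
            stack.pop();
            return stack.peek();
        }

        public V current() {
            return stack.peek();
        }
    }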
Your question is a bit vague to give a specific answer, but you might want to check out LWUIT or Polish; you can develop with both in either Eclipse or NetBeans.
As far as designing GUIs goes, neither IDE will help from a visual perspective. J2ME UI development is all done in code; beyond creating any initial graphics in a proper graphics editor, you don't get to see your output until you test.
Read up on the LCDUI package documentation which explains how the UI classes work and the differences between the 'High-level' and 'low-level' APIs.
I can't comment on which IDE to use, but I do know that to create a custom UI (like the ones you see in J2ME games), you have to explicitly draw the GUI controls yourself.
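A bare-bones MIDP Canvas skeleton for that approach might look like this (the menu items and colors are placeholders; the rest is standard LCDUI low-level API):

    import javax.microedition.lcdui.Canvas;
    import javax.microedition.lcdui.Font;
    import javax.microedition.lcdui.Graphics;

    public class MenuCanvas extends Canvas {
        private final String[] items = { "New Game", "Options", "Exit" };
        private int selected = 0;

        protected void paint(Graphics g) {
            g.setColor(0x000000);
            g.fillRect(0, 0, getWidth(), getHeight());
            g.setFont(Font.getDefaultFont());
            for (int i = 0; i < items.length; i++) {
                // Highlight the focused item; the app tracks focus itself.
                g.setColor(i == selected ? 0xFFCC00 : 0xFFFFFF);
                g.drawString(items[i], getWidth() / 2, 30 + i * 20,
                             Graphics.HCENTER | Graphics.TOP);
            }
        }

        protected void keyPressed(int keyCode) {
            int action = getGameAction(keyCode);
            if (action == UP) {
                selected = (selected + items.length - 1) % items.length;
            } else if (action == DOWN) {
                selected = (selected + 1) % items.length;
            }
            repaint(); // redraw with the new selection highlighted
        }
    }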
Beware that you may need to customize the GUI depending on the target phones. You have to cater for different screen sizes, keypad configurations, default themes, etc. This would probably mean that you need different builds for things like different screen sizes, which would drive up your Java Verified certification costs (if you need it).
You may be able to find a set of nice looking UI controls that you can buy online and use (try J2ME Polish). The easy way out of course, is to use default J2ME controls :)
Links to many j2me GUI libraries: link1, link2
I know that Kuix is not bad and free - watch the demo.
But I prefer to make my own GUI elements - this is much more flexible (but takes some time).
As for an IDE - you may want to make some kind of GUI-editor tool, construct the interface in it, save the result to a file, and read it from your app.
It's way too cumbersome to write your own GUI, especially since there are so many available these days. If you're familiar with desktop development in VB.NET and C#, you might find "J2ME GUI" easy to use. You can download it from http://www.garcer.com/. It has a similar feel and is easy to learn. This is the kind of GUI library that I expected to come standard with MIDP2 when I started mobile development; it would have solved a lot of issues.
If you are familiar with web technologies, then you can use the KUIX framework (kalmeo.org/home/index), which has XML and CSS support. In its place you can also use the Polish framework (www.j2mepolish.org); it also uses XML, in an easier way than Kalmeo's KUIX framework.