Custom Lock Screen Implementation Techniques - java

So, I have been exploring many similar questions across this website (this, this and this, and many more) from people wanting to implement their own custom lock screen (not talking about widgets). So far, two implementation techniques keep coming up.
1. Home screen replacement. In this technique it is suggested to create a home screen application where, after the unlock logic, the default home screen shows up. I believe that in this situation the developer has to disable the Home, Search, Menu and Back buttons while the lock screen is visible, and implement the screen off/on logic.
2. A regular application. In this technique a normal app is made where, after the unlock logic, the default screen shows up. Again, I believe the developer has to disable the Home, Search, Menu and Back buttons while the lock screen is visible, and implement the screen off/on logic.
Now, I don't understand what the difference between the two approaches is. The StackOverflow community seems to stress the home screen replacement technique more. I am very new to Android development, so I might be missing some aspect of it. Please suggest which approach I should use and why (also, which is easier?).
Thanks so much!

I would use the first method, but only for usability reasons: it gives the user a choice to easily revert to the original home screen/lock screen if they choose not to make the new one the default yet.
I'm afraid both strategies you described are quite difficult (depending on the API level range you want to support). The difficulty is not in the difference between them; it is in overriding the buttons, since Google keeps making that harder by closing security loopholes in newer API levels.
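Whichever route you take, the core pieces are the same: a full-screen activity shown over the keyguard, key events swallowed while it is visible, and a dynamically registered receiver for the screen off/on broadcasts. A minimal sketch follows; the class and layout names are made up, and note that the Home key cannot be reliably intercepted this way on newer API levels, which is exactly the difficulty mentioned above.

    import android.app.Activity;
    import android.content.BroadcastReceiver;
    import android.content.Context;
    import android.content.Intent;
    import android.content.IntentFilter;
    import android.os.Bundle;
    import android.view.KeyEvent;
    import android.view.WindowManager;

    public class LockScreenActivity extends Activity {

        // Screen off/on logic: these broadcasts can only be caught by a receiver
        // registered in code, not in the manifest.
        private final BroadcastReceiver screenReceiver = new BroadcastReceiver() {
            @Override
            public void onReceive(Context context, Intent intent) {
                if (Intent.ACTION_SCREEN_OFF.equals(intent.getAction())) {
                    // e.g. reset the unlock state so the lock UI shows again on wake-up
                }
            }
        };

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            // Show on top of the stock keyguard.
            getWindow().addFlags(WindowManager.LayoutParams.FLAG_SHOW_WHEN_LOCKED
                    | WindowManager.LayoutParams.FLAG_DISMISS_KEYGUARD);
            setContentView(R.layout.lock_screen); // hypothetical layout

            IntentFilter filter = new IntentFilter(Intent.ACTION_SCREEN_OFF);
            filter.addAction(Intent.ACTION_SCREEN_ON);
            registerReceiver(screenReceiver, filter);
        }

        @Override
        public void onBackPressed() {
            // Swallow Back while the lock screen is visible.
        }

        @Override
        public boolean onKeyDown(int keyCode, KeyEvent event) {
            // Swallow Menu and Search; Home cannot be intercepted like this on recent versions.
            if (keyCode == KeyEvent.KEYCODE_MENU || keyCode == KeyEvent.KEYCODE_SEARCH) {
                return true;
            }
            return super.onKeyDown(keyCode, event);
        }

        @Override
        protected void onDestroy() {
            unregisterReceiver(screenReceiver);
            super.onDestroy();
        }
    }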
PS: Please note that Jelly Bean has new Daydream functionality. If customizing the lock screen is all you need, that may be the way to go, since Jelly Bean is much more secure in that respect and otherwise more difficult to work with than the previous API levels.
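For reference, a Daydream is just a service that extends DreamService (available from API 17); a minimal sketch, with a made-up layout name:

    import android.service.dreams.DreamService;

    public class ClockDream extends DreamService {
        @Override
        public void onAttachedToWindow() {
            super.onAttachedToWindow();
            setInteractive(false);          // exit the dream on any user interaction
            setFullscreen(true);            // hide the status bar
            setContentView(R.layout.dream); // hypothetical layout
        }
    }

It also has to be declared in the manifest as a service protected by the android.permission.BIND_DREAM_SERVICE permission, with an intent filter for android.service.dreams.DreamService; the user then picks it under Settings > Display > Daydream.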
Also, consider using the HTC screen lock API for HTC devices; that way your solution won't be too hacky, at least on their newer devices. And perhaps do a version for rooted devices as well, since that should be easy too, for users who have already obtained root on their device. Don't discount the rooted market: users with root access spend a disproportionate amount of money on applications in Google Play. That much is obvious if you just look at the rough download numbers Google Play shows for paid applications that are marked as being for root only.

Related

Optimal camera focus mode android java

I have an Android application that uses the phone camera.
When the camera is first opened, I want to try both "Autofocus" mode and "Macro" mode and choose, in code, whichever gives the best focus.
I would like to ask two things:
Is there an internal parameter that gives a focus score?
Is there a known algorithm that gives a focus score? (It should not be complex, because I run it in real time on 1080p video.)
I know a bunch of links is not usually helpful, but I don't have time to go through all the pages. I figure something is better than nothing :)
This is the link for the android.hardware.camera2.params package summary. I wasn't able to find anything like what you are looking for, but that's a good place to start.
Another person had a similar question on the Android Enthusiasts SE site: Can I manually focus the camera on my Android phone?
And last, but not least: There seems to be quite a bit of relevant info at the XDA Developers forum.
Good Luck! I'm a photographer myself, so this seems like an interesting project.
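In case it helps as a starting point: to make the "pick the best focus in code" idea concrete, here is a rough sketch based on the old android.hardware.Camera API. It sets a focus mode, runs autofocus, grabs one preview frame and scores its sharpness as the sum of horizontal luma gradients (more edges usually means better focus). The mode handling and the score function are illustrative assumptions, not a known-optimal algorithm.

    import android.hardware.Camera;

    public final class FocusScoring {

        // Score an NV21 preview frame: sum of absolute differences between neighbouring
        // luma pixels. One pass over the Y plane; higher score means a sharper image.
        public static long focusScore(byte[] nv21, int width, int height) {
            long score = 0;
            for (int y = 0; y < height; y++) {
                int row = y * width;
                for (int x = 1; x < width; x++) {
                    score += Math.abs((nv21[row + x] & 0xFF) - (nv21[row + x - 1] & 0xFF));
                }
            }
            return score;
        }

        // Switch to the given focus mode (e.g. FOCUS_MODE_AUTO or FOCUS_MODE_MACRO),
        // autofocus once, then score a single preview frame.
        public static void scoreFocusMode(Camera camera, String mode, final long[] resultHolder) {
            Camera.Parameters params = camera.getParameters();
            params.setFocusMode(mode);
            camera.setParameters(params);
            camera.autoFocus(new Camera.AutoFocusCallback() {
                @Override
                public void onAutoFocus(boolean success, Camera cam) {
                    cam.setOneShotPreviewCallback(new Camera.PreviewCallback() {
                        @Override
                        public void onPreviewFrame(byte[] data, Camera c) {
                            Camera.Size size = c.getParameters().getPreviewSize();
                            resultHolder[0] = focusScore(data, size.width, size.height);
                        }
                    });
                }
            });
        }
    }

To keep this real-time at 1080p, run the score only on occasional frames (or on a downscaled copy) and compare the two modes once each has settled.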

How to detect volume buttons when the screen is off in an energy-saving way

I would like to make an application that handles the volume buttons when the screen is off. The goal is to turn the front LED on or off.
I know there are many topics here that talk about this, but the recommended solutions (like PARTIAL_WAKE_LOCK) seem to be energy intensive and drain the battery very quickly!
What I want is a solution that is as energy efficient as possible. Is this possible? Maybe some kind of hooking?
Please note that solutions based on scheduled tasks are not an option for this project, because I want to detect the keys in real time (or close to it)!
Take a look at this question, if you haven't already.
Just to make one thing clear: if something is not documented in the Android API docs, then any hack or workaround you find won't be reliable, because Google may decide to change things in future releases. For example, there is nothing documented about creating shortcuts after an app is installed, but since the Android source code is available, developers read that code to see how the Play Store creates shortcuts. It's still undocumented, though, so Google may change it in the future.
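For comparison, the documented but power-hungry baseline that the question rules out looks roughly like this: a service holding a PARTIAL_WAKE_LOCK so that its key-handling code keeps running while the screen is off. Keeping the CPU awake is exactly what drains the battery; the service and tag names below are made up.

    import android.app.Service;
    import android.content.Intent;
    import android.os.IBinder;
    import android.os.PowerManager;

    public class VolumeWatchService extends Service {
        private PowerManager.WakeLock wakeLock;

        @Override
        public void onCreate() {
            super.onCreate();
            PowerManager pm = (PowerManager) getSystemService(POWER_SERVICE);
            wakeLock = pm.newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, "example:volumeWatch");
            wakeLock.acquire(); // CPU stays on until release() -- constant battery drain
        }

        @Override
        public void onDestroy() {
            if (wakeLock != null && wakeLock.isHeld()) {
                wakeLock.release();
            }
            super.onDestroy();
        }

        @Override
        public IBinder onBind(Intent intent) {
            return null;
        }
    }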

Java ME on Siemens CX70

I have a very old Siemens CX70 in working condition and just don't want to throw it out. My idea is to use its processing power and peripherals (GSM module, USB, camera and screen) to build some simple applications for home use (a multichannel thermometer, a timer, or a cheap security system, for example).
I know I should use Java ME and an IDE (I love NetBeans, for example). Can you tell me what else I need to start developing? I know Java well; I just need to set up an environment for developing, debugging and deploying. Documentation for the mobile libraries would be very helpful too.
Thanks.
There are so many online tutorials about this topic that the only right thing to do is to refer you to google.com
Search for "getting started with j2me".
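Just to give a feel for what those tutorials cover, a bare-bones MIDlet (the class every Java ME application starts from) looks like this; you build it against the CLDC/MIDP libraries that ship with a toolkit such as the Sun Wireless Toolkit or the NetBeans Mobility Pack.

    import javax.microedition.lcdui.Display;
    import javax.microedition.lcdui.Form;
    import javax.microedition.midlet.MIDlet;

    // Minimal "hello world" MIDlet: the lifecycle is startApp/pauseApp/destroyApp.
    public class HelloMidlet extends MIDlet {

        protected void startApp() {
            Form form = new Form("Hello CX70");
            form.append("It works!");
            Display.getDisplay(this).setCurrent(form);
        }

        protected void pauseApp() {
            // Called when the phone pauses the app (e.g. an incoming call).
        }

        protected void destroyApp(boolean unconditional) {
            // Clean up before the app exits.
        }
    }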
However, there's something else you should know upfront before getting too excited.
The security model in Java ME will prevent you from doing much useful stuff in relation to some of the things you mention.
Every time you try to access certain things on the phone, such as the camera, sending an SMS, or reading/writing a file on the SD card, the phone will show a popup: "This app is trying to access the camera. Allow this?" The app will only continue after a manual click on Yes.
As you can imagine, this of course renders a lot of ideas useless.
In order to prevent these popups, you can sign your app with a certificate bought from Thawte or Verisign. But as that will cost you about $300 a year, it's not the route most spare-time hobby developers choose.
Personally, I found another way, but it requires you to use a phone from Sony Ericsson.
The old Sony Ericsson phones can be patched to remove the Java security checks. After doing this on one of my old phones, I've been having fun making apps like the ones you mention. For example, an app that keeps an eye on my home when we're out by taking a picture every second; if it detects a difference in the picture, it sends me an MMS with the picture. :-)
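For the curious, the snapshot part of an app like that goes through the Mobile Media API (JSR 135). A rough sketch, assuming the phone exposes video capture; on an unsigned MIDlet, a call like getSnapshot is exactly what triggers the permission prompt described above.

    import javax.microedition.lcdui.Item;
    import javax.microedition.media.Manager;
    import javax.microedition.media.MediaException;
    import javax.microedition.media.Player;
    import javax.microedition.media.control.VideoControl;

    public final class CameraSnapshot {

        // Grab one frame from the camera; some devices use "capture://image" instead.
        public static byte[] takeSnapshot() throws java.io.IOException, MediaException {
            Player player = Manager.createPlayer("capture://video");
            player.realize();
            VideoControl video = (VideoControl) player.getControl("VideoControl");
            // Viewfinder as an LCDUI Item; append it to a Form if you want a live preview.
            Item viewfinder = (Item) video.initDisplayMode(VideoControl.USE_GUI_PRIMITIVE, null);
            player.start();
            byte[] image = video.getSnapshot(null); // null = device default encoding
            player.close();
            return image;
        }
    }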
I searched for a long time for patching options for other brands, but I just can't find anything useful. Nokia phones should supposedly also be patchable, but again I haven't found anything usable about it.
So in short: if you'd like to make some spare-time hobby apps on a phone like that, you should either find a Sony Ericsson phone and patch it, or go dig up an old used Android device.
Good luck.

How to detect low-spec mobile devices meaningfully?

Right now I only know about ScientiaMobile's WURFL and a few others. Those libraries or databases tell you quite a lot about a device, but none of them can clearly indicate that you shouldn't use CSS transitions or other kinds of animation; even if the device supports a feature, whether it will run smoothly is a completely different story, and that is my major concern when building mobile web apps.
Is it technically possible to 'classify' devices in this direction using the WURFL database? And which device capabilities should I use to 'group' devices as 'fast' in terms of graphics power?
In the end, I just need a rating of the device from 1-5 in order to decide which graphics operations I can use.
Well, any thoughts are welcome. It's turning out to be a real head-scratcher, and researching on the internet hasn't brought up anything useful except lots of data about device capabilities.
Update 1: I just got a response from ScientiaMobile: "We have been playing around with the idea of some form of Javascript performance index (possibly based on one of the existing benchmarks) that could give some indication of that, but we are still not there yet. The problem is complex."
Update 2: The biggest bottlenecks we discovered in mobile web apps:
animation power
PNG transparency
text and box shadows
image resizing
For us it is really enough to figure out that we need to disable those features, as they can bring any application to its knees. Possibly there are also other approaches.
Thank you.
Unfortunately, I do not believe this is possible today for the general case.
If you are only interested in a limited number of devices, of course you could test each and target those specifically via user agent or JavaScript-based detection.
Within the context of a thick app (e.g., you "wrap" your web site with something like Apache Cordova), it would be possible to provide JavaScript access to some of the device internals (e.g., amount of total memory, amount of free memory, processor speed), but otherwise, this information is not available from the browser. As you've hinted at, having access to this type of device information may still be insufficient (e.g., seemingly "high spec" devices that perform poorly).
JavaScript feature detection libraries like Modernizr can answer whether something such as box-shadow or text-shadow is supported by the user's current browser, but they do not provide information about how well or how quickly supported features will be rendered.
Likewise, the datasets from Browserscope and related project ringmark (somewhat of a JavaScript analog to WURFL) answer these browser support questions on a per-browser-version basis through crowdsourced benchmarking tests (e.g., does the iPhone support CSS3 transitions?), and for the general case, this is what would be necessary. You would need to run a benchmark test for the various features in question and assess real-time performance. However, even this has its limitations:
Because the necessary conditions for speed (available memory, processor, battery, network connection, etc.) are constantly in flux as mobile users move around, receive calls, change hardware settings, launch background applications, etc., the result of the benchmark is likely to be unreliable/unrepeatable.
Benchmarking takes time and will invariably add a (hopefully unnoticeable) delay.
Depending on the feature, benchmarking may not be practical.
Features may behave differently in combination (e.g., animating transparent PNGs with shadows) or at scale (e.g., every image on the page is animating) than individually in the benchmarking test.
If you rely on benchmarking datasets instead of performing your own real-time benchmarking, the sample size, scope, and age of the dataset greatly limits its usefulness.
A final point that I haven't even addressed is the fact that performance is rather subjective. Say it were somehow possible to assess/predict the speed of an animation. If the animation will run at 15 fps, should the user see that animation? What about 5 fps? Who gets to be the ultimate arbiter that decides the threshold for whether or not a given feature performs well enough?
The best advice I can offer today is to reduce (or eliminate) your reliance on the troublesome features for the time being. It may seem terrible to suggest going back to "the old way" of using images with precomposed shadows or making background gradients without CSS3, but at the end of the day the user experience should take precedence over using the shiny new technology. Many mobile devices are simply not there yet, and neither are the detection methods. If you must use these features, perhaps consider a simple but unobtrusive way for users to opt in or out, like Gmail's "standard" vs. "basic HTML" view options, or consider automatically opting in known good browsers.
I can't add much more than 'user113215' already said. Also, this is not an answer to the actual question but rather to the actual problem:
I experimented with a few users: we showed a simple welcome popup menu asking the user to turn off special effects such as shadows and animations. Most of the test users appreciated the choice and clearly understood the purpose of such a menu. We now integrate this in a more advanced form, in conjunction with a hidden benchmark that pre-selects GFX effects automatically.
Thank you.
In short, no. How we perceive the smoothness of animations and graphics depends solely on the FPS (frames per second). In this question we are talking about web apps, which use HTML and JS on the client side, and since the client side provides no interface for HTML or JS to read the FPS directly, it is impossible to tell whether the client is smooth or not.
However, if you really want a benchmark of web-app performance, you can use stats.js to monitor the frame rate and build a benchmark that suggests to your client whether to activate or disable certain effects at runtime. This method even works with most effects of JavaScript libraries like jQuery. But it will take some time to gather enough data before applying changes, and the stats may vary with the state of the device, such as memory usage, concurrent applications, etc.

how does android understand which home screen is being viewed by the user?

I want to understand how the Android OS figures out which home screen the user is currently viewing, and how it renders the appropriate icons and widgets on that screen based on the user's left or right swipe on the device's touch screen.
The OS must save the state of each screen, and IDs or something similar for the objects placed on it, in order to restore that state each time the screen becomes visible.
From my research I understand that the Android OS treats all of the 7-8 home screens on a device as one single host.
My question might seem vague, but the reason I am asking is that it seems reasonable for app widgets on Android devices to update not only when the phone is awake but also only when the app widget itself is visible. I know that Google has declined this enhancement request from many others, but I don't think that is good enough. Link here.
That is why I am trying to take a shot at understanding and implementing it for my app with whatever Android knows about the state of the home screens.
Any help or insight is much appreciated. Also, experts out there, let me know whether you think this can even be implemented for one-off apps at all.
Well, as the link you posted clearly states, there's no way to know.
Also, if you consider the fact that "Home" is just an application like all the others, it makes even less sense to have a unified API for that. A lot of people use Launcher Pro or similar applications, which would probably not implement it.
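For completeness, the update mechanism widgets do get is purely time-based: an AppWidgetProvider receives onUpdate on the interval declared in its widget metadata (updatePeriodMillis), or on an AlarmManager schedule it sets up itself, regardless of which home screen page, if any, is currently visible. A minimal sketch, with made-up layout and view IDs:

    import android.appwidget.AppWidgetManager;
    import android.appwidget.AppWidgetProvider;
    import android.content.Context;
    import android.widget.RemoteViews;

    public class ExampleWidgetProvider extends AppWidgetProvider {

        // Called on the schedule from updatePeriodMillis; the provider has no way to
        // know whether the widget is actually on the visible home screen page.
        @Override
        public void onUpdate(Context context, AppWidgetManager manager, int[] appWidgetIds) {
            for (int id : appWidgetIds) {
                RemoteViews views = new RemoteViews(context.getPackageName(), R.layout.widget); // hypothetical layout
                views.setTextViewText(R.id.widget_text, "Updated");                             // hypothetical view id
                manager.updateAppWidget(id, views);
            }
        }
    }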
