Alternate use cases for DHIS2 - java

Good day all.
Has anybody used DHIS2 for alternative use cases, such as research into invasive alien plants or coastal soil erosion?
Could somebody also confirm whether DHIS2 supports image capture in the field through the Android app, as I would like to use this platform for the two cases mentioned above?
Any assistance with these questions is much appreciated.

DHIS2 can be adapted to a vast range of use cases. The main thing that will guide you in deciding whether DHIS2 is the right type of software is the location or geographical context that your data will be anchored to. In DHIS2, this is referred to as the "Organisation Unit". In the health domain, these are the health facilities, and the organisation units can be structured hierarchically, from the lowest-level health facility up to, say, a whole district containing multiple facilities. In your case, the organisation units could be the different sites or labs from which your research is conducted.
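To make the "organisation unit" idea a bit more concrete, here is a minimal sketch (not part of the original answer) of pulling the organisation unit hierarchy from the DHIS2 Web API with plain Java. The server URL and the admin:district credentials are placeholders for your own instance; the /api/organisationUnits endpoint and the fields/paging parameters are standard parts of the Web API.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class OrgUnitFetch {

    public static void main(String[] args) throws Exception {
        // Placeholder server and credentials -- replace with your own DHIS2 instance.
        String server = "https://your-dhis2-server";
        String credentials = "admin:district";

        // The Web API exposes the organisation unit hierarchy as JSON.
        URL url = new URL(server + "/api/organisationUnits.json?fields=id,name,level&paging=false");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Authorization", "Basic "
                + Base64.getEncoder().encodeToString(credentials.getBytes(StandardCharsets.UTF_8)));

        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            StringBuilder body = new StringBuilder();
            String line;
            while ((line = reader.readLine()) != null) {
                body.append(line);
            }
            // Print the raw JSON; a real client would parse it and build the hierarchy.
            System.out.println(body);
        }
    }
}
```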
Yes, the DHIS2 Capture Android app can capture images and upload them to your server on recent versions of the DHIS2 software. The DHIS2 Capture documentation describes what is possible with the app.

Related

Android iBeacon indoor location experiences

For a few days I've been trying to build an app for indoor location of a user within a room.
I'm using the Estimote SDK. I had bad results even though I tried many alternatives.
I used trilateration, quadrilateration, and some algorithms based on mean and variance (written by myself) to try to reduce the noise...
I had unsatisfactory results (very wide fluctuations over very short time spans) and I'm wondering if anyone has had good experience with this kind of application.
I know that the results are good on iOS, and I'm wondering whether it is possible to replicate them on Android, and whether someone has done so... and could perhaps help me.
Thanks,
Federico.
My personal opinion, from the experience I am building up, is that BLE beacons are not designed to easily provide location. They are only designed to provide "presence", i.e. "you are near a point". And by near, that means within a few metres, so that the signal is strong and you can be sure of being nearby. (Although I'm not convinced they are reliable even for that.)
There are several companies that are doing trilateration or similar to get better accuracy out of multiple beacons, such as:
www.pointrlabs.com
indoo.rs
Estimote Indoor Location SDK
ensolocate.com
It seems from their publicity that they have this working, so it must be possible, but I suppose there is a lot of trial and error in getting practical algorithms, and a signal-strength survey of the venue seems to be needed. I have not been able to find any unbiased/independent review of these systems.
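As a rough illustration of what these trilateration approaches do under the hood, here is a small self-contained sketch (my own, not taken from any of those SDKs) of plain 2D trilateration from three beacons with known positions and estimated distances. In practice the noisy distance estimates are exactly where this falls apart, which is why those vendors add filtering and venue surveys on top.

```java
public class Trilateration {

    /**
     * Estimates a 2D position from three beacons with known coordinates and
     * (noisy) distance estimates, by intersecting the three range circles.
     * Returns {x, y}, or null if the beacons are (nearly) collinear.
     */
    public static double[] locate(double[] p1, double d1,
                                  double[] p2, double d2,
                                  double[] p3, double d3) {
        // Subtracting the circle equations pairwise gives a 2x2 linear system.
        double a1 = 2 * (p2[0] - p1[0]);
        double b1 = 2 * (p2[1] - p1[1]);
        double c1 = d1 * d1 - d2 * d2
                  + p2[0] * p2[0] - p1[0] * p1[0]
                  + p2[1] * p2[1] - p1[1] * p1[1];
        double a2 = 2 * (p3[0] - p1[0]);
        double b2 = 2 * (p3[1] - p1[1]);
        double c2 = d1 * d1 - d3 * d3
                  + p3[0] * p3[0] - p1[0] * p1[0]
                  + p3[1] * p3[1] - p1[1] * p1[1];

        double det = a1 * b2 - a2 * b1;
        if (Math.abs(det) < 1e-9) {
            return null; // beacons are collinear, no unique solution
        }
        return new double[] {
            (c1 * b2 - c2 * b1) / det,
            (a1 * c2 - a2 * c1) / det
        };
    }

    public static void main(String[] args) {
        // Example room: beacons at three corners, distances in metres.
        double[] pos = locate(new double[] {0, 0}, 3.6,
                              new double[] {5, 0}, 3.2,
                              new double[] {0, 5}, 4.1);
        System.out.printf("Estimated position: (%.2f, %.2f)%n", pos[0], pos[1]);
    }
}
```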
The fact is that iBeacons (I tried on iOS with the earliest Estimote release) have an unreliable RSSI. You should not use it for detecting distance in metres, but rather in "zones" like near, far, etc.
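To make the "zones" idea concrete, here is a small sketch of my own (not Estimote's code) that smooths the raw RSSI with a moving average and then maps the smoothed value to coarse immediate/near/far buckets instead of metres. The window size and dBm thresholds are guesses that you would calibrate for your beacons and venue.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class BeaconZoneEstimator {

    public enum Zone { IMMEDIATE, NEAR, FAR, UNKNOWN }

    private final int windowSize;
    private final Deque<Integer> window = new ArrayDeque<>();

    public BeaconZoneEstimator(int windowSize) {
        this.windowSize = windowSize;
    }

    /** Feeds one raw RSSI reading (dBm) and returns the current smoothed value. */
    public double addReading(int rssi) {
        window.addLast(rssi);
        if (window.size() > windowSize) {
            window.removeFirst();
        }
        double sum = 0;
        for (int value : window) {
            sum += value;
        }
        return sum / window.size();
    }

    /**
     * Maps a smoothed RSSI to a coarse zone instead of a distance in metres.
     * The thresholds below are placeholders; calibrate them per beacon and venue.
     */
    public Zone classify(double smoothedRssi) {
        if (smoothedRssi > -55) return Zone.IMMEDIATE; // roughly within arm's reach
        if (smoothedRssi > -75) return Zone.NEAR;      // roughly the same room
        if (smoothedRssi > -95) return Zone.FAR;       // detectable but unreliable
        return Zone.UNKNOWN;
    }

    public static void main(String[] args) {
        BeaconZoneEstimator estimator = new BeaconZoneEstimator(10);
        int[] samples = { -68, -71, -66, -74, -70, -69, -73, -67 }; // fake readings
        double smoothed = 0;
        for (int rssi : samples) {
            smoothed = estimator.addReading(rssi);
        }
        System.out.println("Smoothed RSSI: " + smoothed + " -> " + estimator.classify(smoothed));
    }
}
```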

Is it possible to get access to installed apps shared data in android?

Mobile health applications have rich data about their users' physical condition and normally store it in their cloud databases.
I wonder if there is an app that keeps the data it collects from its users in a local database and shares it with other apps installed on the user's phone (something like the Android Interface Definition Language, for example)?
Using other health tracker apps' data
Step #0: Hire qualified legal counsel and discuss your plans regarding taking health information from other apps. Please note that this subject area (personal health data) usually has a lot of regulation around it, and you may be subject to civil or criminal penalties if you are not careful.
Step #1: Come up with a list of "other health tracker apps".
Step #2: Contact the developers of those apps and see if they have any sort of API for allowing third parties access to those apps' data. Due to the aforementioned legal issues, I expect that most will say no, but there may be a few who say yes.
Step #3: Create your app, using those health apps' APIs. The details for doing that would depend upon those specific apps and their specific APIs.
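If one of those developers does agree to share data, the usual Android mechanisms are AIDL (as you mention) or a ContentProvider. Purely as an illustration of the latter, the sketch below queries a hypothetical provider; the content:// authority, the column names and any permission it requires are made up here and would come from the other app's documentation.

```java
import android.content.ContentResolver;
import android.content.Context;
import android.database.Cursor;
import android.net.Uri;

import java.util.ArrayList;
import java.util.List;

public class HealthDataReader {

    // Hypothetical authority and columns -- the real values would come from the
    // other app's published API, and will usually require a permission it defines.
    private static final Uri STEPS_URI =
            Uri.parse("content://com.example.healthtracker.provider/steps");

    public static List<String> readDailySteps(Context context) {
        List<String> rows = new ArrayList<>();
        ContentResolver resolver = context.getContentResolver();
        try (Cursor cursor = resolver.query(
                STEPS_URI,
                new String[] { "date", "step_count" }, // projection (hypothetical columns)
                null, null,                            // no selection / selection args
                "date DESC")) {                        // newest first
            if (cursor != null) {
                while (cursor.moveToNext()) {
                    rows.add(cursor.getString(0) + ": " + cursor.getInt(1) + " steps");
                }
            }
        }
        return rows;
    }
}
```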

Java ME on Siemens CX70

I have a very old Siemens CX70 in working condition and just don't want to throw it out. My idea is to use its computing power and peripherals (GSM module, USB, camera and screen) to build some simple applications for home use (a multichannel thermometer, a timer and a cheap security system, for example).
I know I should use Java ME and an IDE (I love NetBeans, for example). Can you tell me what more I need to start developing? I know Java well; I just need to set up an environment for developing, debugging and deploying. Mobile library documentation would be very helpful too.
Thanks.
There are so many online tutorials about this topic that the only right thing to do is to refer you to google.com.
Search for "getting started with j2me".
However, there's something else you should know upfront before getting too excited.
The security model in JavaME will prevent you from doing much useful stuff, in relation to some of the things you mention.
Every time you try to access certain things in the phone, e.g. the camera, sending an SMS, or reading/writing a file on the SD card, the phone will show a popup: "This app is trying to access the camera. Allow this?" And the app will only continue after a manual click on Yes.
As you can imagine, this of course renders a lot of ideas useless.
In order to prevent these popups, you can sign your app with a certificate you buy from Thawte or Verisign. But as that will cost you around $300 a year, it's not the route most spare-time hobby developers choose.
Personally, I found another way, but it requires you to use a phone from Sony Ericsson, because the old Sony Ericsson phones can be patched to remove the Java security restrictions. After doing this on one of my old phones, I've been having fun making apps like the ones you mention. For example, an app that keeps an eye on my home when we're out by taking a picture every second. If it detects a difference in the picture, it sends me an MMS with the picture. :-)
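For reference, the camera part of an app like that is plain Mobile Media API (JSR 135) code. Below is a rough sketch of my own, untested on the CX70 specifically; on an unsigned MIDlet, the getSnapshot call is exactly where the security prompt described above pops up.

```java
import javax.microedition.lcdui.Display;
import javax.microedition.lcdui.Form;
import javax.microedition.lcdui.Item;
import javax.microedition.media.Manager;
import javax.microedition.media.Player;
import javax.microedition.media.control.VideoControl;
import javax.microedition.midlet.MIDlet;

public class SnapshotMIDlet extends MIDlet {

    protected void startApp() {
        Form form = new Form("Camera test");
        Display.getDisplay(this).setCurrent(form);
        try {
            // Open the camera through the Mobile Media API (JSR 135).
            Player player = Manager.createPlayer("capture://video");
            player.realize();

            VideoControl video = (VideoControl) player.getControl("VideoControl");
            Item viewfinder = (Item) video.initDisplayMode(VideoControl.USE_GUI_PRIMITIVE, null);
            form.append(viewfinder);
            player.start();

            // On an unsigned MIDlet, this is where the phone shows the
            // "Allow this application to use the camera?" prompt.
            byte[] jpeg = video.getSnapshot(null);
            form.append("Captured " + jpeg.length + " bytes");

            player.close();
        } catch (Exception e) {
            form.append("Capture failed: " + e);
        }
    }

    protected void pauseApp() { }

    protected void destroyApp(boolean unconditional) { }
}
```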
I have searched a long time for patching options for other brands, but haven't found anything useful. Nokia phones are supposedly also patchable, but again I couldn't find anything concrete.
So in short: if you'd like to make some spare-time hobby apps on a phone like that, you should either find a Sony Ericsson phone and patch it, or go dig up an old used Android device.
Good luck.

How to detect low-spec mobile devices meaningfully?

Right now I only know about ScientiaMobile's WURFL and a few others. Those libraries and databases tell you quite a lot about a device, but none of them can clearly indicate whether you should avoid CSS transitions or other sorts of animation: even if the device supports a feature, it's a completely different story whether that feature will run smoothly, and this is my major concern when building mobile web apps.
Is it technically possible to 'classify' devices in this direction using the WURFL database? And which device capabilities should I use to group devices as 'fast' in terms of graphics power?
Finally, I just need a rating of the device from 1 to 5 in order to decide which gfx operations I can use.
Well, any thoughts are welcome. This is turning out to be a real head-scratcher, and research on the internet didn't bring up anything useful except lots of data about device capabilities.
Update-1 : I just got a response from ScientiaMobile : "we have been playing around with the idea of some form of Javascript performance index (possibly based on one of the existing benchmarks) that could give some indication of that, but we are still not there yet. The problem is complex."
Update-2 : The biggest bottlenecks we discovered in mobile web apps
animation power
PNG transparency
text and box shadows
image resizing
For us it's really enough to know that we need to disable those features, as they can bring any application to its knees. Possibly there are other approaches as well.
Thank you.
Unfortunately, I do not believe this is possible today for the general case.
If you are only interested in a limited number of devices, of course you could test each and target those specifically via user agent or JavaScript-based detection.
Within the context of a thick app (e.g., you "wrap" your web site with something like Apache Cordova), it would be possible to provide JavaScript access to some of the device internals (e.g., amount of total memory, amount of free memory, processor speed), but otherwise, this information is not available from the browser. As you've hinted at, having access to this type of device information may still be insufficient (e.g., seemingly "high spec" devices that perform poorly).
JavaScript feature detection libraries like Modernizr can answer whether something such as box-shadow or text-shadow is supported by the user's current browser, but they do not provide information about how well or how quickly supported features will be rendered.
Likewise, the datasets from Browserscope and related project ringmark (somewhat of a JavaScript analog to WURFL) answer these browser support questions on a per-browser-version basis through crowdsourced benchmarking tests (e.g., does the iPhone support CSS3 transitions?), and for the general case, this is what would be necessary. You would need to run a benchmark test for the various features in question and assess real-time performance. However, even this has its limitations:
Because the necessary conditions for speed (available memory, processor, battery, network connection, etc.) are constantly in flux as mobile users move around, receive calls, change hardware settings, launch background applications, etc., the result of the benchmark is likely to be unreliable/unrepeatable.
Benchmarking takes time and will invariably add a (hopefully unnoticeable) delay.
Depending on the feature, benchmarking may not be practical.
Features may behave differently in combination (e.g., animating transparent PNGs with shadows) or at scale (e.g., every image on the page is animating) than individually in the benchmarking test.
If you rely on benchmarking datasets instead of performing your own real-time benchmarking, the sample size, scope, and age of the dataset greatly limits its usefulness.
A final point I haven't even addressed is the fact that performance is rather subjective. Say it were somehow possible to assess/predict the speed of an animation. If the animation will run at 15 fps, should the user see that animation? What about 5 fps? Who gets to be the ultimate arbiter that decides the threshold for whether or not a given feature performs well enough?
The best advice I can offer today is to reduce (or eliminate) your reliance on the troublesome features for the time being. It may seem terrible to suggest going back to "the old way" of using images with precomposed shadows or making background gradients without CSS3, but at the end of the day the user experience should take precedence over using the shiny new technology. Many mobile devices are simply not there yet, and neither are the detection methods. If you must use these features, perhaps consider a simple but unobtrusive way for users to opt in or out, like Gmail's "standard" vs. "basic HTML" view options, or consider automatically doing the opt-in for known good browsers.
I can't add much more to what 'user113215' already said. Also, this is not an answer to the actual question but rather to the actual problem:
I experimented with a few users using a simple welcome popup menu that asks the user to turn off special effects such as shadows and animations. Most of the test users appreciated having the choice and clearly understood the purpose of such a menu. We now integrate a more advanced version of this, in conjunction with a hidden benchmark that pre-selects the GFX effects automatically.
Thank you.
In short, no. How we define the smoothness of animations and graphics depends solely on the FPS (frames per second). In this question we are talking about web apps, which make use of HTML and JS on the client side. Since the client side provides no interface for a program to read the FPS via HTML or JS, it is impossible to tell whether the client is running smoothly or not.
However, if you really want a benchmark of a web app's performance, you can make use of stats.js to monitor the stats over time and use that as a benchmark to decide whether to suggest that your client activates or disables certain effects at runtime. This method even works with most effects of JavaScript libraries like jQuery. But it will take some time to gather enough data before applying changes, and the stats may vary with the state of the device, such as memory usage, concurrent applications, etc.

Using the Google Maps API in a 3D Java scenario

I am writing my dissertation this year in software engineering. I have a cool project in mind but wanted to ask if this was even possible first before I mention it to my tutor. If it is possible then it would be good if someone could point me in the right direction in achieving my goals.
My idea is this. I want to make a zombie survival game, generic and boring I know, but I want players to be able to run around in their own hometowns as if a zombie outbreak had actually happened. The easiest way to do this would be to use the Google Maps API (I think), so maps are automatically created depending on a user's home location.
An added feature would be to implement transport systems using local train stations, so users can actually move to their friends' areas in real time on a multiplayer platform.
So from what information I know already I would need these tutorials/resources.
The ability to import real time google maps into a Java environment
The ability to see where roads occur and which areas can be walked on, maybe using the colour green as a basis for fields etc. This is to ensure users can only walk within defined areas
The ability to generate surroundings (such as houses and fences) next to roads and walkable areas rather than have the user running around on a flat 3D environment
I already have some experience creating 3D environments using the LWJGL library. Each of the points stated above depends on the previous bulleted point. Any feedback or constructive criticism is greatly appreciated. Even just a "What the hell are you even talking about?" would be helpful.
I'd be glad to see your replies.
It might be possible using the Google Earth API: https://developers.google.com/earth/
You would have to get a web view of some kind, basically integrating a JavaScript application into your Java application. I would be cognizant of the terms of service though.
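As a rough sketch of the "web view inside Java" idea: on the desktop you could host the JavaScript mapping page in a JavaFX WebView, something like the snippet below. The URL is a placeholder for whatever page contains your mapping code, and you would still need a bridge between the page's JavaScript and your game logic.

```java
import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.web.WebView;
import javafx.stage.Stage;

public class MapViewApp extends Application {

    @Override
    public void start(Stage stage) {
        WebView webView = new WebView();
        // Placeholder URL: point this at the page that hosts your JavaScript map code.
        webView.getEngine().load("https://example.com/your-map-page.html");

        stage.setScene(new Scene(webView, 1024, 768));
        stage.setTitle("Embedded map view");
        stage.show();
    }

    public static void main(String[] args) {
        launch(args);
    }
}
```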
