I am creating an image/graphics-intensive application on Android, so I have decided to keep the images on the server side and fetch them in batches as needed for each user. Apart from this, I would like to manage some minor user data on the backend for any future extension to the app or dynamic loading of some content.
For this I am looking for the easiest, but not overly rigid, back-end solution. After some research I have narrowed it down to the options below (in order of priority):
Amazon SDK for Android: It looks like this provides a lot of pre-built components, but I am not sure how flexible it is when doing some custom back-end coding/feature implementation.
Parse: Easy to understand and use, but not flexible when it comes to custom feature development.
Amazon EC2 Java backend: I would have to do all the server-side coding from scratch here, but this would give me complete independence in feature implementation. I would love to find some code samples related to user management, backend DB management, and Java RESTful web services.
Any suggestions or pointers you have on the above choices would be great.
Thanks in advance
I have been using Parse but I haven't explored the other 2. So, this may not be a comprehensive answer but I would try to give you some pointers based on my experience with Parse.
I have been into Android development for quite some time now, but I do not have any significant expertise (I would say very minimal) on the backend. Also, you mentioned you wish to build a graphics/image-intensive application; the application I use Parse for is more about user data with minimal images (though it requires an extensive relational database).
Parse makes it really simple to create the backend structure, and the client SDK is also very powerful. Their APIs are very straightforward and don't require you to worry about writing complex queries, caching them, and saving the data. Given my background as mentioned above, I would say there is no learning curve involved in getting started: you can simply start building your app right away!
Also, Parse uses AWS S3 on the backend along with MongoDB, so I believe computation on the server side should not be a problem. Server-side logic can be implemented using Parse Cloud Code (requires some JavaScript), but if you plan to write some complex algorithms, I am not sure how far that can be taken.
Parse's documentation for Android is quite good for getting through most of the development, and there is extensive documentation for iPhone development too.
As far as the cost structure goes, it allows 1 million free API requests per month, which is very much sufficient for quite a number of users. In your case, storage should be the bigger concern: Parse allows 1 GB free and charges about 20 cents per GB above that.
Hope this helps!
I am looking out for the easiest but not a very rigid back-end solution
Have you considered App Engine? Here's a tutorial on how to get App Engine working for you fast.
You can store up to 5 GB of blob storage for free, which should be more than enough for experimenting. If you go over, you can pay $0.13/GB/month extra for blob storage, which is more than reasonable.
I don't know what kind of app you are building, but I'll propose one approach.
Use https://imageshack.com/ for images.
Build your user-data application as a lightweight web service (REST + JSON)
and expose it on Heroku (https://www.heroku.com/) with your preferred language/platform.
It could be Java or Ruby.
Using ImageShack for images will save cloud space for you, and the service is quite fast.
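For a feel of how small such a REST + JSON service can be, here is a sketch using only the JDK's built-in `com.sun.net.httpserver` server. The `/user` endpoint and its field names are invented for illustration; a real Heroku deployment would typically sit behind a framework, but the shape of the service is the same.

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Hypothetical /user endpoint returning a JSON payload.
public class UserService {
    // Builds the (made-up) JSON body for a user id.
    static String userJson(String id) {
        return "{\"id\":\"" + id + "\",\"name\":\"demo\"}";
    }

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/user", (HttpExchange ex) -> {
            byte[] body = userJson("42").getBytes(StandardCharsets.UTF_8);
            ex.getResponseHeaders().add("Content-Type", "application/json");
            ex.sendResponseHeaders(200, body.length);
            try (OutputStream os = ex.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }
}
```

Requesting `http://localhost:8080/user` then returns the JSON body with a 200 status.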
After I finish developing an app using Google App Engine, how easy will it be to distribute it if I ever need to do so without App Engine? The only thing I've thought of is that GAE has a proprietary API for using the datastore. So, if I need to deliver my app as a .war file (for example) that would not be deployed on App Engine, all I would need to do is refactor any code that gets/stores data before building the .war, right?
I don't know what the standard way is to deliver a finished web app product - I've only ever used GAE, but I'm starting a project now for which the requirements for final deliverables are unsure at this time.
So I'm wondering, if I develop for GAE, how easy will it be to convert?
Also, is there anything I can do or consider while writing for GAE to optimize the project for whatever packaging options I may have in the end?
As long as your app does not have any elements that are dependent on Google App Engine, you should be able to deploy anywhere that can support a Tomcat or GlassFish server. Sometimes this requires that you manually install the server, so you should read up on that. There are lots of YouTube videos on this subject; just try to break your issue down into the smallest steps possible.
I also suggest using frameworks like Spring and Hibernate to help lessen the headaches. They take a while to understand but are worth it if you want to be programming for the rest of your life.
I disagree with Pbrain19.
The GAE datastore is quite different from SQL, and has its own interesting eventually consistent behavior for transactions. That means for anything that requires strong consistency or transactions, you're going to have to structure your data with appropriate ancestors. This is going to have a pretty big impact on your code.
You're also going to need to denormalize your data structures (compared to SQL) to minimize datastore costs and improve performance. There are also many queries you can do in SQL that you can't do in GAE, so you'd have to structure your app in ways that work around this.
Once you do any of this, you'll probably have a significant chunk of the app to rebuild.
You also wouldn't want to use Spring because it'll make your instance start up time pretty painful.
So unless it's a very simple hello world app, the refactoring will not be trivial - particularly once you begin using ancestors in any of your data modelling.
I recommend not trying to design your app to be portable if you're using the GAE datastore.
You'll have better luck making a portable app if you're using Cloud SQL.
There is a web application, journalism related, that uses MySQL databases and presents a web-based interface to users.
I want to build an iOS app that provides a mobile interface as well. The UI is pretty easy and I have experience with that.
The problem is with the database, which I have no experience with.
I will be learning about databases and will probably take the Coursera course on them. I am not asking you to teach me that; I just want to know which technologies I should invest my time in over the next couple of months.
My understanding so far is that the app should not talk to the database directly,
but rather there should be something on the server talking to the database on behalf of the app.
This is the question and the part I want to understand clearly, so correct me if I am wrong.
Will I have to write some sort of Unix program that runs on the server, talks to the DB, and then communicates back to the app? How? Using a web view? Using Unix sockets to talk to the app? SSH? Which one is cool with Apple?
My preference for writing something like that on the server would be: Python (have experience), Java (have experience), and maybe Ruby (no experience). I'd prefer to avoid scripting languages.
Are they OK? Which one is best suited? Also, does this middleman have to be on the same server that has the database, or can it be another machine on the internet? (I'd prefer the latter, so I can put it on my own VPS and not have to mess with the server machine.)
This is similar to another question from tonight, but you're coming at it from a different angle.
In general terms, an iOS application that needs to be able to run in offline mode will need to have its own database. This means creating Core Data models to store all of the data required by the application. Internally this is stored in a SQLite database.
If you want to make an application that's online-only, it's somewhat easier since you won't need to worry about the Core Data part and can instead focus on building your service API. If you're familiar with Python then your best bet is Django to provide that layer. You'll need to implement a number of endpoints that can receive requests, translate that into the appropriate database calls, then render the result in a machine readable format.
Scripting languages are what power most back-ends even for massive scale systems. In most cases the database will be the bottleneck and not the language used to interface with it. Even Twitter stuck with Ruby until they hit tens of millions of active users, so unless you're at that level, don't worry about it.
For most applications, using HTTP as your transport mechanism and JSON as your encoding method is the way to go. It's very simple to construct, easy to consume, and fairly easy to read. There are probably a number of ways you might go about reading and writing this, but that's another question.
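To make the "easy to construct" point concrete, here is a hand-rolled sketch (in Java, one of the languages the asker knows) that renders a flat record as a JSON object. The `title`/`author` fields are invented for illustration, and a real service would use a library such as Jackson or Gson rather than building strings by hand.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

// Minimal sketch of rendering a flat record (string values only) as JSON.
public class JsonSketch {
    static String toJson(Map<String, String> record) {
        return record.entrySet().stream()
                .map(e -> "\"" + escape(e.getKey()) + "\":\"" + escape(e.getValue()) + "\"")
                .collect(Collectors.joining(",", "{", "}"));
    }

    // Escape backslashes first, then quotes, so output stays valid JSON.
    static String escape(String s) {
        return s.replace("\\", "\\\\").replace("\"", "\\\"");
    }

    public static void main(String[] args) {
        Map<String, String> article = new LinkedHashMap<>();
        article.put("title", "Hello");
        article.put("author", "jane");
        System.out.println(toJson(article)); // {"title":"Hello","author":"jane"}
    }
}
```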
For small-scale applications where the number of users is measured in the hundreds then you can host the application and database on the same server. Even a modest VPS with 512MB of memory might do the job, though for heavier loads you might want to invest in a 1GB instance. It really depends on how often people are accessing your application and what the peak loads are like.
Hi DB experts out there,
what do you SQL experts recommend to replace a couple of MS Access databases with something more modern, like Java/Oracle or Java/MySQL?
The databases are small, no more than a few thousand records each, so there is no need for high performance on the DB side.
But all of the MS Access stuff has complex forms with colors (for information purposes), details, nested sub-forms and a lot of nested queries.
Since MS Access is hard to debug and lacks modern development tools like those in Eclipse, I am thinking about a redesign of the old stuff.
In other words, what is the best way to replace the forms in particular?
Is Java Swing a good library to rebuild all the form stuff?
Or should I stay with the old stuff?
It depends on how much time you want to spend on your new design and who is using MS Access.
As you said, your MS Access DB is very complex. If you want to replace it with MySQL/Oracle, it may take you a long time to redesign the presentation layer (the colors, details, and so on).
If you have time, you can design a totally new MVC-framework project to replace the old MS Access app, using all new technologies. And you can learn a lot.
Not really a DB question: the forms side of it is Access as an application language, not a database. Whatever you choose, you are looking at a good deal of work in Java if that's your application-language choice.
This is a serious question: can it look like crap? Whatever tool you use, you'll probably want some kind of form-generation support (just to move things along). Form-generation tools are all bad; it's a rule. But they're bad in different ways. Also, having said that, I've never used one for Swing, as my desktop-app forms were easy enough to build by hand. JFormDesigner looks feature-rich and has some good-looking forms to boot (but because of the rule, we know you'll hate something about it).
If you want to stay with the old stuff, I recall that you used to be able to use Access on the front end and connect to a different database server (SQL Server). Depending on what year your Access version is, you may have to replace immediate-if (IIf) statements and do some other translation, but it would give you a database that makes troubleshooting queries a little easier.
I guess only you can decide "why" you want to do this. If it ain't broke, then why fix it?
You can use source code control with Access if you want. I cannot say the debugging tools in Access are great, but then most Access applications tend not to have tons of code anyway (much of the forms etc. work without code). And the report writer has received some upgrades that make it even better; it is still one of the best around.
And Access 2010 has web-like controls and effects now, so your screens can have a far more modern look.
Even round buttons and shadow effects can be built using ONLY the tools inside of Access, so the new design options are quite extensive.
The same goes for the new navigation system you see down the left side; no third-party tools are needed for any of this.
Also, +1 for those here who pointed out that moving the data to MySQL or some such is NOT the same as choosing what you are going to develop the application with.
Access is more the development tool than just some tables; the tables can be sent off to darn near any system, like SQL Server, MySQL, etc.
The problem, question, and challenge is that of building the application part, with the code and logic.
Speaking of SQL Server, Access 2010 has baked-in support for the cloud edition of SQL Server, so Access works with SQL Azure. If you're looking for a cloud play, this setup works with Access.
Access also allows your tables to be moved up to the new Office 365. This is a great low-cost way to get into cloud computing, and the Office 365 setup allows Access to go into "off line" mode. This means your laptops can go out, run the Access desktop application, and when they find some Wi-Fi or get back to the office, they sync their data. This is a true automatic "replication" model, but it works without any coding on the part of the Access developer.
And if you have SharePoint, then your tables and "off line" mode work with that.
Last but not least, Access now supports web publishing of your database. This works with Office 365 or SharePoint.
This web publishing is true cloud computing with an unlimited number of users. The only real limits are the capacity of Microsoft's computer farm (and it is a really big one!).
Access forms, when web published, are converted into "zammel" (XAML) .NET forms. The Access code you write in the forms is converted to JavaScript, and in fact this code runs browser-side (so you are building true multi-tier applications). The table procedures you write inside of Access go up and run server-side, even on Office 365 (not even .NET developers can get code to run so easily on Office 365 servers!).
For those who have not seen the web ability: in the following video, I switch to running the Access application 100% in a web browser at the halfway point:
http://www.youtube.com/watch?v=AU4mH0jPntI
Such web applications built in Access don't require ActiveX or Silverlight, and as such they run fine on my iPad.
So, I am not really sure that one needs to get "caught up" in all of the new buzzwords.
But if you're looking to use Office 365 and publish web forms, then Access does this now.
And if you're looking to use the latest and greatest edition of SQL Azure that runs up in the cloud, then again Access can be used.
And if you're looking to use Access with SharePoint, which is really popular, then again Access can be used.
And if you want "cool" shaded buttons with "hover" effects, then the new Access designer has these kinds of choices.
So there are tons of neat-o, gee-whiz-bang things you can do with Access. Heck, you can even build custom ribbons in Access now!
However, if you have a few basic forms that work just fine now? Why not just stick with what works?
I vote for KISS.
No real need to get caught up in the latest fads, but if such is your cup of tea, Access does have lots of that "new stuff" to play with these days.
I am looking for an open source solution to store and monitor some application performances.
To be more precise, I use several Java components in the software I develop and I would like to gather performance statistics for each of these components in order to figure out on what I need to focus to keep fast processing.
The idea would be to send a message to a repository to store some timestamps (everytime a Java component starts or ends) and having a web interface to browse the timestamps, and do some analytics on top of them.
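As a rough illustration of the timestamp idea, here is a sketch of a tiny wrapper that records a start and end time around each component and keeps the samples for later upload to a repository. The `Sample` shape and the "parser" component name are invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Wrap each Java component's work in a timer and keep the start/end samples.
public class ComponentTimer {
    static class Sample {
        final String component;
        final long startNanos;
        final long endNanos;
        Sample(String component, long startNanos, long endNanos) {
            this.component = component;
            this.startNanos = startNanos;
            this.endNanos = endNanos;
        }
        long elapsedMillis() { return (endNanos - startNanos) / 1_000_000; }
    }

    private final List<Sample> samples = new ArrayList<>();

    // Run the body, recording a timestamp pair around it (even on exceptions).
    <T> T timed(String component, Supplier<T> body) {
        long start = System.nanoTime();
        try {
            return body.get();
        } finally {
            samples.add(new Sample(component, start, System.nanoTime()));
        }
    }

    List<Sample> samples() { return samples; }

    public static void main(String[] args) {
        ComponentTimer timer = new ComponentTimer();
        int result = timer.timed("parser", () -> 2 + 2);
        System.out.println(result + ", samples recorded: " + timer.samples().size());
    }
}
```

Shipping the recorded samples to a server and graphing them is exactly what the tools below take off your hands.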
These needs seem really basic, but unfortunately I haven't found anything on the web, probably because I don't know the right terminology for this kind of tool.
Could someone recommend such a tool?
Thanks in advance!
Adrien
What you described sounds like RRDtool, which stores time-series data. To access it from Java, there is java-rrd.
I also get the impression that you are looking for a whole solution instead of just a data back-end. If so, check out the following open-source cluster monitoring systems: Cacti, Ganglia, and Graphite. They all have web interfaces. Cacti and Ganglia have RRD-like back-ends, while Graphite has its own Whisper database, etc.
I have to do a class project for my data mining course. My topic will be mining Stack Overflow's data for trending topics.
So, I have downloaded the data from here, but the data set is so huge (posts.xml is 3 GB in size) that I cannot process it on my machine.
So, what do you suggest, is going for AWS for data processing a good option or not worth it?
I have no prior experience on AWS, so how can AWS help me with my school project? How would you have gone about it?
UPDATE 1
So, my data processing will be in 3 stages:
Convert XML (from so.com dump) to .ARFF (for weka jar),
Mine the data using algos in weka,
Convert the output to GraphML format which will be read by prefuse library for visualization.
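As a rough sketch of stage 1, emitting ARFF for a small extract is mostly string work. The two attributes below (`title`, `score`) are invented for illustration; in practice Weka's own `ArffSaver` would normally write the file.

```java
import java.util.ArrayList;
import java.util.List;

// Hand-rolled writer for Weka's ARFF text format: a header of @attribute
// declarations followed by a @data section, one comma-separated row per line.
public class ArffWriter {
    static String toArff(List<String[]> rows) {
        StringBuilder sb = new StringBuilder();
        sb.append("@relation posts\n");
        sb.append("@attribute title string\n");
        sb.append("@attribute score numeric\n");
        sb.append("@data\n");
        for (String[] row : rows) {
            // Quote the string attribute, escaping embedded single quotes.
            sb.append("'").append(row[0].replace("'", "\\'")).append("',")
              .append(row[1]).append("\n");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        List<String[]> rows = new ArrayList<>();
        rows.add(new String[]{"How do I parse XML?", "12"});
        System.out.print(toArff(rows));
    }
}
```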
So, where does AWS fit in here? I suppose there are two features in AWS which can help me:
EC2 and
Elastic MapReduce,
but I am not sure how MapReduce works and how I can use it in my project. Can I?
You can consider EC2 (the part of AWS you would be using for doing the actual computations) as nothing more than a way to rent computers programmatically or through a simple web interface. If you need a lot of machines and you intend to use them for a short period of time, then AWS is probably good for you. However, there's no magic bullet. You will still have to pick the right software to install on them, load the data either in EBS volumes or S3 and all the other boring details.
Also be advised that EC2 instances and storage are relatively expensive. Be prepared to pay 5-10x more than you would pay if you actually owned the machine/disks and used it for say 3 years.
Regarding your problem, I sincerely doubt that a modern computer is unable to process a 3-gigabyte XML file. In fact, I just indexed all of Stack Overflow's posts.xml into Solr on my workstation, and it all went swimmingly. Are you using a SAX-like parser? If not, that will help you more than all the cloud services combined.
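A SAX-style pass keeps memory flat because each element is handled as it streams by, instead of loading the whole 3 GB document at once. Here is a minimal sketch against the dump's `<row .../>` element shape (the sample XML string is made up for illustration):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

// Streaming (SAX) pass over a posts.xml-style dump: each <row> is handled
// as it arrives, so memory use stays flat regardless of file size.
public class PostCounter {
    static int countRows(String xml) throws Exception {
        final int[] count = {0};
        SAXParserFactory.newInstance().newSAXParser().parse(
            new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)),
            new DefaultHandler() {
                @Override
                public void startElement(String uri, String local,
                                         String qName, Attributes attrs) {
                    if ("row".equals(qName)) count[0]++;
                }
            });
        return count[0];
    }

    public static void main(String[] args) throws Exception {
        String sample = "<posts><row Id=\"1\" Score=\"5\"/><row Id=\"2\" Score=\"3\"/></posts>";
        System.out.println(countRows(sample)); // 2
    }
}
```

For the real file you would pass a `FileInputStream` instead of the in-memory sample, and extract the attributes you need inside `startElement`.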
Sounds like an interesting project, or at least a great excuse to get in touch with new technology -- I wish there had been stuff like that when I went to school.
In most cases AWS offers you a barebones server, so the obvious question is: have you decided how you want to process your data? E.g., do you just want to run a shell script on the XML files, or do you want to use Hadoop, etc.?
The beauty of AWS is that you can get all the capacity you need -- on demand. In your case, for example, you probably don't need multiple instances, just one beefy instance. And you don't have to pay for a root server for an entire month or even a week if you only need the server for a few hours.
If you let us know a little bit more on how you want to process the data, maybe we can help further.