How could one easily store some data in a simple Google App Engine (GAE) application? For example, a username or some address information that should still be available if the application is restarted or redeployed due to an update.
Is Datastore the way to go? Or what should I have a look at?
You can use either the Datastore or Cloud SQL. The Getting Started tutorial actually demonstrates how to use the Datastore, in case you haven't played with it at all.
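If it's just a username or an address, the Datastore alone is usually enough. A minimal sketch using the low-level Datastore API (the kind, key and property names here are only for illustration):

    import com.google.appengine.api.datastore.DatastoreService;
    import com.google.appengine.api.datastore.DatastoreServiceFactory;
    import com.google.appengine.api.datastore.Entity;
    import com.google.appengine.api.datastore.EntityNotFoundException;
    import com.google.appengine.api.datastore.KeyFactory;

    public class UserInfoStorage {
        public static void store() {
            DatastoreService ds = DatastoreServiceFactory.getDatastoreService();

            // Store a simple record; the Datastore is persistent, so the data
            // survives restarts and redeployments.
            Entity user = new Entity("UserInfo", "alice"); // kind + key name
            user.setProperty("username", "alice");
            user.setProperty("address", "Some Street 1");
            ds.put(user);
        }

        public static String load() {
            DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
            try {
                Entity loaded = ds.get(KeyFactory.createKey("UserInfo", "alice"));
                return (String) loaded.getProperty("address");
            } catch (EntityNotFoundException e) {
                return null; // nothing stored yet
            }
        }
    }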
From your question I am assuming the credentials are required for connecting to other services, rather than individual credentials for lots of different users.
So on that basis if you need to change them frequently then consider the datastore.
If infrequently, and you don't mind updating your code base, then leave them in the filesystem.
Other things to consider: how sensitive are they, and who can see them?
You may have more people with access to the datastore than can deploy/download the code base (assuming you left that capability turned on), which may also be a deciding factor.
I made a web-based application using the Java language, and I would like to monitor its performance periodically (e.g. response time). I also want to display this information on the homepage of my application. Is that possible? Could you give me any ideas about how this can be done?
Thanks.
You can take a look at stagemonitor. It is an open source Java web application performance monitor. It captures response time metrics, JVM metrics, request details (including a call stack captured by the request profiler) and more. The overhead is very low.
Optionally, you can use the great time-series database Graphite with it to store a long history of data points that you can look at with fancy dashboards.
Take a look at the GitHub page to see example screenshots, feature descriptions and documentation.
Note: I am the developer of stagemonitor
Depending on your environment, I would use a cron job or task that measures the response time by requesting your app with something like HttpClient. Then drop that information into a database table accessible by your app.
The answer here shows the simplest way to measure the time of a call: How do I time a method's execution in Java?
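A minimal sketch of such a probe, using Java 11's built-in java.net.http.HttpClient (the URL is a placeholder):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ResponseTimeProbe {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://your-app.example.com/")) // placeholder URL
                    .build();

            long start = System.nanoTime();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;

            // Drop this measurement into a DB table your app can read
            // and render on its homepage.
            System.out.println("HTTP " + response.statusCode() + " in " + elapsedMs + " ms");
        }
    }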
Why not check out Munin monitoring? The website says
Munin the monitoring tool surveys all your computers and remembers
what it saw. It presents all the information in graphs through a web
interface. Its emphasis is on plug and play capabilities. After
completing an installation, a high number of monitoring plugins will be
playing with no more effort.
SLAC at Stanford University also keeps a large, quite well sorted list of various solutions for network monitoring, among other things: SLAC's list of Network Monitoring Tools. Check, for instance, "Public domain or free network monitoring tools".
You can also consider creating your own custom web application monitor. To do so, use the proxy pattern and create a concrete monitor. By using the Spring framework you can easily switch the monitor on and off at runtime, without redeployment or restart of the web application. Furthermore, you can create a lot of different specific monitors yourself and control exactly what is being monitored. This gives you a maximum of flexibility, but requires a bit of work.
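A minimal sketch of such a proxy-based monitor using plain JDK dynamic proxies (the BlogService interface is hypothetical; with Spring you would typically register a similar interceptor as AOP advice instead):

    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.Method;
    import java.lang.reflect.Proxy;

    // Hypothetical service interface, just to have something to wrap.
    interface BlogService {
        String loadPost(long id);
    }

    // Generic monitor: wraps any interface and logs call durations.
    class MonitoringProxy implements InvocationHandler {
        private final Object target;

        private MonitoringProxy(Object target) {
            this.target = target;
        }

        @SuppressWarnings("unchecked")
        static <T> T monitor(T target, Class<T> iface) {
            return (T) Proxy.newProxyInstance(
                    iface.getClassLoader(),
                    new Class<?>[] { iface },
                    new MonitoringProxy(target));
        }

        @Override
        public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
            long start = System.nanoTime();
            try {
                return method.invoke(target, args); // delegate to the real object
            } finally {
                long ms = (System.nanoTime() - start) / 1_000_000;
                System.out.println(method.getName() + " took " + ms + " ms");
            }
        }
    }

A call like MonitoringProxy.monitor(realService, BlogService.class) then returns a drop-in replacement that records timings, which you could store and display on the homepage.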
It is possible.
The clearest way to go about it, and to get true numbers, is to simulate a client that performs some sort of activity mimicking real usage, and then have that client exercise the website periodically.
This presupposes that your website has a means to accept inputs that do not impact the real back end business. Crafting such interfaces requires some thought, but is not beyond the ability of a person who could put together the web site in the first place. The key points are to attempt to emulate as much using the real website as possible, but guard against real business impact. Basically it is designing for a special user (the tester).
So you might have a special user such that, when it is logged in, all purchases are bound to a special account that is filtered out appropriately so that no payment is demanded and no goods are shipped. Provided the systems you integrate with all share an understanding of this live testing account, you can test alongside real production traffic after deployment.
Such a structure provides a huge benefit. You get performance of the real, live running system. Performance tends to change over time, and is subject to the environment. By fetching your performance numbers on the live system, in the same environment, you get a much better view of what real users might be encountering. Also, you can differentiate and track performance for different activities.
Yes, it is a lot more to design and set up; however, if you are in it for the long run, the benefits are huge.
I guess JavaMelody is the most appropriate solution for you. It can be built into a Java application, and because it runs inside the app it can monitor the app's internals. This way it is possible to get much more specific parameters for your Java app than via external monitoring. In addition, it allows you to display some statistics on your app's homepage. Moreover, you can embed JavaMelody's graphs in the app, which makes monitoring considerably easier.
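If it helps, JavaMelody is typically wired into a servlet container via a filter in web.xml; a minimal sketch (the filter name is arbitrary, and the report page is then served under /monitoring by default):

    <filter>
        <filter-name>javamelody</filter-name>
        <filter-class>net.bull.javamelody.MonitoringFilter</filter-class>
    </filter>
    <filter-mapping>
        <filter-name>javamelody</filter-name>
        <url-pattern>/*</url-pattern>
    </filter-mapping>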
Take a look at the detailed overview of JavaMelody: http://cases.azoft.com/enterprise-system-monitoring-solutions-business-apps/
I am working on a J2EE/MySQL-based social networking platform which allows users to write blogs, ask questions, create wiki pages, post challenges, set custom quizzes, etc.
Since this kind of platform generates lots of data, I need to make sure the site performs well.
I have decided to use Memcached to completely cache the Blog, Wiki and Question data from the relational DB.
This will make sure that whenever a user hits these pages, the data is delivered from Memcached, so the DB load is reduced.
The DB will be hit only when a new record is added, and in some rare cases.
Further to this, following is where it gets complicated:
System also has a permission system.
Each user has a different kind of permission on the data, depending on whether the user created it and on some other rules.
Caching the Blog, Wiki and Question data with permissions already applied would mean replicating the data with session-and-user-id as the key, which would need too much memory.
So I have decided to fetch the Blog, Wiki and Question data from Memcached and the permission data from the DB, and then apply the permissions to the cached data at run time.
I am worried that applying permissions to such a large amount of data at run time might take so long that it overshadows the advantage of the Memcached cluster.
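To make the plan concrete, here is a minimal sketch of the intended design, assuming the spymemcached client and hypothetical domain types (Blog, PermissionDao): content is cached once per item, and per-user permissions are fetched from the DB and applied at request time:

    import java.io.Serializable;
    import java.util.List;
    import java.util.stream.Collectors;

    import net.spy.memcached.MemcachedClient;

    // Hypothetical domain type; must be Serializable to live in Memcached.
    class Blog implements Serializable {
        final long id;
        final String title;
        Blog(long id, String title) { this.id = id; this.title = title; }
    }

    // Hypothetical DAO that reads only the permission decision from the DB.
    interface PermissionDao {
        boolean canRead(long userId, long blogId);
    }

    class BlogReader {
        private final MemcachedClient cache;     // one cached copy per blog
        private final PermissionDao permissions; // checked per request

        BlogReader(MemcachedClient cache, PermissionDao permissions) {
            this.cache = cache;
            this.permissions = permissions;
        }

        // Content comes from Memcached (key "blog:<id>", never per user or
        // session); permissions come from the DB and are applied at run time.
        List<Blog> readBlogs(long userId, List<Long> blogIds) {
            return blogIds.stream()
                    .map(id -> (Blog) cache.get("blog:" + id)) // assumes cache is warm
                    .filter(b -> b != null && permissions.canRead(userId, b.id))
                    .collect(Collectors.toList());
        }
    }

Whether the permission check stays cheap depends on it being an indexed lookup; batching it per page (one query for all ids) rather than per item would avoid N round trips.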
Please let me know your inputs.
After I finish developing an app using Google App Engine, how easy will it be to distribute if I ever need to do so without App Engine? The only thing I've thought of is that GAE has some proprietary API for using the datastore. So, if I need to deliver my app as a .war file (for example) which would not be deployed with App Engine, all I would need to do is first refactor any code which is getting/storing data, before building the .war, right?
I don't know what the standard way is to deliver a finished web app product - I've only ever used GAE, but I'm starting a project now for which the requirements for final deliverables are unsure at this time.
So I'm wondering, if I develop for GAE, how easy will it be to convert?
Also, is there anything I can do or consider while writing for GAE to optimize the project for whatever packaging options I may have in the end?
As long as your app does not have any elements that are dependent on Google App Engine, you should be able to deploy it anywhere that can support a Tomcat or GlassFish server. Sometimes this requires that you install the server manually, so you must read up on that. There are lots of YouTube videos that help with this subject; just try to break your issue down into the smallest possible steps.
I also suggest using frameworks like Spring and Hibernate to help lessen the headaches. They will take a while to understand, but are worth it if you want to be programming for the rest of your life.
I disagree with Pbrain19.
The GAE datastore is quite different from SQL, and has its own interesting eventually consistent behavior for transactions. That means for anything that requires strong consistency or transactions, you're going to have to structure your data with appropriate ancestors. This is going to have a pretty big impact on your code.
You're also going to need to denormalize your data structures (compared to SQL) to minimize datastore costs and improve performance. There are also many queries you can do in SQL that you can't do in GAE; you'd have to structure your app to work around this.
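For instance, a strongly consistent query against the GAE datastore has to be an ancestor query, which forces a parent/child structure into the data model. A minimal sketch with the low-level API (kind and key names are illustrative):

    import com.google.appengine.api.datastore.*;

    public class ConsistentPosts {
        static void demo() {
            DatastoreService ds = DatastoreServiceFactory.getDatastoreService();

            // Group a user's posts under the user's key: one entity group.
            Key userKey = KeyFactory.createKey("User", "alice");
            Entity post = new Entity("Post", userKey); // parent key = ancestor
            post.setProperty("title", "Hello");
            ds.put(post);

            // Ancestor queries are strongly consistent; global queries are
            // only eventually consistent.
            Query q = new Query("Post").setAncestor(userKey);
            for (Entity e : ds.prepare(q).asIterable()) {
                System.out.println(e.getProperty("title"));
            }
        }
    }

There is no SQL equivalent of this constraint, which is why the refactoring cuts deeper than swapping out a persistence API.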
Once you do any of this, you'll probably have a significant chunk of the app to rebuild.
You also wouldn't want to use Spring because it'll make your instance start up time pretty painful.
So unless it's a very simple hello world app, the refactoring will not be trivial - particularly once you begin using ancestors in any of your data modelling.
I recommend not trying to design your app to be portable if you're using the GAE datastore.
You'll have better luck making a portable app if you're using Cloud SQL.
So I'm trying to finally grasp how cloud-based, enterprise applications work, and what their architectures typically look like. Say I use a cloud provider like Amazon. I assume (please correct me if I'm wrong) that I would be paying for 1+ virtual machines that would house a stack of software per my application's needs.
I'm confused with how frameworks like jclouds or Terracotta fit into the picture. jclouds advertises itself as "an open source library that helps you get started in the cloud", and lists off a number of huge features that don't mean much to me without meaningful examples. Terracotta boasts itself as a high-scaling clustering framework. Why would I need to use something like jclouds? What specific, concrete scenarios would I use it for?
Again, if I'm using Amazon as my cloud provider, wouldn't they already be highly-scaled? Why would I need Terracotta in the cloud?
Taking an app "into the cloud" has at least two aspects.
Firstly you have to manage the nodes: deploy your app on all nodes, monitor them, start new nodes to actually scale, detect and replace failed nodes, realize some update scenario for new app versions, and so on. Usually this can't be done reasonably without tools. JClouds fits in here, since it covers some of these points.
Secondly, your app itself must be "cloud ready". You can't take an arbitrary app, put it on multiple nodes and expect it to scale well. The main point here is to define how to scale access to the data shared between all nodes (SQL database, NoSQL datastore, potentially session replication, ...). Usually you use some existing framework/app server/datastore to manage your shared state. Terracotta is one of them; basically it provides an efficient way to share memory between JVM instances on multiple nodes.
So you have your Linux machine (virtual instance) and it is working OK. But suddenly you need to scale: you need to fire up more instances as demand goes up and shut them down as it drops. What you can do is use Amazon's API to start EC2 instances and provision them with everything you can do from the administrative console (and even more). But using Amazon's APIs ties your hands to Amazon. With frameworks such as jclouds, what you do is something like this (this is pseudocode):
CloudProvider provider = CloudProvider.getProvider("Amazon");
provider.authenticate("username", "password");
provider.startInstance("some option", numOfInstances);
So say you have to scale and you are deployed on Amazon using jclouds: you would use something like the above. But suddenly you decide to move from Amazon to Rackspace, so instead of re-engineering all the provisioning logic of your app you just change the
CloudProvider provider = CloudProvider.getProvider("Amazon");
to something like
CloudProvider provider = CloudProvider.getProvider("RackSpace");
and continue using the authenticate and startInstance methods; the library takes care of "translating" these calls into the specific methods the given cloud provider supports. Basically it is a way of abstracting away the code which has to deal with the underlying cloud provider: you shouldn't care who it is, as long as it provides the service, right?
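In real jclouds code, the portable entry point is ContextBuilder together with the ComputeService abstraction. A rough sketch, assuming the jclouds compute API (provider id, group name and credentials are placeholders; moving to Rackspace would mean swapping the provider id and credentials):

    import java.util.Set;

    import org.jclouds.ContextBuilder;
    import org.jclouds.compute.ComputeService;
    import org.jclouds.compute.ComputeServiceContext;
    import org.jclouds.compute.domain.NodeMetadata;

    public class Provisioner {
        public static void main(String[] args) throws Exception {
            // "aws-ec2" is the provider id; e.g. "rackspace-cloudservers-us"
            // (with matching credentials) would target Rackspace instead.
            ComputeServiceContext context = ContextBuilder.newBuilder("aws-ec2")
                    .credentials("accessKeyId", "secretAccessKey")
                    .buildView(ComputeServiceContext.class);
            ComputeService compute = context.getComputeService();

            // Start two instances in a group named "web".
            Set<? extends NodeMetadata> nodes = compute.createNodesInGroup("web", 2);
            for (NodeMetadata node : nodes) {
                System.out.println("Started " + node.getId());
            }

            context.close();
        }
    }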
I've settled on the Play framework for the rewrite of our intranet portal. Our portal contains a lot of loosely related stuff so I'm looking for advice on if or how to break it into multiple Play applications.
What are the consequences of making it multiple applications? Is single sign-on still possible? How is access control affected? Am I likely to have to duplicate a lot of code/configuration between them? What else should I consider when deciding where to split things apart?
First of all I would think about modules, because otherwise you must start a lot of applications, which increases memory consumption. This only stops mattering if your site is so heavily loaded that you need multiple servers anyway.
Is single sign-on still possible? I would say yes. You can store the data in a cookie, but you must make sure the other applications' URLs can read it.
Am I likely to have to duplicate a lot of code/configuration between them? Well, if you use similar databases this would be another drawback compared with modules, but I wouldn't worry too much about one file of configuration. Code needed in more than one application can easily be shared via JAR files as a library, or you can use modules for this.
I've since discovered that being stateless on the server side means Play uses HMAC hashes stored in cookies along with the username to keep track of sessions. If multiple Play applications are to be authenticated against the same set of credentials (OpenLDAP in my case), they just need to have the same application.secret configured in conf/application.conf in order to achieve single sign-on.
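For reference, a sketch of the relevant setting in each application (the value is a placeholder; it just has to be the same long random string everywhere):

    # conf/application.conf — identical in every Play application
    application.secret=the-same-long-random-secret-in-every-application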