My Apache Wicket web application uses JDO for its data persistence in GAE/J.
On application start-up, the home page enqueues a task before it is shown (with zero delay to its default ETA). This task causes the construction of a new Wicket web page so that the JVM's singleton Persistence Manager Factory (PMF) instance can be created for use by the application during its lifetime.
I have set the application to use concurrent requests by adding
<threadsafe>true</threadsafe>
to the application's appengine-web.xml file.
Despite this, after a single request to visit the application's home page, I get two application instances: one created by the home page visit request, and the other created by the execution of the enqueued task (about 6 to 7 seconds later).
I could try to solve this problem by delaying the execution of the enqueued task (by around 10 seconds, perhaps?), but why should I need to try this when I have enabled concurrent requests? Should the first GAE/J application instance not be able to handle two requests close together without causing a second instance to be brought forth? I presume that I am doing something wrong, but what is it?
I have searched Stack Overflow's set of tags ([google-app-engine] [java]), and the deprecated group "Google App Engine for Java" too, but have found nothing relevant to my question.
I would appreciate any pointers.
If you want the task to use an existing instance, you can set the X-AppEngine-FailFast header, which according to the GAE docs:
This header instructs the Scheduler to immediately fail the request if an existing instance is not available. The Task Queue will retry and back-off until an existing instance becomes available to service the request
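If you enqueue the task with the Java Task Queue API, a minimal sketch of setting that header might look like this (the task URL below is a placeholder for your own handler):

import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskOptions;

// Ask the scheduler to fail fast instead of spinning up a new instance
// if no existing instance is free to take the task.
Queue queue = QueueFactory.getDefaultQueue();
queue.add(TaskOptions.Builder
        .withUrl("/tasks/initPmf")                     // hypothetical task handler URL
        .header("X-AppEngine-FailFast", "true"));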
It's worth checking out the Managing Your App's Resource Usage document for performance and tuning techniques.
Here are a few points to help you understand my application:
I have a traditional Spring web application running on WildFly.
In my application I have view controllers and other controllers.
I have a web.xml file and a JBoss XML file to configure the context path.
Requests reach the controllers either as AJAX requests or as simple GET requests from the browser.
I want to protect my application from a possible 'Slow HTTP POST' vulnerability. To that end, I have decided that if any request takes more than a specified amount of time, my application should release that connection and throw a request time-out exception.
My question is:
How can I implement a request timeout in a traditional Spring MVC application?
Note: you are most welcome to suggest any other solution that prevents the 'slow HTTP POST' vulnerability.
You could delegate each controller invocation to a separate thread and then monitor that thread if/until it breaches your timeout condition. Java's ExecutorService already supports something much like this: submit the work as a task and wait on its Future with a timeout (awaitTermination() offers a similar timed wait when shutting the pool down).
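A minimal sketch of that idea (the pool size and the way you surface the timeout are up to you; the Callable passed in stands for your real controller/service work):

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimeoutDelegate {

    private final ExecutorService pool = Executors.newFixedThreadPool(50);

    // Runs the given work on a pool thread and waits at most timeoutMs for it.
    public <T> T callWithTimeout(Callable<T> work, long timeoutMs) throws Exception {
        Future<T> result = pool.submit(work);
        try {
            return result.get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            result.cancel(true);   // interrupt the slow worker
            throw e;               // map this to a request time-out status in the controller
        }
    }
}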
Using Spring's support for asynchronous controllers (or, more generally, implementing non-blocking services) would formalise this approach, since (a) it would force you to delegate your controller invocations to a separate thread pool and (b) it would encourage you to manage the resources available in that thread pool safely. More details on this approach here and here.
But however you perform this delegation, once each controller invocation is running in a separate thread (separate from the original request thread, I mean), you can control how long that thread is allowed to run, and if it exceeds some configured timeout you can respond with a relevant HTTP status.
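With Spring MVC's async support this becomes mostly configuration; a sketch, assuming Spring 4.x with servlet 3.0 async enabled (the timeout value, URL and view name are placeholders):

import java.util.concurrent.Callable;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.servlet.config.annotation.AsyncSupportConfigurer;
import org.springframework.web.servlet.config.annotation.EnableWebMvc;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurerAdapter;

@Configuration
@EnableWebMvc
class WebConfig extends WebMvcConfigurerAdapter {
    @Override
    public void configureAsyncSupport(AsyncSupportConfigurer configurer) {
        configurer.setDefaultTimeout(10000);   // fail async requests after 10 seconds
    }
}

@Controller
class UploadController {
    @RequestMapping("/upload")
    public Callable<String> upload() {
        // Runs on a separate thread; the container thread is released immediately.
        return () -> {
            // ... potentially slow work here ...
            return "uploadResult";   // hypothetical view name
        };
    }
}

When the timeout fires, Spring completes the request with an error status, which you can customise by registering a CallableProcessingInterceptor on the same configurer.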
My app runs on multiple EC2 instances to ensure high availability. The default log level is INFO for the app. But sometimes for debugging purposes, I want to update the log level to DEBUG. The request to update the log level passes through the ElasticLoadBalancer which delegates the request to any one of the multiple EC2 instances. The log level for the app running on that instance is updated but apps on the other instances will still log at level INFO. I want all the apps to log at DEBUG level.
I am using Spring, SLF4J and Logback.
Even if I somehow centralize the log level information, so that the request updates the level in one central location, something still has to notify the apps on all instances about the change, because the apps will never poll for the log level themselves.
If you want an AWS solution, you can use SNS.
Once your app starts up, register its endpoint (using its private IP) with an SNS topic for HTTP notifications.
Then, instead of changing the log level through the load balancer, you publish an SNS message and it is delivered to all registered endpoints.
Keep in mind to deregister the HTTP endpoint from SNS once the app is terminated.
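A rough sketch with the AWS SDK for Java (the topic ARN, port and path are assumptions, and SNS will also send a subscription-confirmation request that your endpoint has to confirm before it receives notifications):

import com.amazonaws.services.sns.AmazonSNSClient;
import com.amazonaws.services.sns.model.SubscribeRequest;
import com.amazonaws.util.EC2MetadataUtils;

AmazonSNSClient sns = new AmazonSNSClient();   // picks up the instance profile credentials

// On start-up: subscribe this instance's (hypothetical) log-level endpoint to the topic.
String topicArn = "arn:aws:sns:us-east-1:123456789012:log-level";   // placeholder ARN
String endpoint = "http://" + EC2MetadataUtils.getPrivateIpAddress() + ":8080/admin/loglevel";
sns.subscribe(new SubscribeRequest(topicArn, "http", endpoint));

// Later, to switch every instance to DEBUG, publish once to the topic:
sns.publish(topicArn, "DEBUG");

// On shutdown: unsubscribe using the SubscriptionArn returned by subscribe().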
You might want to take a look at Zookeeper:
ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services.
It's quite easy to set up and start small. The app running on your EC2 nodes just needs to implement a "listener/watcher" interface, which notifies your app when some configuration changes (e.g. you decide to set the global log level to DEBUG).
Based on this configuration change, all of your nodes will update their local log level without you having to come up with all kinds of ELB-bypassing manual REST calls to tell each node to update; that is exactly what ZooKeeper solves:
Each time they are implemented there is a lot of work that goes into fixing the bugs and race conditions that are inevitable. Because of the difficulty of implementing these kinds of services, applications initially usually skimp on them, which make them brittle in the presence of change and difficult to manage. Even when done correctly, different implementations of these services lead to management complexity when the applications are deployed.
Once this works for you, you can add additional configuration to ZooKeeper if needed, limiting the amount of configuration you need to package into the deployed apps or copy alongside them.
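A minimal sketch of such a watcher, assuming a znode /config/loglevel that holds the level name, Logback as the backend, and a placeholder connect string:

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;
import org.slf4j.LoggerFactory;
import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;

public class LogLevelWatcher implements Watcher {

    private static final String NODE = "/config/loglevel";   // hypothetical znode
    private final ZooKeeper zk;

    public LogLevelWatcher() throws Exception {
        zk = new ZooKeeper("zk-host:2181", 3000, this);       // placeholder connect string
        apply();                                              // pick up the current value at start-up
    }

    @Override
    public void process(WatchedEvent event) {
        try {
            apply();                                          // re-read the value and re-arm the watch
        } catch (Exception e) {
            LoggerFactory.getLogger(getClass()).warn("could not refresh log level", e);
        }
    }

    private void apply() throws Exception {
        byte[] data = zk.getData(NODE, this, null);           // passing 'this' re-registers the watch
        Logger root = (Logger) LoggerFactory.getLogger(org.slf4j.Logger.ROOT_LOGGER_NAME);
        root.setLevel(Level.toLevel(new String(data, "UTF-8")));
    }
}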
Amazon's Remote Management (Run Command) allows you to run commands on your instances. You just need a simple script to change the log level.
But it is not easy to set up, and granting all the needed IAM rights takes some work.
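For illustration only, a sketch using the EC2 Systems Manager SendCommand API from the Java SDK (the tag, the script path and the use of the AWS-RunShellScript document are assumptions; the instances also need the SSM agent and an instance profile that allows the call):

import java.util.Arrays;
import java.util.Collections;
import com.amazonaws.services.simplesystemsmanagement.AWSSimpleSystemsManagement;
import com.amazonaws.services.simplesystemsmanagement.AWSSimpleSystemsManagementClientBuilder;
import com.amazonaws.services.simplesystemsmanagement.model.SendCommandRequest;
import com.amazonaws.services.simplesystemsmanagement.model.Target;

AWSSimpleSystemsManagement ssm = AWSSimpleSystemsManagementClientBuilder.defaultClient();

// Run a hypothetical log-level script on every instance tagged Application=xxxxx.
ssm.sendCommand(new SendCommandRequest()
        .withDocumentName("AWS-RunShellScript")
        .withTargets(new Target().withKey("tag:Application").withValues("xxxxx"))
        .withParameters(Collections.singletonMap(
                "commands", Arrays.asList("/opt/app/bin/set-log-level.sh DEBUG"))));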
EC2 instances can carry tags. Some tags exist by default, and you can create your own. So if we add a tag that identifies all the instances on which the app is currently running, we can very easily fetch those instances' IP addresses.
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.InstanceProfileCredentialsProvider;
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2Client;
import com.amazonaws.services.ec2.model.DescribeInstancesRequest;
import com.amazonaws.services.ec2.model.Filter;

// Filters identifying the instances on which the app is currently running
DescribeInstancesRequest request = new DescribeInstancesRequest();
Filter filter1 = new Filter("tag:Environment", Collections.singletonList("Sandbox"));
Filter filter2 = new Filter("tag:Application", Collections.singletonList("xxxxx"));
Filter filter3 = new Filter("tag:Platform", Collections.singletonList("xxxx"));

// Credentials are taken from the instance profile of the calling machine
AWSCredentials credentials = new InstanceProfileCredentialsProvider().getCredentials();
AmazonEC2 ec2Client = new AmazonEC2Client(credentials);

// Collect the private IP of every matching instance, then notify each one directly
List<String> privateIps = new ArrayList<>();
ec2Client.describeInstances(request.withFilters(filter1, filter2, filter3))
        .getReservations()
        .forEach(reservation -> reservation
                .getInstances()
                .forEach(instance -> privateIps.add(instance.getPrivateIpAddress())));
for (String privateIp : privateIps) {
    hitTheInstance(privateIp);
}
Here, I have used three tags to filter the instances.
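hitTheInstance() is left open above; one possible sketch, assuming each app exposes a hypothetical /admin/loglevel endpoint on port 8080 that switches Logback's root logger:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

static void hitTheInstance(String privateIp) throws Exception {
    // POST the desired level straight to the instance, bypassing the ELB.
    URL url = new URL("http://" + privateIp + ":8080/admin/loglevel");   // hypothetical endpoint
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("POST");
    conn.setDoOutput(true);
    try (OutputStream out = conn.getOutputStream()) {
        out.write("DEBUG".getBytes("UTF-8"));
    }
    conn.getResponseCode();   // force the request to complete
    conn.disconnect();
}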
We run several spring batch jobs within tomcat in the same web application that serves up our UI. Lately we have been adding many more jobs and we are noticing that when we patch our app, several jobs may get stuck in a STARTING or STARTED status. Many of those jobs ensure that another job is not running before they start up, so this means after we patch the server, some of our jobs are broken until we manually run SQL to update the statuses of the jobs to ABANDONED or STOPPED.
I have read here that JobScope and StepScope jobs don't play nicely with shutting down.
That article suggests not using JobScope or StepScope but I can't help but think that this is a solved problem where people must be doing something when the application exits to prevent this problem.
Are there some best practices for handling this scenario? What are you doing in your applications?
We are using spring-batch version 3.0.3.RELEASE
I will give you an idea of how to solve this scenario; it is not necessarily a Spring Batch-specific solution.
Every time I need to add jobs to an application, I do the following:
Create a table to control the jobs (queue, priority, status, etc.)
Create a JobController class to manage all jobs
Each job has a status: R (running), F (finished), Q (queued); you can add more as needed, such as aborted or cancelled (the jobs themselves maintain these statuses)
The JobController must be loaded only once; you can define it as a Spring bean for this
Add a boolean attribute to JobController that records whether the jobs have already been checked after start-up, and initialise it to false
Check whether there are jobs with status R, which means they were running when the server last stopped; update every job with status R to Q and increase its priority so it gets executed first after the server restarts. This check sits inside the if on that boolean attribute; after the check, set the attribute to true
That way, the first time you call the JobController after a server crash, any unfinished jobs can be reset to a status in which they will be executed again. And this check happens only once, since you are guarding it with that boolean attribute.
One thing you should be careful about is job priority: if you manage it wrong, you may run into a starvation problem.
You can easily adapt this solution to spring-batch.
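In Spring Batch terms, that "first call after a restart" check could mark the stale executions ABANDONED for you instead of doing it by hand in SQL. A rough sketch against the 3.0.x API (the bean wiring, the choice of ABANDONED over STOPPED, and the assumption that no other node is still legitimately running those executions are yours to verify):

import java.util.Date;
import org.springframework.batch.core.BatchStatus;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.explore.JobExplorer;
import org.springframework.batch.core.repository.JobRepository;

public class StaleExecutionCleaner {

    private final JobExplorer jobExplorer;
    private final JobRepository jobRepository;

    public StaleExecutionCleaner(JobExplorer jobExplorer, JobRepository jobRepository) {
        this.jobExplorer = jobExplorer;
        this.jobRepository = jobRepository;
    }

    // Call once at application start-up, before any new jobs are launched.
    public void abandonStaleExecutions() {
        for (String jobName : jobExplorer.getJobNames()) {
            for (JobExecution execution : jobExplorer.findRunningJobExecutions(jobName)) {
                for (StepExecution step : execution.getStepExecutions()) {
                    if (step.getStatus().isRunning()) {
                        step.setStatus(BatchStatus.ABANDONED);
                        step.setEndTime(new Date());
                        jobRepository.update(step);
                    }
                }
                execution.setStatus(BatchStatus.ABANDONED);
                execution.setEndTime(new Date());
                jobRepository.update(execution);
            }
        }
    }
}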
Hope it helps.
This link says that earlier versions of Tomcat (before 7.0.54) "renews its threads" through ThreadPoolExecutor.run().
Why doesn't the init() method of contained Servlets seem to get called again?
A Servlet is initialized only once, either at web application startup or upon first use.
The same instance will then be used to serve all incoming requests, if necessary even multiple requests at the same time (unless you use the deprecated option to synchronize access, but even then there will be just a single instance, and a queue of requests for it).
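A small illustration of that lifecycle (the names are arbitrary): the container creates one instance, calls init() once, and then serves every request from that same instance on whatever worker thread is available, regardless of how the container recycles its threads:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class LifecycleDemoServlet extends HttpServlet {

    @Override
    public void init() throws ServletException {
        // Runs once per servlet instance, not once per thread or per request.
        log("init() called");
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Runs on whatever worker thread the container assigns to this request.
        resp.getWriter().println("served by thread " + Thread.currentThread().getName());
    }
}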
I've got an application leaking Java heap at a decent rate (with 400 users, only 25% is free after 2 hours... after logoff all memory is restored), and we've identified the items causing the memory leak as Strings placed in session that appear to be generated by Portal itself. The values are the encoded Portal URIs (very long encoded strings ... usually sized around 19kb), and the keys seem to be seven (7) randomly generated characters prefixed by RES# (for example, RES#NhhEY37).
We've stepped through the application using session tracing and snapping off heapdumps which has resulted in determining that there is one of these objects created and added to session on almost every page ... in fact, it seems like it is on each page that submits data (which is most pages). So, it's either 1:1 with pages in general, or 1:1 with forms.
Has anyone encountered a similar problem as this? We are opening a ticket with IBM, but wanted to ask this community as well. Thanks in advance!
Could it be the portlet cache? You could have servlet caching activated and declare a long portlet expiry time. Quoting from the techjournal:
Portlets can advertise their ability to be cached in the fragment cache by setting their expiry time in their portlet.xml descriptor (see Portlet descriptor example)
<!-- Expiration value is in seconds, -1 = no time limit, 0 = deactivated -->
<expiration-cache>3600</expiration-cache> <!-- 1 hour cache -->
To use the fragment caching functions, servlet caching needs to be activated in the Web Container section of the WebSphere Application Server administrative console (see Portlet descriptor example). WebSphere Application Server also provides a cache monitor enterprise application (CacheMonitor.ear), which is very useful for visualizing the contents of the fragment cache.
Update
Do you have portlets that set EXPIRATION_CACHE? Quote:
Modifying the local cache at runtime
For standard portlets, the portlet window can modify the expiration time at runtime by setting the EXPIRATION_CACHE property in the RenderResponse, as follows:
// cache this portlet's output for 3000 seconds
renderResponse.setProperty(
        PortletResponse.EXPIRATION_CACHE,
        (new Integer(3000)).toString());
Note that for me the value is a bit counter-intuitive, -1 means never expire, 0 means don't cache.
The actual issue turned out to be a feature working as designed within Portal: specifically, Portal's action protection, which prevents the same action from being submitted twice while keeping the portal's navigational ability. There is a cache that retains the action results for every successful action and uses them to compare against and reject duplicates.
The issue for us was the fact that we required "longer than normal" user sessions (60+ minutes) and with 1,000+ concurrent users, we leaked out on this protection mechanism after just a couple hours.
IBM recommended that we just shut off the cache entirely using the following Portal configuration property:
wps.multiple.action.execution = true
This allows double submits, which may or may not harm business functionality. However, our internal Portal framework already contained a mechanism to prevent double submits, so this was not an issue for us.
At our request, IBM did come back with a patch for this issue which makes the cache customizable, that is, it lets you configure the number of action results stored in the cache for each user; thus you can leverage Portal's mechanism again at a reduced session overhead. Those Portal configuration settings were:
wps.multiple.action.cache.bound.enabled = true
wps.multiple.action.cache.key.maxsize = 40
wps.multiple.action.cache.value.maxsize = 10
You'll need to contact IBM about this patch as it is not currently in a released fixpack.
Does your WebSphere Portal Server have the latest fix pack installed?
http://www-01.ibm.com/support/docview.wss?uid=swg24024380&rs=0&cs=utf-8&context=SSHRKX&dc=D420&loc=en_US&lang=en&cc=US
You may also be interested in the following discussion:
http://www.ibm.com/developerworks/forums/thread.jspa?messageID=14427700&tstart=0
Update:
Just throwing some blindfolded darts.
"RES#" to me sounds like resource.
From the forum stack trace, "DefaultActionResultManager.storeDocument" indicates it is storing the document.
Hence it looks like your resources (generated portal pages) are being cached. Check if there is some parameter that can limit the resource cache size.
Also, in another test, set the cache expiration to 5 minutes instead of an hour.