Tomcat creating new thread for same session - java

I have a webapp that uses Stripes and the Apache Shiro library for security.
On my local Windows Tomcat 6.0.33 installation everything works fine. However, when I run the app on Tomcat 6.0.16 on Linux at my host DailyRazor, I can see that Tomcat periodically spawns a new thread for the same user/session, so the user loses their credentials and is asked to log in again.
I also noticed this on my dev box when running under Jetty.
I don't think it's an inactivity timeout issue, as the hits I am giving the webapp are sequential. Is there something in the Tomcat config that may be different, apart from the different minor versions?
Alternatively, is there an easy way to debug the session info (as it's not appearing in my URLs)?

Just to make it clearer than it would be in a comment: each HTTP request is handled by an arbitrary thread. Tomcat (and other app servers) use a pool of threads: they pick a thread from the pool, execute the request, and give the thread back to the pool.
The HTTP session is completely orthogonal to the threading: several requests from the same session may be handled by different threads, and a single thread executes requests from several sessions. There are typically many more parallel sessions than threads in the pool. Finally, you may very well have two threads executing two requests for the same session at the same time. That implies that the objects stored in the session should be thread-safe, or that a synchronization mechanism should be used to access non-thread-safe objects stored in the session.
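As a concrete illustration of that last point, here is a minimal sketch of guarding a non-thread-safe object kept in the session; the "cart" attribute and the helper class are purely illustrative, and locking on the session object is a common convention rather than something the Servlet spec strictly guarantees.

import java.util.ArrayList;
import java.util.List;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

public class CartHelper {

    // Two requests from the same session can run on different threads at the
    // same time, so access to the mutable List is synchronized.
    @SuppressWarnings("unchecked")
    public static void addItem(HttpServletRequest request, String item) {
        HttpSession session = request.getSession();
        synchronized (session) {
            List<String> cart = (List<String>) session.getAttribute("cart");
            if (cart == null) {
                cart = new ArrayList<String>();
                session.setAttribute("cart", cart);
            }
            cart.add(item);
        }
    }
}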
Moreover, multiple frames or tabs of a given browser share the same HTTP session. You'll have a different session if you start a different browser (Chrome in addition to Firefox, for example), or if you use a browser on another machine.

Related

All web containers occupied by one app that depends on 2nd app

We have an application on our WebSphere Application Server that calls a web service of a second application which is deployed on the same app server.
There are 100 available web containers (threads).
At times when there are many active users, application 1 allocates all available web container threads. When application 1 tries to call the web service (application 2) there are no free threads, so application 1 never finishes and therefore the whole system hangs.
How can I solve this? Is it possible, for example, to restrict the web container thread count per application, so that application 1 may only use 50% of the available threads?
A solution would be to add some code to application 1 that tracks the number of requests being processed simultaneously, but I'd like to avoid that if possible because I think it is very error prone. Earlier we used the synchronized keyword, but that only allows one request at a time, which caused even bigger problems.
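For reference, the in-code approach described in the previous paragraph is usually implemented with a java.util.concurrent.Semaphore in a servlet filter rather than with synchronized; here is a minimal sketch, where the limit of 50 permits and the class name are purely illustrative.

import java.io.IOException;
import java.util.concurrent.Semaphore;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

public class ConcurrencyLimitFilter implements Filter {

    // Allow at most 50 requests into application 1 at once, leaving the
    // remaining web container threads free for the web service call.
    private final Semaphore permits = new Semaphore(50);

    public void init(FilterConfig filterConfig) { }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        try {
            permits.acquire(); // blocks while 50 requests are already in flight
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new ServletException("Interrupted while waiting for a permit", e);
        }
        try {
            chain.doFilter(req, res);
        } finally {
            permits.release();
        }
    }

    public void destroy() { }
}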
It could be done by defining a separate transport chain and thread pool.
I don't have the web console in front of me, so here are the steps in rough order:
create a separate thread pool for your SOAP service app
create a separate web transport chain on a new port, e.g. 9045
associate that thread pool with the transport chain
create a new virtual host with the host alias *:9045
map your SOAP service app to that port
If you access the app via port 9045, it will use that separate thread pool.
Concerns:
if it is only local access (from one app to the other), then you just access it via localhost:9045 and you are good to go
if your SOAP service ALSO needs to be accessible from outside, e.g. via the plugin on the default HTTPS port (443), you would need to create a different DNS hostname for it so it can be associated with your SOAP service app, e.g. soap-service.domain.com (you would then use that domain in the host alias instead of *). In that case the plugin should use the 9045 port for transport as well, but I don't have an environment at hand to verify that.
I hope I didn't complicate it too much. ;-)

Multiple RMI Connections from single JVM using Weblogic RMI over T3

I'm attempting to use JMeter with some custom samplers to load test a Java application that is normally accessed via Weblogic RMI over T3 from a Swing-based GUI. The intention is to load the application server and measure the response time of particular transactions by simulating many concurrent user connections/interactions (up to ~500).
I have implemented a couple of JMeter samplers that acquire an RMI connection to the server via a JNDI lookup, and that works fine. However, I've noticed that even if I acquire two contexts on different threads using different credentials, only one T3 connection is opened.
Is there a way to effectively create multiple independent connections to the app server from within one JVM, or will I be forced to run one user per JVM?
The app is running on WLS 11g, currently on 32-bit HotSpot but moving to 64-bit JRockit.
Thanks.
You are running up against RMI connection pooling. There are ways to turn it down (see the RMI Home Page and the Properties pages linked from it), but it's still an unrealistic test for other reasons, such as port exhaustion on the client host. You should really look at using as many client hosts as possible, each with as many separate JVMs as possible.
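For context, the per-thread JNDI lookup the question describes looks roughly like the sketch below; the host, port, and credentials are placeholders. By default WebLogic multiplexes all such contexts created in one JVM onto a single T3 connection, which is the pooling behaviour mentioned above.

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class WlsContextFactory {

    // Builds an InitialContext against WebLogic over T3 with per-user credentials.
    // Even when called on different threads with different users, the underlying
    // transport is typically multiplexed over one T3 connection per JVM.
    public static Context createContext(String user, String password) throws NamingException {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        env.put(Context.PROVIDER_URL, "t3://appserver.example.com:7001"); // placeholder host/port
        env.put(Context.SECURITY_PRINCIPAL, user);
        env.put(Context.SECURITY_CREDENTIALS, password);
        return new InitialContext(env);
    }
}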

Simultaneous access to Database in Web application

How can I know the average or exact number of users accessing the database simultaneously in my Java EE web application? I would like to see whether the connection pool settings I chose in the GlassFish application server are suitable for my web application. I need to set the maximum number of connections in the application server's connection pool correctly: recently, my application ran out of connections and threw exceptions when client requests for DB connections expired.
There are multiple ways.
The first and easiest would be to ask your DBAs: they can tell you exactly how many connections are active from your web server (or from the user ID used by the connection pool) at a given time.
If you want some excitement, you will have to use the JMX management extensions provided by GlassFish. Listing 6 on this page gives an example of how to write a JMX-based snippet to monitor a connection pool.
Finally, you must make sure that all connections are closed explicitly with a connection.close() type of call in your application. In some cases you need to close the ResultSet as well.
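A minimal sketch of that explicit-close pattern using try-with-resources follows; the JNDI name and query are placeholders.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class UserCountDao {

    // try-with-resources guarantees the ResultSet, Statement and Connection are
    // closed (i.e. the connection is returned to the pool) even when an
    // exception is thrown.
    public int countUsers() throws Exception {
        DataSource ds = (DataSource) new InitialContext().lookup("jdbc/myPool"); // placeholder JNDI name
        try (Connection con = ds.getConnection();
             PreparedStatement ps = con.prepareStatement("SELECT COUNT(*) FROM users");
             ResultSet rs = ps.executeQuery()) {
            return rs.next() ? rs.getInt(1) : 0;
        }
    }
}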
Next is throttling your HTTP thread pool to avoid too much concurrent access if your DB connections are taking a long time to close.

How to integrate memcached in a Servlet? Tomcat & memory leaks

Searching for "memcached java" on Google, the first result is Using Memcached with Java.
The guy (who calls himself Just some Random Asshole in the Internet!) proposes a Singleton based on net.spy.memcached. It basically creates 20 threads and connections by creating 20 instances of MemcachedClient, and for every request it picks one at random.
However, those threads and connections are never closed, and they pile up every time I hot swap the application during development (with warnings from Tomcat 7):
SEVERE: The web application [/MyAppName] appears to have started a thread named
[...] but has failed to stop it. This is very likely to create a memory leak.
Looking at the MemcachedClient JavaDoc, I see a method called shutdown whose only description is "Shut down immediately." Shut down what? The client? The server? I suppose it's the client, since the method is in MemcachedClient, and I suppose it would close the connection and terminate the thread. EDIT: yes, it shuts down the client.
Question 1: How do I force the execution of cleanup code in Tomcat 7 before the application is hot swapped?
Question 2: Is this approach to using memcached (with cleanup code) correct, or would it be better to start over in a different way?
I think creating 20 memcache clients is silly - that's like creating 20 separate copies of your DB connection pool. The idea with that client is that it multiplexes a variety of requests with asynch IO.
http://code.google.com/p/spymemcached/wiki/Optimizations
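A minimal sketch of the single shared client this answer argues for; the class name and server address are placeholders.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.util.concurrent.TimeUnit;
import net.spy.memcached.MemcachedClient;

public final class MemcachedHolder {

    // One shared client for the whole webapp; spymemcached multiplexes all
    // requests over its own IO thread, so 20 instances are unnecessary.
    private static MemcachedClient client;

    public static synchronized MemcachedClient get() throws IOException {
        if (client == null) {
            client = new MemcachedClient(new InetSocketAddress("localhost", 11211)); // placeholder address
        }
        return client;
    }

    public static synchronized void close() {
        if (client != null) {
            client.shutdown(3, TimeUnit.SECONDS); // allow a few seconds for a graceful shutdown
            client = null;
        }
    }
}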
As far as shutting it down, simply call:
yourClient.shutdown() to shut down immediately, or
yourClient.shutdown(3, TimeUnit.SECONDS) for example, to allow some time for a more graceful shutdown.
That could be called from your Servlet's .destroy method, or a context listener for your whole WAR.
I don't know anything about memcached, but you could probably write a custom context listener and put some kind of shutdown hook in it, so that when the context shuts down you can loop through the items in your singleton and shut them down.
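Combining those suggestions, a minimal sketch of such a context listener is shown below; it assumes the single shared client from the MemcachedHolder sketch above, and it must be registered in web.xml (or annotated with @WebListener on Servlet 3.0 / Tomcat 7).

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class MemcachedShutdownListener implements ServletContextListener {

    public void contextInitialized(ServletContextEvent sce) {
        // nothing to do on startup
    }

    public void contextDestroyed(ServletContextEvent sce) {
        // Shut down the spymemcached client so its IO thread stops and Tomcat
        // does not report a leaked thread when the webapp is hot swapped.
        MemcachedHolder.close();
    }
}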
It turned out that it was a bug in the Java AWS SDK and was not related to memcached. Version 1.2.2 of the Java AWS SDK has this bug fixed.

Tomcat's behaviour when two simultaneous requests come from same ip

When I run two wget commands simultaneously against my server (http://myserver), it looks like Tomcat allocates two threads to process them. But I believed that when Tomcat receives two simultaneous requests from the same IP address, it would not create a new thread to process the second request, since it considers both requests to come from the same session.
If I want to check whether the two threads are the same or different, is Thread.getId() the only way? I think this ID may be reused for new threads. Is there any unique property of a thread, other than the thread ID, that I can use to check its identity?
I suggest never relying on threads to identify their source. The Servlet spec makes no guarantees about threads, and newer Servlet spec implementations make use of NIO. You are skating on thin ice.
Web servers will almost always assign multiple threads (or processes) to multiple simultaneous requests, since the client can work faster when it does not have to wait for each response.
Newer servers may use asynchronous IO (NIO), however, and a single thread can simultaneously serve many clients.
Yes, Thread.getId() is a way of identifying threads.
Session IDs are the mechanism used to identify requests from a single client.
The IP address is not a good way to do that, since multiple machines can expose the same IP when hiding behind a NAT.
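To see this in practice, here is a minimal sketch of a servlet filter that logs the session ID and the handling thread for each request; the filter name and log format are illustrative.

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

public class SessionThreadLoggingFilter implements Filter {

    public void init(FilterConfig filterConfig) { }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpSession session = ((HttpServletRequest) req).getSession(true);
        // The same session ID will typically show up on different thread names
        // across requests, and separate wget invocations get different session
        // IDs unless they send the JSESSIONID cookie back.
        System.out.println("session=" + session.getId()
                + " thread=" + Thread.currentThread().getName());
        chain.doFilter(req, res);
    }

    public void destroy() { }
}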
I believe Tomcat will always create a new thread of execution, irrespective of whether the request comes from the same IP or not. If the client application running on that particular IP has a mechanism to send across the session ID, then Tomcat will simply associate the same session context with the request thread [making it stateful].
In your case, you'll need to customise wget to hold on to the session ID [the Tomcat web app might send it across through a cookie or as a URL parameter, jsessionid]. wget will then need to send it back with subsequent requests [either by URL rewriting to include the jsessionid parameter, or by exchanging cookies]. This way Tomcat will be able to treat each request as coming from a unique client instance and associate state with it.
