I am confused about how I should handle HTTP sessions in a standalone Java app. Here are the details:
The Java client connects to 3 Tomcat 7 servlets.
When the client boots up, it starts 2 scheduled threads (a downloader and an uploader) polling 2 of the servlets every 3 minutes. Both retrieve and store the JSESSIONID cookie in private fields of their respective classes.
This results in 2 sessions in Tomcat, reused for the lifetime of the app. So far so good.
There is a 3rd service (connected to the 3rd servlet) using multiple instances of a threaded "WebDispatcher" class, which retrieves and stores the session similarly to the threads above, but this time in a private static field.
The dispatcher is heavily used; there might be as many as 150 instances running concurrently depending on the load. Dispatcher threads hit the servlet every second or so.
Making the dispatcher session ID field non-static creates a session per instance, which is not good.
What are the implications of having all dispatcher threads bound to the same Tomcat HTTP session?
Thank you
EDIT:
Although the dispatcher threads are bound to the same session, the session itself doesn't hold any information.
The servlet processes only the request params.
E.g. dispatcher thread 1:
localhost/messagecontrol?id=123&state
Dispatcher thread 2:
localhost/messagecontrol?id=123&state=finished
//Servlet processes and forgets id and state
As far as I can see, the implication is that all client threads will share the same session information; if any of that information is not meant to be shared, that will be a bug in your code.
If you're worried about the number of threads created (performance-wise), consider implementing a thread pool in your code.
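Since, per the edit, the shared session holds no state, the main practical gain of the static field is just keeping Tomcat's session count down. Below is a minimal sketch of that shared-cookie pattern, assuming plain HttpURLConnection; the class name, endpoint and field names are illustrative, not taken from your code:

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class WebDispatcher implements Runnable {

    // One JSESSIONID shared by every dispatcher instance -> one Tomcat session.
    private static volatile String sessionCookie;

    private final String endpoint; // e.g. "http://localhost/messagecontrol?id=123&state=finished"

    public WebDispatcher(String endpoint) {
        this.endpoint = endpoint;
    }

    @Override
    public void run() {
        try {
            HttpURLConnection con = (HttpURLConnection) new URL(endpoint).openConnection();
            String cookie = sessionCookie;
            if (cookie != null) {
                con.setRequestProperty("Cookie", cookie);   // reuse the shared session
            }
            con.getResponseCode();                          // fire the request

            String setCookie = con.getHeaderField("Set-Cookie");
            if (setCookie != null && setCookie.startsWith("JSESSIONID")) {
                // Tomcat issued a (new) session; keep only the name=value part.
                sessionCookie = setCookie.split(";", 2)[0];
            }
            con.disconnect();
        } catch (IOException e) {
            e.printStackTrace();                            // real code would log and retry
        }
    }
}
```

There is a benign race at startup (several threads may each receive their own Set-Cookie before the shared field is populated), but once a value is stored, all instances keep reusing that single session.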
Related
I understand that Servlet containers use a "thread per request" model, but my question is: will the thread handling the request do all of the steps below?
Obtain a thread from the pool to handle the request and pass the HTTP request and HTTP response objects to the servlet's service method.
Invoke the service/DAO logic, which could potentially involve a delay since I/O is done against the DB.
Return the HTTP response.
Return the thread to the container's thread pool.
My main question is: if the I/O operation in step 2 takes a huge amount of time, will the servlet container run out of threads in the pool? Or does the container use one thread just to handle the request and then delegate the rest of the work to another thread? Also, I heard that nowadays the model is changing to a threaded model with NIO operations? Thank you.
will the same thread be used for everything?
TL;DR - No.
Once the Servlet container (Catalina) spins up the thread for a request, that thread is deallocated/exited right after the request-response cycle finishes (that is, when the corresponding HTTP request handler servlet method returns).
If your service layer (DAO/logic/whatever) blocks the thread, which eventually blocks the web layer (doGet(), doPost(), etc.), the browser will go idle awaiting the response (for either the default or a configured time), and Catalina (the Servlet container) will block that thread only (other requests may still arrive successfully);
The I/O (or, to be specific, request-response) timeout will be either the default (which is 60 seconds, but it depends on the Tomcat version) or configured by yourself;
The design of delegating each discrete incoming HTTP message to a separate thread has a sole and simple purpose: to process the request-response cycles in isolation.
Head First Servlets & JSP:
The Container automatically creates a new Java thread for every servlet request it receives. When the servlet’s done running the HTTP service method for that client’s request, the thread completes (i.e. dies).
Update to your updated question
my question is, will the thread handling the request do all the below steps?
TL;DR - No again.
Servlet objects live in the container, which is a completely separate thread.
When the HTTP message (request, in this case) hits the Servlet-mapped endpoint, this happens:
The Servlet container creates the HttpServletRequest and HttpServletResponse objects;
The container allocates (creates) a new thread for those request and response objects (important: in order to isolate the client-server communication);
The container then passes those request and response objects to the servlet thread;
The container then calls the Servlet API's service() method and, depending on the type of the incoming message (GET, POST, etc.), invokes the corresponding method (doGet(), doPost(), etc.);
The container DOES NOT CARE what levels or layers of architecture you have: DAO, Repository, Service, Cherry, Apple or whatever. It will wait until the corresponding HTTP request handler method finishes (accordingly, if something blocks it, the container will block that thread);
When the handler method returns, the thread is deallocated (a minimal sketch of this follows below).
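To make the last two steps concrete, here is a minimal, hypothetical servlet (all names invented for illustration) in which a blocking DAO call simply holds the container's request thread until doGet() returns:

```java
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/report")
public class ReportServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // The container thread that called doGet() is pinned here until we return.
        String report = loadReportFromDatabase(req.getParameter("id")); // blocking I/O

        resp.setContentType("text/plain");
        resp.getWriter().write(report);
        // Only after doGet() returns does the container reclaim the thread.
    }

    private String loadReportFromDatabase(String id) {
        // Placeholder for a slow DAO/JDBC call; in reality this is where the thread blocks.
        return "report for " + id;
    }
}
```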
Answering your further questions
My main question is: if the I/O operation in step 2 takes a huge amount of time, will the servlet container run out of threads in the pool?
Theoretically it can; however, that would mean all 200 threads are blocked at the same time, and in that case (if the default configuration is maintained) it will not accept any other requests (until some thread is deallocated).
This, however, can be configured with the maxThreads attribute, with which you can choose the maximum number of request-processing threads allowed in Tomcat.
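For reference, maxThreads is set on the HTTP Connector in conf/server.xml; the values below are only an illustration, not a recommendation:

```xml
<!-- conf/server.xml: caps the request-processing thread pool of this connector -->
<Connector port="8080" protocol="HTTP/1.1"
           maxThreads="200"
           acceptCount="100"
           connectionTimeout="20000"
           redirectPort="8443" />
```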
Or does the container use one thread just to handle the request and then delegate the rest of the work to another thread?
We have answered this above.
Also I heard that nowadays they are changing the model to a Threaded Model with NIO operations?
That is NIO-specific configuration: it can use poller threads, which handle multiple connections per thread simultaneously; however, this is a big and completely different topic. For further reading, have a look at this and this.
PLEASE, make sure that your future posts are not too broad, containing 10 different questions in a single post.
I've sent the same request to a Spring MVC project from two browsers, but I got the same ThreadLocal, so the instances in the ThreadLocal are the same. Why?
A ThreadLocal is bound to a thread/process, not to a session. The JVM does not really know or care about the concept of web sessions; that's a higher level of abstraction.
It is entirely possible that two web requests with two different sessions are handled by the same thread. Most servers use a pool of threads that they reuse, rather than creating a new thread for each request or even session. If the processing of the first request leaves something in the ThreadLocal after it's done, well, that's what the next request will find there.
Store the data you need to keep per-session in HttpServletRequest.getSession() instead.
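A small, hypothetical servlet (names are illustrative) showing the difference in practice: the data lives in the HttpSession, which follows the user, whereas anything put into a ThreadLocal would follow whichever pooled thread happened to run the request:

```java
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

@WebServlet("/profile")
public class ProfileServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        HttpSession session = req.getSession();            // one per browser session, not per thread

        Object user = session.getAttribute("currentUser");
        if (user == null) {
            user = "user-" + session.getId();               // stand-in for a real login/lookup
            session.setAttribute("currentUser", user);      // survives across requests of this session
        }

        // A ThreadLocal written here would instead stick to whichever pooled thread ran this request.
        resp.getWriter().write("Hello " + user);
    }
}
```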
How do I limit, in code, the number of concurrent requests to a web application, say to 3 requests? Am I supposed to put each servlet class into a thread and create a global counter (by creating a new class)?
How do I limit, in code, the number of concurrent requests to a web application, say to 3 requests? Am I supposed to put each servlet class into a thread and create a global counter (by creating a new class)?
You typically rely on the web container to limit the number of concurrent requests, e.g. by setting a limit on the number of worker threads or connections in the web container's configuration.
Apparently, if a Tomcat server gets more requests than it can handle, it will send generic 503 responses. For more information:
https://tomcat.apache.org/tomcat-8.5-doc/config/http.html - explains where / how the configs can be set
Tomcat responding HTTP 503 - gives an example of what would happen ...
But how can I display to the users that the web app reached its limit (like 3 requests)?
If you want to limit specific request types and display specific responses to the user, then you will probably need to implement this within each servlet, using a counter, and so on; see the sketch below.
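One way to keep that counter out of the individual servlets is a filter in front of them. This is only a sketch under the question's assumptions (limit of 3, plain-text message, invented class name), using a Semaphore as the counter:

```java
import java.io.IOException;
import java.util.concurrent.Semaphore;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

public class ThrottleFilter implements Filter {

    // Allow at most 3 requests through this filter at the same time.
    private final Semaphore permits = new Semaphore(3);

    @Override
    public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
            throws IOException, ServletException {
        if (permits.tryAcquire()) {
            try {
                chain.doFilter(req, resp);                  // let the request proceed
            } finally {
                permits.release();
            }
        } else {
            // Over the limit: tell the user instead of queueing the request.
            HttpServletResponse httpResp = (HttpServletResponse) resp;
            httpResp.setStatus(HttpServletResponse.SC_SERVICE_UNAVAILABLE);
            httpResp.getWriter().write("The application is busy (limit of 3 concurrent requests). Please try again.");
        }
    }

    @Override
    public void init(FilterConfig config) { }

    @Override
    public void destroy() { }
}
```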
But the problem with trying to do "nice" things when the server is overloaded is that doing the nice things tends to increase the load. This is particularly important:
when your server is grossly inadequate for the actual load from (real) users, or
when someone is DoS'ing you, either accidentally or deliberately.
I have a servlet which gets a request from a client; the servlet then gathers data from 5 different servers via HTTP request/response (every server needs 1 second to respond) and sends the data back to the client.
The problem is that it takes too long when the client has to wait 6 seconds for the response.
So the 5 requests to the 5 servers must be sent at the same time.
Ideas:
Multithreading in the servlet, like in a normal Java application.
A separate servlet for every server (request), so that 1 main servlet tells the 5 gather servlets "get the data xy", the gather servlets send the data to the main servlet, and the main servlet sends it back to the client.
The problem I fear is that a thread/servlet gets the response meant for another request, because the requests happen at the same time and from the same IP.
How to solve this? Thanks!
Multithreading in the Servlet
You can use the ServletRequest#startAsync() method, which puts the request into asynchronous mode and initializes its AsyncContext with the original (unwrapped) ServletRequest and ServletResponse objects.
Read more in the Servlet 3.0 final spec, Section 2.3.3.3 (Asynchronous processing), where it is explained in detail.
AsyncContext is the standard way, defined in the Servlet 3.0 specification, to handle HTTP requests asynchronously.
Read more about Executors.newFixedThreadPool(), which creates a thread pool that reuses a fixed number of threads operating off a shared unbounded queue. At any point, at most nThreads threads will be actively processing tasks. If additional tasks are submitted when all threads are active, they will wait in the queue until a thread is available.
Please have a look at ExecutorService to read more about it along with sample code.
Read more...
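Tying those two pieces together, here is a hedged sketch of idea 1: the servlet switches to asynchronous mode and fans the five calls out to a fixed thread pool, so the client waits roughly as long as the slowest backend rather than the sum. All names and the placeholder fetch method are illustrative:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import javax.servlet.AsyncContext;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// asyncSupported must be true for startAsync() to be allowed.
@WebServlet(urlPatterns = "/aggregate", asyncSupported = true)
public class AggregatorServlet extends HttpServlet {

    // Shared worker pool: 5 threads, one per backend server.
    private final ExecutorService pool = Executors.newFixedThreadPool(5);

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        final AsyncContext ctx = req.startAsync();          // releases the container thread

        ctx.start(new Runnable() {
            @Override
            public void run() {
                try {
                    List<Future<String>> results = new ArrayList<Future<String>>();
                    for (int i = 1; i <= 5; i++) {
                        final int serverNo = i;
                        results.add(pool.submit(new Callable<String>() {
                            @Override
                            public String call() throws Exception {
                                return fetchFromServer(serverNo);   // ~1s each, now in parallel
                            }
                        }));
                    }

                    StringBuilder body = new StringBuilder();
                    for (Future<String> f : results) {
                        body.append(f.get()).append('\n');  // total wait ~ slowest server, not the sum
                    }
                    ctx.getResponse().getWriter().write(body.toString());
                } catch (Exception e) {
                    ((HttpServletResponse) ctx.getResponse()).setStatus(500);
                } finally {
                    ctx.complete();                          // finish the asynchronous request
                }
            }
        });
    }

    private String fetchFromServer(int serverNo) {
        // Placeholder for the real HTTP call to backend server #serverNo.
        return "data from server " + serverNo;
    }

    @Override
    public void destroy() {
        pool.shutdown();
    }
}
```

Because each response is written only by the worker that owns that request's AsyncContext, there is no risk of one client receiving another client's data, which addresses the "same time and same IP" worry.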
I know that every request is served by a servlet thread, but is it possible that, for one user session, two requests are served by two different threads?
If the situation above really happens, what about a thread-local variable stored by the first request-serving thread: can it be read by the second request-serving thread?
I'm afraid that if I store user credentials in Spring Security's SecurityContextHolder (which uses a thread-local variable) in the first thread, the second thread will not be able to access the credentials...
I know that every request is served by a servlet thread, but is it possible that, for one user session, two requests are served by two different threads?
Yes, that's possible.
I'm afraid that if I store user credentials in Spring Security's SecurityContextHolder (which uses a thread-local variable) in the first thread, the second thread will not be able to access the credentials...
Security is established separately for each request by Spring; you do not have to handle this yourself.
No, one request will not be served by several threads. What can really happen is that 2 requests are served by one thread. This is the reason you should be very careful using thread-local variables yourself. However, you can trust the Spring framework: it does things right. It can, for example, use the session or request ID when using a thread local, so 2 requests being processed by one thread will not be confused.
Two separate requests of the same user are handled (most likely) by two different threads.
I am not sure what Spring does, but the Servlet API provides a way to retrieve data that is specific to the user session (how the server tracks the session is irrelevant, but have a look at cookies and URL rewriting).
Now, if I wanted to have the user credentials in a thread-local variable (which is not unusual, as the ThreadLocal pseudo-singleton is the most convenient way of injection I know), I would store them in the user's HttpSession (which persists across all requests of the same user) and use a servlet filter to put them on the ThreadLocal at the beginning of each request.
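A minimal sketch of that filter idea, assuming the credentials were previously stored in the session under a hypothetical "credentials" attribute (all names are illustrative):

```java
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

public class CredentialsFilter implements Filter {

    // Per-thread holder, valid only for the duration of one request.
    private static final ThreadLocal<Object> CURRENT_USER = new ThreadLocal<Object>();

    public static Object currentUser() {
        return CURRENT_USER.get();
    }

    @Override
    public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest httpReq = (HttpServletRequest) req;
        try {
            // Copy the credentials from the session (per user) into the ThreadLocal (per request).
            CURRENT_USER.set(httpReq.getSession().getAttribute("credentials"));
            chain.doFilter(req, resp);
        } finally {
            // Clear it so the pooled thread does not leak this user's data into the next request.
            CURRENT_USER.remove();
        }
    }

    @Override
    public void init(FilterConfig config) { }

    @Override
    public void destroy() { }
}
```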
I hope this makes things a bit clearer for you. I find it is better to know what's happening under the hood even when using the most up-to-date framework :)