JSP Servlet Multithreading Question - java

I have a Servlet which receives a request from a client. The Servlet then gathers data from 5 different servers via HTTP request/response (each server needs about 1 second to respond) and sends the data back to the client.
The problem is that the client then has to wait roughly 6 seconds for the response, which is too long.
So the 5 requests to the 5 servers must be sent at the same time.
Ideas:
Multithreading in the Servlet, like in a normal Java application.
A dedicated Servlet for each server (request), so that one main Servlet tells the 5 gather Servlets "get the data xy", the gather Servlets send the data to the main Servlet, and the main Servlet sends it back to the client.
The problem I fear is that a thread/Servlet gets the response belonging to another request, because the requests happen at the same time and come from the same IP.
How to solve this? Thanks!

Multithreading in the Servlet
You can use the ServletRequest#startAsync() method, which puts the request into asynchronous mode and initializes its AsyncContext with the original (unwrapped) ServletRequest and ServletResponse objects.
Read more in the Servlet 3.0 final spec - Section 2.3.3.3 - Asynchronous processing, where it is explained in detail.
AsyncContext is the standard way defined in the Servlet 3.0 specification to handle HTTP requests asynchronously.
Read more about Executors.newFixedThreadPool(), which creates a thread pool that reuses a fixed number of threads operating off a shared unbounded queue. At any point, at most nThreads threads will be active processing tasks. If additional tasks are submitted when all threads are active, they will wait in the queue until a thread is available.
Please have a look at ExecutorService to read more about it along with sample code.
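For illustration, here is a minimal sketch of that approach applied to the original five-server question, assuming Servlet 3.0+ (javax namespace) and Java 11's built-in HttpClient; the backend URLs, servlet mapping, and pool size are placeholders:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.Collectors;

import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Fan the five backend calls out in parallel and release the container
// thread with startAsync(). Total latency becomes ~1s instead of ~5s.
@WebServlet(urlPatterns = "/aggregate", asyncSupported = true)
public class AggregateServlet extends HttpServlet {

    // Placeholder endpoints standing in for the five real servers.
    private static final List<URI> BACKENDS = List.of(
            URI.create("http://server1/data"),
            URI.create("http://server2/data"),
            URI.create("http://server3/data"),
            URI.create("http://server4/data"),
            URI.create("http://server5/data"));

    private final ExecutorService pool = Executors.newFixedThreadPool(5);
    private final HttpClient client = HttpClient.newHttpClient();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        AsyncContext ctx = req.startAsync();   // container thread is released
        ctx.start(() -> {
            try {
                // One Callable per backend; each has its own request/response
                // exchange, so responses cannot get mixed up between tasks.
                List<Callable<String>> calls = BACKENDS.stream()
                        .map(uri -> (Callable<String>) () -> client.send(
                                HttpRequest.newBuilder(uri).build(),
                                HttpResponse.BodyHandlers.ofString()).body())
                        .collect(Collectors.toList());
                StringBuilder combined = new StringBuilder();
                for (Future<String> f : pool.invokeAll(calls)) {
                    combined.append(f.get());
                }
                ctx.getResponse().getWriter().write(combined.toString());
            } catch (Exception e) {
                ((HttpServletResponse) ctx.getResponse())
                        .setStatus(HttpServletResponse.SC_BAD_GATEWAY);
            } finally {
                ctx.complete();                // finish the async cycle
            }
        });
    }

    @Override
    public void destroy() {
        pool.shutdown();
    }
}
```

Because each task has its own HTTP exchange, there is no risk of one task reading another request's response, which addresses the fear raised in the question.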

Related

Tomcat Thread Model - Does a thread in Thread per request model handle all work related to that request?

I understand that Servlet containers use a "thread per request" model, but my question is: will the thread handling the request do all the steps below?
Obtain a thread from the pool to handle the request, and pass the HTTP request and response objects to the Servlet's service method.
Invoke the service/DAO/logic, which could potentially involve delay, since I/O is done against the DB.
Return the HTTP response.
Return the thread to the container's thread pool.
My main question is: if the I/O operation in step 2 takes a huge amount of time, will the Servlet container run out of threads from the pool? Or does the container use one thread just to handle the request and then delegate the work to another thread to do the rest? Also, I heard that nowadays the model is changing to a threaded model with NIO operations. Thank you.
will the same thread be used for everything?
TL;DR - No.
Once the Servlet container (Catalina) spins up the thread for a request, that thread is deallocated/exited right after that request-response cycle finishes (that is, when the corresponding HTTP request handler method of the Servlet returns).
If your service (DAO/logic/whatever) layer blocks the thread, it eventually blocks the web layer (doGet(), doPost(), etc.): the browser goes idle awaiting the response (for the default or a configured time), and Catalina (the Servlet container) blocks only that one thread (other requests may still arrive successfully);
The I/O (or, to be specific, request-response) timeout is either the default (60 seconds, though it depends on the Tomcat version) or configured by yourself;
Delegating each discrete incoming HTTP message to a separate thread has a sole and simple purpose: to process the request-response cycles in isolation.
Head First Servlets & JSP:
The Container automatically creates a new Java thread for every servlet request it receives. When the servlet’s done running the HTTP service method for that client’s request, the thread completes (i.e. dies).
Update to your updated question
my question is, will the thread handling the request do all the below steps?
TL;DR - No again.
Servlet objects live in the container, which runs in its own, completely separate thread.
When the HTTP message (a request, in this case) hits the Servlet-mapped endpoint, this happens:
The Servlet container creates the HttpServletResponse and HttpServletRequest objects;
The container allocates (creates) a new thread for those request and response objects (important: in order to isolate the client-server communication);
The container then passes those request and response objects to the Servlet thread;
The container then calls the Servlet API's service() method, which, depending on the type of the incoming message (GET, POST, etc.), invokes the corresponding handler (doGet(), doPost(), etc.);
The container DOES NOT CARE what levels or layers of architecture you have - DAO, Repository, Service, Cherry, Apple or whatever. It will wait until the corresponding HTTP request handler method finishes (accordingly, if something blocks it, the container will block that thread);
When the handler method returns, the thread is deallocated.
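As a concrete reference for steps 4-6, a minimal servlet might look like this (the class name and output are illustrative):

```java
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// The container instantiates this servlet once, then calls service()
// on a per-request thread; HttpServlet.service() dispatches to doGet(),
// doPost(), etc. based on the HTTP method of the incoming message.
public class HelloServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Runs on the request's thread; when this method returns, the
        // container is done with the thread for this request.
        resp.getWriter().write("handled by " + Thread.currentThread().getName());
    }
}
```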
Answering your further questions
My main question is: if the I/O operation in step 2 takes a huge amount of time, will the Servlet container run out of threads from the pool?
Theoretically it can; however, that would mean all 200 threads (the default maximum) are blocked at the same time, and in that case (if the default configuration is kept) Tomcat will not accept any further requests until some thread is freed.
This, however, can be configured with the maxThreads attribute, which lets you choose the maximum number of request-processing threads Tomcat allows.
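In a standalone Tomcat, maxThreads lives on the Connector element in server.xml. For illustration only, here is a hedged sketch of setting the same knob programmatically, assuming the tomcat-embed-core dependency; the port and limit are arbitrary:

```java
import org.apache.catalina.startup.Tomcat;

// Sketch: the same maxThreads setting that server.xml exposes on the
// Connector element, applied via embedded Tomcat.
public class EmbeddedTomcat {
    public static void main(String[] args) throws Exception {
        Tomcat tomcat = new Tomcat();
        tomcat.setPort(8080);
        // Cap the request-processing pool; once all 200 threads are busy,
        // further connections queue instead of getting new threads.
        tomcat.getConnector().setProperty("maxThreads", "200");
        tomcat.start();
        tomcat.getServer().await();
    }
}
```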
Or does the container use one thread just to handle the request and then delegate the work to another thread to do the rest?
We have answered this above.
Also I heard that nowadays they are changing the model to a Threaded Model with NIO operations?
That is NIO-specific connector configuration; the NIO connector can use poller threads, which simultaneously handle multiple connections per thread. However, this is a big and completely different topic. For further reading, have a look at this and this.
PLEASE, make sure that your future posts are not too broad, containing 10 different questions in a single post.

How to get HTTP responses from multiple requests in the order they're received

In my application there are roughly 15 threads that each send an HTTP request to an API endpoint once every 15 seconds, meaning about 1 request per second overall. These threads should run indefinitely and only need to be created once. I am unsure how to continuously receive the responses on the main thread so that they can be parsed and dealt with. While researching this problem I found several frameworks that look like they could help (ScheduledExecutorService, NIO, Grizzly, AHC), but I'm paralyzed by the number of options and unsure what to implement.
My main goal is, for each of the 15 requests, to have the request sent off on its own every 15 seconds and to receive the response on the main thread as it comes in.
No special frameworks are required for such a simple task. Just create an instance of BlockingQueue (ArrayBlockingQueue looks like the best choice). Each network thread calls queue.put(response), and the main thread calls response = queue.take() in a loop.
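A minimal sketch of that design, with a placeholder fetch() standing in for the real HTTP call and an assumed queue capacity of 100:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// 15 scheduled tasks put their results on a bounded queue; the main
// thread consumes them as they arrive.
public class ResponseCollector {

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(100);
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(15);

        for (int i = 0; i < 15; i++) {
            final int id = i;
            // Each task fires every 15 seconds, independently of the others.
            scheduler.scheduleAtFixedRate(() -> {
                try {
                    queue.put(fetch(id));   // blocks only if the queue is full
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }, 0, 15, TimeUnit.SECONDS);
        }

        while (true) {
            String response = queue.take();  // blocks until a response arrives
            System.out.println("parsed on main thread: " + response);
        }
    }

    private static String fetch(int id) {
        return "response from endpoint " + id;  // placeholder for the HTTP request
    }
}
```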

Async HTTP request vs HTTP requests on new thread

I have 2 microservices (A and B).
A has an endpoint which accepts POST requests. When users make a POST request, this happens:
Service A takes the object from the POST request body and stores it in a database.
Service A converts the object to a different object. And the new object gets sent to service B via Jersey HTTP client.
Step 2 takes place on a Java thread pool I have created (Executors.newCachedThreadPool). By doing step 2 on a new thread, the response time of service A's endpoint is not affected.
However, if service B is taking long to respond, service A can potentially create too many threads when it is receiving many POST requests. To help fix this, I can use a fixed thread pool (Executors.newFixedThreadPool).
In addition to the fixed thread pool, should I also use an asynchronous non-blocking HTTP client? Such as the one here: https://hc.apache.org/httpcomponents-asyncclient-dev/. The Jersey HTTP client that I use is blocking.
It seems like it is right to use the async HTTP client. But if I switch to a fixed thread pool, I think the async HTTP client won't provide a significant benefit - am I wrong in thinking this?
Even if you use a fixed thread pool, all the threads in it will be blocked on step 2, meaning they won't do any meaningful work - they will just wait for your API to return a response, which is not pragmatic resource management. In this case you will only be able to handle a limited number of incoming requests, since the threads in the pool will always be busy instead of handling new requests.
In the case of a non-blocking client, you block just one single thread (let's call it the dispatcher thread), which is responsible for sending and waiting for all the requests/responses. It runs in a "while loop" (you could call it an event loop) and checks whether responses have arrived and are ready to be picked up by worker threads.
In the latter scenario you get a larger number of threads available to do meaningful work, so your throughput increases.
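For illustration, Java 11's built-in HttpClient behaves as described (the answer does not name a specific client; the URL here is a placeholder):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;

// sendAsync() returns immediately; no thread of ours blocks while the
// request is in flight. The callback runs on the client's own pool.
public class NonBlockingCall {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://service-b/convert")).build();  // placeholder URL

        CompletableFuture<Void> done = client
                .sendAsync(request, HttpResponse.BodyHandlers.ofString())
                .thenAccept(resp -> System.out.println("got " + resp.statusCode()));

        System.out.println("caller thread is free to do other work");
        done.join();  // only for the demo, so the JVM doesn't exit early
    }
}
```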
The difference is that with a sync client, the step A thread opens a connection to the step 2 endpoint and waits for a response. Making the step 2 implementation async and just returning 200 directly (or whatever) will help decrease the waiting time, but some thread is still opening the connection and waiting for the response.
With a non-blocking client, the step A call itself is done by another thread, so everything is untied from the step A thread. The system can also use that thread for other work until it gets a response from service B and needs to resume.
The idea is that your original threads will not sit idle waiting for responses, but will instead be reused to do other work in between.
The reason to use a non-blocking HTTP client is to prevent too much CPU being spent on thread switching. If you already solve that problem by limiting the number of background threads, then non-blocking I/O won't provide any noticeable benefit.
There is another problem with your setup: it is very vulnerable to DDoS attacks (intentional or accidental ones). If someone calls your service very often, it will internally create a huge workload that keeps the service busy for a long time. You will definitely need to limit the background task queue (which is a supported feature of ThreadPoolExecutor) and return 503 (or equivalent) if there are too many pending tasks.
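A minimal sketch of that safeguard, assuming a plain ThreadPoolExecutor; the pool and queue sizes are illustrative:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// A fixed-size pool with a bounded queue. When the queue is full,
// execute() throws RejectedExecutionException (the default AbortPolicy),
// which the endpoint can translate into a 503.
public class BoundedForwarder {

    private final ThreadPoolExecutor pool = new ThreadPoolExecutor(
            10, 10,                              // fixed number of workers
            0L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<>(100));      // at most 100 pending tasks

    /** Returns an HTTP status for the caller: 202 accepted or 503 overloaded. */
    public int submitForward(Runnable callToServiceB) {
        try {
            pool.execute(callToServiceB);
            return 202;
        } catch (RejectedExecutionException overloaded) {
            return 503;  // shed load instead of queueing unboundedly
        }
    }
}
```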

Multithreaded HTTP client and Tomcat 7

I am confused about how I should handle HTTP sessions in a standalone Java app. Here are the details:
The Java client connects to 3 Tomcat 7 servlets.
When the client boots up, it starts 2 scheduled threads (downloader and uploader) polling 2 of the servlets every 3 minutes. They both retrieve and store the jsessionid cookie in private fields in their respective classes.
This results in 2 sessions in Tomcat, reused for the lifetime of the webapp. So far so good.
There is a 3rd service (connected to the 3rd servlet) using multiple instances of a threaded "WebDispatcher" class, which retrieves and stores the session similarly to the above-mentioned threads, but this time in a private static field.
The dispatcher is heavily used; there might be as many as 150 instances running concurrently, depending on the load. Dispatcher threads hit the servlet every second or so.
Making the dispatcher's session-id field non-static creates a session per instance - not good.
What are the implications of having all dispatcher threads bound to the same Tomcat HTTP session?
Thank you
EDIT:
Although the dispatcher threads are bound to the same session, the session itself doesn't hold any information.
The servlet processes only the request params.
I.e. dispatcher 1:
localhost/messagecontrol?id=123&state
Dispatcher thread 2:
localhost/messagecontrol?id=123&state=finished
//Servlet processes and forgets id and state
As far as I can see, the implication is that all client threads will share the same session information; if any of that information is not meant to be shared, that is a bug in your code.
If you're worried about the number of threads created (performance-wise), consider implementing a thread pool in your code.
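One way to get both a shared session and a pool-friendly design, without a hand-rolled static session field, is a single shared HTTP client with a cookie store. This sketch assumes Java 11's HttpClient (the question uses a lower-level client); the URL is taken from the question:

```java
import java.io.IOException;
import java.net.CookieManager;
import java.net.CookiePolicy;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// One HttpClient with one CookieManager shared by every dispatcher
// thread: the JSESSIONID from the first response is stored once and
// replayed automatically, so all instances ride the same Tomcat session.
public class WebDispatcher implements Runnable {

    // HttpClient is documented as thread-safe, so a single shared
    // instance replaces the private static session field.
    private static final HttpClient SHARED = HttpClient.newBuilder()
            .cookieHandler(new CookieManager(null, CookiePolicy.ACCEPT_ALL))
            .build();

    private final String query;

    public WebDispatcher(String query) { this.query = query; }

    @Override
    public void run() {
        HttpRequest req = HttpRequest.newBuilder(
                URI.create("http://localhost/messagecontrol?" + query)).build();
        try {
            SHARED.send(req, HttpResponse.BodyHandlers.discarding());
        } catch (IOException e) {
            // log and let the next dispatch retry
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```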

Logic for controlling concurrency in a block/method

1) My environment is a web application; I am developing a servlet to receive requests.
A) In some block/method I want to limit concurrency to no more than 5.
B) If there are already 5 requests in that block, a newly arriving request must wait up to 60 seconds and then throw an error.
C) If there are more than 30 sleeping/waiting requests, the 31st request gets an error immediately.
How do I do this?
2) (Optional question) Building on the above, I have to distribute the control logic to all clustered hosts.
I plan to use Hazelcast to share the control state (e.g. the current counter).
I see they provide a BlockingQueue and an ExecutorService, but I have no idea how to use them in my case.
Please advise if you have an idea.
For A, take a look at this: http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/Semaphore.html
For B, take a look at Object.wait() and Object.notify().
C should be easy if you have A and B.
The answers by #Roman and #David Soroko say how to do this within a servlet (as the OP asked).
However, this approach has the problem that Tomcat has to allocate a thread to each request so that they can participate in the queuing/timeout logic implemented by the servlet. Each of those threads uses memory and other resources. This does not scale well. And if you don't configure enough threads, requests will either be dropped by the Tomcat request dispatcher or queued/timed out using different logic.
An alternative approach is to use a non-servlet architecture in the webserver; e.g. Grizzly and more specifically Grizzly Comet. This is a big topic, and frankly I don't know enough about it to go deeply into the implementation details.
EDIT - In the servlet model, every request is allocated to a single thread for its entire lifetime. For example, in a typical "server push" model, each active client has an outstanding HTTP request asking the server for more data. When new data arrives at the server, the server sends a response and the client immediately sends a new request. In the classic servlet implementation model, this means that the server has to have a request "in progress" ... and a thread ... for each active client, even though most of the threads are just waiting for data to arrive.
In a scalable architecture, you would detach the request from the thread so that the thread could be used for processing another request. Later (e.g. when the data "arrived" in the "server push" example), the request would be attached to a thread (possibly a different one) to continue processing. In Grizzly, I understand that this is done using an event-based processing model, but I imagine that you could also use a coroutine-based model.
Try semaphores:
A counting semaphore. Conceptually, a semaphore maintains a set of permits. Each acquire() blocks if necessary until a permit is available, and then takes it. Each release() adds a permit, potentially releasing a blocking acquirer. However, no actual permit objects are used; the Semaphore just keeps a count of the number available and acts accordingly.
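Putting A, B and C together, a minimal single-JVM sketch might look like this (the waiting counter is a simple approximation; the clustered variant from question 2 would need a distributed semaphore, e.g. from Hazelcast):

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// A Semaphore caps the block at 5 concurrent callers (A), tryAcquire()
// waits at most 60 seconds (B), and an AtomicInteger rejects the 31st
// waiter immediately (C).
public class GuardedBlock {

    private final Semaphore permits = new Semaphore(5, true); // A: 5 at a time
    private final AtomicInteger waiting = new AtomicInteger();

    public void handle() throws Exception {
        if (waiting.incrementAndGet() > 30) {          // C: cap the queue at 30
            waiting.decrementAndGet();
            throw new IllegalStateException("too many waiting requests");
        }
        try {
            // B: wait up to 60 seconds for a permit, then give up
            if (!permits.tryAcquire(60, TimeUnit.SECONDS)) {
                throw new IllegalStateException("timed out after 60s");
            }
        } finally {
            waiting.decrementAndGet();
        }
        try {
            doProtectedWork();   // at most 5 threads are ever in here
        } finally {
            permits.release();
        }
    }

    private void doProtectedWork() { /* the guarded block/method */ }
}
```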
