Can I configure my servlet container's thread management? - java

I'm currently working on an app that is heavily map-oriented. To display a map, we generate a bunch of tiles in many threads, store them, and fetch them when a user wants to see a certain part of the map.
The problem is, I name the threads that generate tiles in a certain way, but then, when I want to fetch tiles to show a map, my servlet container takes random threads from the pool, so a thread named for generating a tile ends up fetching one from storage. Of course, I could just rename the thread back after it generates a tile, but I wonder if there is an alternative.
Can I somehow configure my servlet container to, say, kill threads after they have been idle for some time, to create a new thread where I want one, or to allocate several threads to this part of the code?
All I could find in terms of configuring the servlet container is setting its min and max thread pool size, which I don't think will help me.

The container is 100% in control of its threading.
If you are attempting to manipulate the threading of the container then you are fighting a losing battle.
There is no safe way to kill or stop threads on a running container; doing so leads to memory leaks and unclosed resources. The Thread.stop() method has been deprecated since Java 1.2 for exactly this reason.
Now that we have the negatives out of the way ...
Jetty is a 100% Async Java Web Server.
The classic assumption that 1 request uses 1 thread is wrong. (if you want this kind of behavior, then you should use Jetty 6 or older. Jetty versions older than 9.2 are now all EOL / End of Life)
When you use a Servlet call that is traditionally a blocking call, the Jetty server has to fake that blocking call to satisfy the API contract.
Even when using the old school / traditional blocking Servlet APIs, you'll still encounter many situations where a single request is handled by multiple threads over its lifetime.
If you want to work with the Servlet API and its container, then the first thing you should do is start using the Servlet Async Processing APIs and the Servlet Async I/O APIs combined. Make sure you read about the gotchas in both APIs!
Async Processing will allow you to handle more of the request processing on the server side without using the container threads as heavily, give you more control over how the threading behaves, grant you better control over request timeouts, and even get you notified of the request/response error cases that you will always have to deal with on a web server.
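A minimal sketch of what that looks like with the Servlet 3.0 async processing API (the servlet name and tile payload are illustrative; this uses javax.servlet, newer containers use jakarta.servlet):

```java
import java.io.IOException;
import javax.servlet.AsyncContext;
import javax.servlet.AsyncEvent;
import javax.servlet.AsyncListener;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Illustrative tile servlet; asyncSupported=true is required for startAsync().
@WebServlet(urlPatterns = "/tiles", asyncSupported = true)
public class TileServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        AsyncContext ctx = req.startAsync();
        ctx.setTimeout(10_000); // explicit control over the request timeout
        ctx.addListener(new AsyncListener() {
            public void onComplete(AsyncEvent e) { }
            public void onStartAsync(AsyncEvent e) { }
            public void onTimeout(AsyncEvent e) { e.getAsyncContext().complete(); }
            public void onError(AsyncEvent e) { e.getAsyncContext().complete(); }
        });
        // ctx.start() hands the work to a container-managed thread and
        // releases the dispatch thread immediately.
        ctx.start(() -> {
            try {
                ctx.getResponse().getWriter().write("tile data"); // placeholder payload
            } catch (IOException ignored) {
            } finally {
                ctx.complete();
            }
        });
    }
}
```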
Async I/O will allow you to use a thread only when there is content to read from the request/connection or when the connection allows a write. A connection will not consume a thread unless I/O is possible. This means more connections/requests per server, and ill-behaved clients (slow, dead, problematic, etc.) will not impact your other clients by consuming threads that are not doing anything productive for you.
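And a rough sketch of the write side with Servlet 3.1 async I/O, where the WriteListener only runs when the connection can actually accept bytes (the tile-loading stub is hypothetical):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import javax.servlet.*;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.*;

// Hypothetical tile-streaming servlet using Servlet 3.1 async I/O.
@WebServlet(urlPatterns = "/tile-stream", asyncSupported = true)
public class TileStreamServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        AsyncContext ctx = req.startAsync();
        ByteBuffer tile = ByteBuffer.wrap(loadTile(req)); // stub below
        ServletOutputStream out = resp.getOutputStream();
        out.setWriteListener(new WriteListener() {
            @Override
            public void onWritePossible() throws IOException {
                // Only write while the container says the stream is ready;
                // returning when it isn't gives the thread back to the pool.
                while (out.isReady() && tile.hasRemaining()) {
                    byte[] chunk = new byte[Math.min(4096, tile.remaining())];
                    tile.get(chunk);
                    out.write(chunk);
                }
                if (!tile.hasRemaining() && out.isReady()) {
                    ctx.complete(); // all bytes handed off
                }
            }
            @Override
            public void onError(Throwable t) {
                ctx.complete(); // e.g. the client went away mid-write
            }
        });
    }

    private byte[] loadTile(HttpServletRequest req) { return new byte[0]; } // placeholder
}
```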
If you don't want to work with the Servlet API and do things your own way, then you'll have to manage your own Executor / ThreadGroup / ThreadPool that the server is unaware of. But that still means you'll need to use the Servlet Async Processing APIs to allow the 2 to coexist in harmony (you'll need to use the AsyncContext to inform the container that you are now taking control over the processing of the request, and then later inform it via the AsyncContext that you are done and the request is complete).
The biggest gotcha with this approach is that you cannot safely write to the HttpServletResponse from a thread that the container wasn't in control over.
Meaning: the thread the container dispatched to your application is the only one that can safely use the HttpServletResponse to write the response. You can have a different thread do the processing, a different thread provide the data for the HttpServletResponse, even a different thread that pumps the dispatch thread with content. But the thread you were dispatched on needs to be the one that writes.
This is the mixed-threading-behavior gotcha in the servlet spec. (You are in servlet async mode, processing on a different thread, but not using async mode to read/write.) It's a terribly complex and ill-defined corner of the servlet spec that leads to many issues, and I advise you not to chase this path.
This gotcha goes away if you also use the Servlet Async I/O APIs, but at that point the difference in the two above choices is negligible.
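To make the second choice concrete: one way to respect that gotcha with an application-owned pool is to do the work on your own thread, then dispatch() back so a container thread performs the write. A sketch, with the pool size and the renderTile() helper purely illustrative:

```java
import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// An application-owned pool does the processing, then AsyncContext.dispatch()
// hands the request back so a container thread safely writes the response.
@WebServlet(urlPatterns = "/tiles/*", asyncSupported = true)
public class OwnPoolTileServlet extends HttpServlet {
    // Pool the container knows nothing about (size is illustrative).
    private final ExecutorService pool = Executors.newFixedThreadPool(8);

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        if (req.getAttribute("tile") == null) {
            AsyncContext ctx = req.startAsync();
            pool.submit(() -> {
                byte[] tile = renderTile(req.getPathInfo()); // hypothetical renderer
                req.setAttribute("tile", tile);
                ctx.dispatch(); // re-dispatch: a container thread runs doGet again
            });
        } else {
            resp.getOutputStream().write((byte[]) req.getAttribute("tile"));
        }
    }

    private byte[] renderTile(String path) { return new byte[0]; } // stub

    @Override
    public void destroy() { pool.shutdown(); }
}
```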

Related

Are java servlet containers capable of asynchronous I/O processing without additional threads?

For frameworks like Node.js and ASP.NET Core, requests are processed asynchronously for I/O tasks without creating additional threads. Are Java servlet containers also capable of doing this? If not, do Java servlet containers keep the thread waiting on I/O tasks until the request is fully processed?
Okay, I found the answer myself.
According to the Jakarta EE 9 documentation:
There are two common scenarios in which a thread associated with a request can be sitting idle.
The thread needs to wait for a resource to become available or process data before building the response. For example, an application may need to query a database or access data from a remote web service before generating the response.
The thread needs to wait for an event before generating the response. For example, an application may have to wait for a Jakarta Messaging message, new information from another client, or new data available in a queue before generating the response.
These scenarios represent blocking operations that limit the scalability of web applications. Asynchronous processing refers to assigning these blocking operations to a new thread and returning the thread associated with the request immediately to the container.
So, Java servlet containers are capable of asynchronous processing. However, they will create a new thread for both I/O-bound and CPU-bound tasks, which is not the same model as Node.js and ASP.NET Core.

A single servlet for all server request types vs multiple servlets -- one for each request type

Suppose that on a high-traffic web server there are different types of requests from the client side, e.g. user requests vs. internal/administrative ones.
And, among the user requests, there are those you want to serve more promptly (because they are more time-critical, more frequent, etc.).
The single servlet handling these requests is "light": it sees what each request is about and immediately invokes the right back-end process to handle it.
So, if you'd like to prioritize these requests, you prioritize those back-end processes on the server: give them more CPU time, allocate them multiple server instances, etc.
The question here is whether doing the same thing to the servlets, as well as to these back-end processes, is an issue or not.
I'm aware that the servlet container (Tomcat in this case) has some mechanisms for this, although I don't know exactly what or how.
On one side of this discussion -- yes: code different servlets for different client requests so that you can manage their priorities/execution time at the server level.
On the other side -- no, not at all: the servlet(s) handle the requests and dispatch them to the corresponding processes without burning execution time. It's the back-end processes that are time-critical. In fact, this is exactly what Spring does -- it has the DispatcherServlet as the front controller for all incoming requests. A single servlet as the front controller for all requests is the sound architecture.
This discussion came up a few days back. Up until then, I was on the "no" side -- the paragraph right above. However, I'm not as clear on it right now.
I'm wondering what would be a sound counter-argument to the claim that "managing the priorities of the servlets by their types improves the time performance of serving the client requests."
TIA.
//==================================================
EDIT:
If the answer is "yes" above, then how does Spring tell the servlet container about the different types of requests so that the container can prioritize them?
I don't think request prioritization will have a huge effect on thread execution time unless we're talking about huge traffic, like millions of threads on a single web server. But if that is what you want, you can configure Tomcat to prioritize threads. Tomcat allows you to specify the priority of the threads in an executor's thread pool: tomcat thread pool
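For reference, the knob in question is the threadPriority attribute on a shared Executor in server.xml. A sketch with illustrative values (the Connector must reference the executor by name; priorities are the standard java.lang.Thread range of 1-10):

```xml
<!-- server.xml: a shared executor whose threads run at an elevated priority -->
<Executor name="priorityPool" namePrefix="priority-exec-"
          maxThreads="150" minSpareThreads="4"
          threadPriority="7" />

<!-- The connector opts in to the shared executor -->
<Connector port="8080" protocol="HTTP/1.1" executor="priorityPool" />
```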

How to properly implement RabbitMQ RPC from Java servlet web container?

I'd like for incoming Java servlet web requests to invoke RabbitMQ using the RPC approach as described here.
However, I'm not sure how to properly reuse callback queues between requests, since, per the RabbitMQ tutorial linked above, creating a new callback queue for every request is inefficient (RabbitMQ may not cope, even when using the Queue TTL feature).
There would generally be only 1-2 RPC calls per servlet request, but obviously a lot of servlet requests per second.
I don't think I can share the callback queues between threads, so I'd want at least one per each web worker thread.
My first idea was to store the callback queue in a ThreadLocal, but that can lead to memory leaks.
My second idea was to store them in the session, but I am not sure they will serialize properly, and my sessions are currently not replicated/shared between web servers, so IMHO it is not a good solution.
My infrastructure is Tomcat / Guice / Stripes Framework.
Any ideas what the most robust/simple solution is?
Am I missing something in this whole approach, and thus over-complicating things?
Note 1: This question relates to the overall business case described here -- see option 1.
Note 2: There is a seemingly related question, How to setup RabbitMQ RPC in a web context, but it is mostly concerned with the proper shutdown of threads created by the RabbitMQ client.
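To make the sharing question concrete, this is roughly the kind of per-JVM shared reply queue I have in mind, sketched against the RabbitMQ Java client (class and method names are mine; publishes are synchronized because a Channel is not safe for concurrent use):

```java
import com.rabbitmq.client.*;
import java.io.IOException;
import java.util.UUID;
import java.util.concurrent.*;

// Sketch: one server-named reply queue per JVM, shared by all servlet threads.
// Replies are matched back to callers via correlation IDs.
public class SharedReplyQueueRpc {
    private final Channel channel;
    private final String replyQueue;
    private final ConcurrentMap<String, CompletableFuture<byte[]>> pending =
            new ConcurrentHashMap<>();

    public SharedReplyQueueRpc(Connection connection) throws IOException {
        channel = connection.createChannel();
        // Exclusive, auto-delete queue; lives as long as the connection does.
        replyQueue = channel.queueDeclare().getQueue();
        channel.basicConsume(replyQueue, true, new DefaultConsumer(channel) {
            @Override
            public void handleDelivery(String tag, Envelope env,
                                       AMQP.BasicProperties props, byte[] body) {
                CompletableFuture<byte[]> f = pending.remove(props.getCorrelationId());
                if (f != null) f.complete(body);
            }
        });
    }

    public byte[] call(String requestQueue, byte[] payload, long timeoutMs)
            throws Exception {
        String corrId = UUID.randomUUID().toString();
        CompletableFuture<byte[]> future = new CompletableFuture<>();
        pending.put(corrId, future);
        AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                .correlationId(corrId).replyTo(replyQueue).build();
        synchronized (channel) { // serialize publishes on the shared channel
            channel.basicPublish("", requestQueue, props, payload);
        }
        try {
            return future.get(timeoutMs, TimeUnit.MILLISECONDS);
        } finally {
            pending.remove(corrId); // avoid leaking entries on timeout
        }
    }
}
```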

Does synchronous servlet processing make sense for a distributed server-side application

The scope/context of this question:
I am to develop a Java/Java EE based distributed server-side application that is scalable (scale-up, rather than scale-out).
My application comprises servlets utilizing multiple instances of distributed back-end services for processing client requests. If I need to achieve more throughput, I want to be able to just add more instances of these distributed services (JVMs on the same or another machine) and (expect to) see an increase in throughput.
To achieve this, I was thinking of a loosely-coupled asynchronous system.
I thought I would use async servlets (Servlet 3.0) and an application-managed thread pool that places client requests on JMS queues, which would be picked up by one of the distributed service instances and processed. The responses can be relayed back to the client using JMS, from the service instances to a response thread in the servlet container.
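A rough sketch of the bridge I have in mind, just to make the moving parts visible (queue names, the connection-factory lookup, and the timeout are placeholders; a real version would also evict pending entries on async timeout):

```java
import java.io.IOException;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import javax.jms.*;
import javax.servlet.AsyncContext;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.*;

@WebServlet(urlPatterns = "/work", asyncSupported = true)
public class JmsBridgeServlet extends HttpServlet {
    private final ConcurrentMap<String, AsyncContext> pending = new ConcurrentHashMap<>();
    private Connection connection;
    private Session sendSession;      // JMS sessions are single-threaded:
    private MessageProducer producer; // one for sending, one for the listener

    @Override
    public void init() throws ServletException {
        try {
            ConnectionFactory cf = lookupConnectionFactory(); // e.g. a JNDI lookup
            connection = cf.createConnection();
            sendSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            producer = sendSession.createProducer(sendSession.createQueue("requests"));
            Session replySession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            replySession.createConsumer(replySession.createQueue("replies"))
                        .setMessageListener(msg -> {
                try {
                    AsyncContext ctx = pending.remove(msg.getJMSCorrelationID());
                    if (ctx != null) {
                        ctx.getResponse().getWriter().write(((TextMessage) msg).getText());
                        ctx.complete();
                    }
                } catch (JMSException | IOException ignored) { }
            });
            connection.start();
        } catch (JMSException e) {
            throw new ServletException(e);
        }
    }

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        AsyncContext ctx = req.startAsync();
        ctx.setTimeout(30_000);
        String corrId = UUID.randomUUID().toString();
        pending.put(corrId, ctx);
        try {
            synchronized (sendSession) { // a Session is single-threaded; serialize use
                TextMessage msg = sendSession.createTextMessage(req.getParameter("job"));
                msg.setJMSCorrelationID(corrId);
                producer.send(msg);
            }
        } catch (JMSException e) {
            pending.remove(corrId);
            throw new IOException(e);
        }
    }

    private ConnectionFactory lookupConnectionFactory() { return null; } // placeholder

    @Override
    public void destroy() {
        try { connection.close(); } catch (JMSException ignored) { }
    }
}
```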
However, an asynchronous system seems (obviously) more complex than a synchronous one (e.g. error handling and relaying errors to the client, request tracking, etc.). I am also worried about the future maintainability of the design/code.
So, a question arises: does it make sense to do this synchronously, while still remaining distributed, scalable, and loosely coupled?
If the answer is yes, then please also share possible ways of achieving this (while remaining 'constructive').
If I can do this well in a synchronous way, it will simplify the entire system.
I don't want to add complexity to the system unnecessarily.
(Assuming it makes sense) One possible implementation I could think of is using RMI.
For example: a service registry for the distributed service instances to register with, and a load balancer distributing the RMI calls across all available instances. But it feels like an old-generation solution. Are there any better options available?
Edit:
Other details about the scope of this question:
The client side is browser-based and does not demand an asynchronous server side.
I don't need server push.
At any time, I won't have more outstanding requests than the max worker threads of the popular web servers (even Apache).
For the above reasons, the use cases mentioned in a related question don't seem to apply to my scenario.
Loose coupling and distribution are independent of whether processing is synchronous or asynchronous.
With scalability, the matter is more complex. In a synchronous model, you will need one thread per pending request. If you need to scale to really high load (say, thousands of concurrent requests per server), an asynchronous model may scale better. To reap the benefit of that, however, the entire processing, starting from the handling of incoming connections, needs to be done in an asynchronous way. There is little point in having a synchronous request-processing thread delegate to an asynchronous thread pool and block until that pool has computed the result -- after all, the request thread could just as well have done the work itself.
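To illustrate, the pointless hand-off looks roughly like this (the pool and render() helper are stand-ins):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class HandOffAntiPattern {
    static final ExecutorService pool = Executors.newFixedThreadPool(8);

    // Anti-pattern: the request thread submits to a pool and then blocks anyway,
    // so one thread is tied up for the full duration either way.
    static byte[] handle(String request) throws Exception {
        Future<byte[]> result = pool.submit(() -> render(request));
        return result.get(); // the request thread parks here; nothing gained
    }

    static byte[] render(String request) { return request.getBytes(); } // stand-in
}
```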
If you need to return a response, I'd therefore go for synchronous request processing whenever scalability permits (which it usually does).
Edit:
There are numerous ways to talk to the distributed back-end servers. You might simply use EJB (which, if I recall correctly, uses RMI under the hood). Or you might use web services behind a load balancer.

stopping/canceling disconnected GET request threads as soon as possible

I am using Jetty, version 7.0.1, if that matters.
Sometimes I have quite long-running tasks on the server which I would like to cancel/stop if the client disconnects (in the case of GET requests, not e.g. POST file uploads). It seems this does not happen, and the tasks continue to run to completion.
Perhaps I can use the ServletRequestListener.requestDestroyed listener to get notified of such tasks, but what is the recommended approach for stopping the request thread? What about releasing resources like database connections, file handles, or running tasks (executor service)?
What is the recommended approach in stopping such tasks as soon as possible?
First, I would recommend updating to the latest version of Jetty; we have fixed a ton since the 7.0 series.
Second, your best bet is to solve this problem by design: either use jetty-continuations to get async servlet support with the servlet 2.5 spec (which is Jetty 7), or update to servlet 3.0 (Jetty 8), and don't rely on the GET methods of the servlet API blocking while waiting for a response to send. Instead, process the request and then spawn a thread or use an executor future to perform the actions, calling back to the request when you have a payload or success message to return. The reason is that while you're inside the servlet API, blocking on the request processing, you are consuming resources and threads from your servlet thread pool... you'll be able to scale up much more cleanly by using continuations or the async servlets of 3.0...
You'll also be able to design a proper mechanism for managing these threads, along with things like timeouts and proper notification of exceptional conditions, and it will be testable outside of a servlet container that way.
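A rough sketch of the continuation flavor of this, assuming the org.eclipse.jetty.continuation API (the pool size and the long-running task are illustrative):

```java
import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.eclipse.jetty.continuation.Continuation;
import org.eclipse.jetty.continuation.ContinuationSupport;

// Jetty 7, servlet 2.5 + jetty-continuations: suspend the request, do the
// work on an application pool, then resume so a container thread writes.
public class LongTaskServlet extends HttpServlet {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        Continuation continuation = ContinuationSupport.getContinuation(req);
        if (continuation.isInitial()) {
            continuation.setTimeout(30_000);
            continuation.suspend(); // frees the container thread
            pool.submit(() -> {
                req.setAttribute("payload", runLongTask()); // hypothetical task
                continuation.resume(); // re-dispatch: a container thread writes
            });
        } else {
            String payload = (String) req.getAttribute("payload");
            resp.getWriter().write(payload != null ? payload : "timed out");
        }
    }

    private String runLongTask() { return "done"; } // stand-in

    @Override
    public void destroy() { pool.shutdownNow(); }
}
```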
imo at least :)
