How do we start a thread from a servlet? - java

What's the recommended way of starting a thread from a servlet?
Example: One user posts a new chat message to a game room. I want to send a push notification to all other players connected to the room, but it doesn't have to happen synchronously. Something like:
public class MyChatServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest request,
                          HttpServletResponse response) {
        // Update the database with the new chat message.
        final String msg = ...;
        putMsgInDatabaseForGameroom(msg);
        // Now spawn a thread which will deal with communicating
        // with Apple's APNs service; this can be done asynchronously.
        new Thread() {
            @Override
            public void run() {
                talkToApple(msg);
                someOtherUnimportantStuff(msg);
            }
        }.start();
        // We can send a reply back to the caller now.
        // ...
    }
}
I'm using Jetty, but I don't know if the web container really matters in this case.
Thanks

What's the recommended way of starting a thread from a servlet?
You should be very careful when writing threaded code in a servlet, because errors (like memory leaks or missing synchronization) can cause bugs that are very hard to reproduce, or can bring down the whole server.
You can start a thread by calling its start() method, but as far as I know the recommended approach is startAsync (Servlet 3.0); a sketch follows below.
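For reference, a minimal sketch of what the startAsync approach could look like for the question's scenario (the @WebServlet mapping, the timeout value, the request parameter name, and the talkToApple stub are my assumptions, not from the original code). Note that, unlike the fire-and-forget thread in the question, the container keeps the response open until complete() is called:

import java.io.IOException;
import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/chat", asyncSupported = true)
public class AsyncChatServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        final String msg = request.getParameter("msg"); // hypothetical parameter name
        final AsyncContext ctx = request.startAsync();
        ctx.setTimeout(30_000); // assumed limit; the async operation fails after 30s
        ctx.start(() -> { // runs on a container-managed thread
            talkToApple(msg);
            ctx.complete(); // releases the request/response pair
        });
    }

    private void talkToApple(String msg) {
        // stub for the APNs call from the question
    }
}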
but I don't know if the web container really matters in this case.
Yes, it matters. Most web servers (Java and otherwise, including JBoss) follow a "one thread per request" model, i.e. each HTTP request is fully processed by exactly one thread.
This thread will often spend most of its time waiting for things like DB requests. The web container will create new threads as necessary.
Hope this helps.

I would use a ThreadPoolExecutor and submit the tasks to it. The executor can be configured with a fixed/varying number of threads, and with a work queue that can be bounded or not.
The advantages:
The total number of threads (as well as the queue size) can be bounded, so you have good control on resource consumption.
Threads are pooled, eliminating the per-request overhead of thread creation
You can choose a task rejection policy (applied when the pool is at full capacity)
You can easily monitor the load on the pool
The executor mechanism supports convenient ways of tracking the asynchronous operation (using Future)
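As an illustration, here is a minimal sketch of such an executor (the pool sizes, queue capacity, and rejection policy are arbitrary assumptions):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class NotificationExecutor {
    // Created once per application (e.g. in a ServletContextListener)
    // and shut down on undeploy; never create a new pool per request.
    static final ThreadPoolExecutor POOL = new ThreadPoolExecutor(
            5, 10,                                 // core / max pool size
            60L, TimeUnit.SECONDS,                 // keep-alive for idle threads
            new ArrayBlockingQueue<Runnable>(100), // bounded work queue
            new ThreadPoolExecutor.AbortPolicy()); // rejection policy when saturated
}

Inside doPost() a task would then be submitted with something like Future<?> f = NotificationExecutor.POOL.submit(() -> talkToApple(msg)); and the returned Future can be used to track or cancel the asynchronous operation.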

In general, that is the way; you can start a thread anywhere in a servlet web application.
In particular, though, you should protect your JVM from starting too many threads in response to HTTP requests. Someone may send a lot (or very, very many) of requests, and at some point your JVM will probably die with an OutOfMemoryError or something similar.
So the better choice is to use one of the queues found in the java.util.concurrent package.

One option would be to use an ExecutorService and its implementations, such as ThreadPoolExecutor, to re-use pooled threads and thus reduce the creation overhead.
You can also use JMS to queue your tasks for later execution.

Related

Thread local behaviour in spring boot

As we know, Tomcat has approximately 200 threads by default, and Jetty likewise has a default thread count in its thread pool. So if we set something in a ThreadLocal per request, will it stay in the thread for the thread's lifetime, or will Tomcat clear the ThreadLocal after each request?
If we set something in userContext in a filter, do we need to clear it every time the filter exits?
Or will the web server create a new thread every time, if we don't have a thread pool configuration?
public static final ThreadLocal<UserContextDto> userContext = new ThreadLocal<>();
Yes, you need to clear the ThreadLocal; Tomcat won't clear ThreadLocals for you.
No, a new thread is not created every time. A thread from the pool is used to serve a request and is returned to the pool once the request is complete.
This applies not only to Tomcat but to Jetty and Undertow as well. Creating a thread for every request would be expensive in terms of both resources and time.
No, Tomcat will not clear ThreadLocals that your code creates, which means they will remain and could pollute subsequent requests.
So whenever you create one, make sure you clear it before the request exits.
It should also be noted that subsequent requests - even using the identical URL - could well be executed in a totally different thread, so ThreadLocals are not a mechanism for saving state between requests. For this, something like SessionBeans could be used.
If you put something in a ThreadLocal in a Thread that is not 100% under your control (i.e. one in which you are invoked from other code, like for a HTTP request), you need to clear whatever you set before you leave your code.
A try/finally structure is a good way to do that.
A threadpool can't do it for you, because the Java API does not provide a way to clear a thread's ThreadLocal variables. (Which is arguably a shortcoming in the Java API)
Not doing so risks a memory leak, although it's bounded by the size of the thread pool if you have one.
Once the same thread gets assigned again to the code that knows about the ThreadLocal, you'll see the old value from the previous request if you didn't remove it. It's not good to depend on that. It could lead to hard to trace bugs, security holes, etc.
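A minimal sketch of that try/finally pattern in a filter, using the question's userContext (buildUserContext is a hypothetical helper; init/destroy are omitted since they have default implementations in Servlet 4.0+):

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

public class UserContextFilter implements Filter {
    public static final ThreadLocal<UserContextDto> userContext = new ThreadLocal<>();

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        try {
            userContext.set(buildUserContext(request)); // hypothetical helper
            chain.doFilter(request, response);
        } finally {
            userContext.remove(); // always runs, so the pooled thread goes back clean
        }
    }

    private UserContextDto buildUserContext(ServletRequest request) {
        return new UserContextDto(); // placeholder
    }
}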

Long Polling in Spring

We have a somewhat unique case where we need to interface with an outside API that requires us to long poll their endpoint for what they call real time events.
The thing is we may have as many as 80,000 people/devices hitting this endpoint at any given time, listening for events, 1 connection per device/person.
When a client makes a request to our Spring service to long poll for events, our service in turn makes an async call to the outside API to long poll for events. The outside API specifies a minimum long-poll timeout of 180 seconds.
So here we have a situation where a thread pool with a queue will not work, because if we have a thread pool with something like (5 min, 10 max, 10 queue) then the 10 threads getting worked on may hog the spotlight and the 10 in queue will not get a chance until one of the current 10 are done.
We need to either serve the request or fail it (we will put load balancers etc. behind it), but we don't want to leave a client hanging without actual polling happening.
We have been looking into using DeferredResult for this, and returning that from the controller.
Something to the tune of
@RequestMapping(value = "test/deferredResult", method = RequestMethod.GET)
DeferredResult<ResponseEntity> testDeferredResult() {
    final DeferredResult<ResponseEntity> deferredResult = new DeferredResult<ResponseEntity>();
    CompletableFuture.supplyAsync(() -> testService.test())
            .whenCompleteAsync((result, throwable) -> deferredResult.setResult(result));
    return deferredResult;
}
I am questioning if I am on the right path, and also should I provide an executor and what kind of executor (and configuration) to the CompletableFuture.supplyAsync() method to best accomplish our task.
I have read various articles, posts, and such and am wanting to see if anyone has any knowledge that might help our specific situation.
The problem you are describing does not sound like one that can be solved nicely if you are using blocking IO. So you are on the right path, because DeferredResult allows you to produce the result using any thread, without blocking the servlet-container thread.
With regard to calling a long-polling API upstream, you need an NIO solution as well. If you use a Netty client, you can manage several thousand sockets using a single thread. When the NIO selector in Netty detects data, you get a channel callback that is eventually delegated to a thread in the Netty worker pool, from which you can call deferredResult.setResult. If you don't do blocking IO, the worker pool is usually sized to the number of CPU cores; otherwise you may need more threads.
There are still a number of challenges.
You probably need more than one server (or network interface) since there are only 65K ports.
Sockets in Java do not have write timeouts, so if a client refuses to read data from the socket and you send more data than your socket buffer holds, you will block the Netty worker thread(s) and then everything stops (a reverse slow-loris attack). This is a classic problem in large async setups, and one of the reasons for using frameworks like Hystrix (by Netflix).
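On the narrower question of what to pass to CompletableFuture.supplyAsync: a minimal sketch with a dedicated pool and a timeout on the DeferredResult (the pool size and 190-second timeout are arbitrary assumptions, and testService is the question's own service; note that with blocking IO the pool size caps the number of concurrent upstream polls, which is exactly why the NIO route described above scales better):

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.context.request.async.DeferredResult;

// Dedicated pool, created once for the application rather than per request.
private final ExecutorService pollingPool = Executors.newFixedThreadPool(200);

@RequestMapping(value = "test/deferredResult", method = RequestMethod.GET)
DeferredResult<ResponseEntity> testDeferredResult() {
    // Slightly longer than the upstream 180-second long-poll window.
    DeferredResult<ResponseEntity> deferredResult = new DeferredResult<>(190_000L);
    deferredResult.onTimeout(() -> deferredResult.setErrorResult(
            new ResponseEntity<>(HttpStatus.SERVICE_UNAVAILABLE)));
    CompletableFuture.supplyAsync(() -> testService.test(), pollingPool)
            .whenComplete((result, throwable) -> {
                if (throwable != null) {
                    deferredResult.setErrorResult(throwable); // propagate the failure
                } else {
                    deferredResult.setResult(result);
                }
            });
    return deferredResult;
}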

How to use Multithreading concept in Spring MVC for subsequent operations

I would like to send an email and update activity logs after a profile is updated successfully in my web application. For sending mails and updating activity logs, I would like to use threads so that the profile-update response can be sent back to the client immediately and the subsequent operations can be taken care of by the threads. Please suggest an implementation.
There are numerous ways to achieve this, the fact that it's a Spring MVC application is almost irrelevant.
If you're using Java 8 then you can simply call upon the executor service to give you a thread from its pool:
String emailAddress = // get email address...
ExecutorService executorService = Executors.newSingleThreadExecutor();
executorService.submit(() -> {
    emailService.sendNotification(emailAddress);
});
Pre-Java 8:
final String emailAddress = "";
Thread thread = new Thread(new Runnable() {
    @Override
    public void run() {
        emailService.sendNotification(emailAddress);
    }
});
thread.start();
If you're creating a more complex application then you should look into using a message queue (ActiveMQ is good). This gives you more control and visibility, and it scales well as you add more asynchronous tasks; it also means you won't starve your application server of threads if there are lots of registrations at the same time.
You can use a BlockingQueue and implement a producer-consumer model to solve the problem. Your existing program acts as the producer, adding a token to the BlockingQueue, while an executor (created via Executors.newFixedThreadPool) performs all your subsequent operations. You can refer to the Javadocs and create your Spring context (as XML or annotations).
You can also refer to CompletionService.
Spawning a thread to send an email whenever a profile is saved is not a good idea, because it might result in too many threads, and context switching might delay completion. Hence the suggestion to use a fixed thread pool.
A JMS queue could be used, but it looks like overkill for the given scenario. Hence the suggestion to use a BlockingQueue.
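A minimal sketch of the producer-consumer suggestion (the queue capacity, consumer count, and EmailService wiring are assumptions for illustration):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class EmailDispatcher {
    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(1000); // bounded
    private final ExecutorService consumers = Executors.newFixedThreadPool(4);
    private final EmailService emailService; // same hypothetical service as above

    public EmailDispatcher(EmailService emailService) {
        this.emailService = emailService;
        for (int i = 0; i < 4; i++) {
            consumers.submit(() -> {
                while (!Thread.currentThread().isInterrupted()) {
                    try {
                        String emailAddress = queue.take(); // blocks until a token arrives
                        this.emailService.sendNotification(emailAddress);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt(); // restore the flag and exit
                    }
                }
            });
        }
    }

    // Producer side: called from the profile-update request thread.
    public boolean enqueue(String emailAddress) {
        return queue.offer(emailAddress); // returns false instead of blocking when full
    }
}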

Multithread GAE servlets to handle concurrent users

I'd like to multithread my GAE servlets so that the same servlet on the same instance can handle up to 10 (on frontend instance I believe the max # threads is 10) concurrent requests from different users at the same time, timeslicing between each of them.
public class MyServlet extends HttpServlet {
    private ExecutorService executor;

    @Override
    public void doGet(HttpServletRequest request, HttpServletResponse response) {
        if (executor == null) {
            ThreadFactory threadFactory = ThreadManager.currentRequestThreadFactory();
            executor = Executors.newCachedThreadPool(threadFactory);
        }
        Future<MyResult> result = executor.submit(new MyTask(request));
        writeResponseAndReturn(response, result);
    }
}
So basically when GAE starts up, the first time it gets a request to this servlet, an Executor is created and then saved. Then each new servlet request uses that executor to spawn a new thread. Obviously everything inside MyTask must be thread-safe.
What I'm concerned about is whether or not this truly does what I'm hoping it does. That is, does this code create a non-blocking servlet that can handle multiple requests from multiple users at the same time? If not, why and what do I need to do to fix it? And, in general, is there anything else that a GAE maestro can spot that is dead wrong? Thanks in advance.
I don't think your code would work.
The doGet method runs in threads managed by the servlet container. When a request comes in, a servlet thread is occupied, and it will not be released until the doGet method returns. In your code, executor.submit returns a Future object. To get the actual result you need to invoke get on that Future, which blocks until MyTask finishes its work. Only after that does doGet return and new requests can kick in.
I am not familiar with GAE, but according to their docs, you can declare your servlet as thread-safe and then the container will dispatch multiple requests to each web server in parallel:
<!-- in appengine-web.xml -->
<threadsafe>true</threadsafe>
You implicitly asked two questions, so let me answer both:
1. How can I get my AppEngine Instance to handle multiple concurrent requests?
You really only need to do two things:
Add the statement <threadsafe>true</threadsafe> to your appengine-web.xml file, which you can find in the war\WEB-INF folder.
Make sure that the code inside all your request handlers is actually thread-safe, i.e. use only local variables in your doGet(...), doPost(...), etc. methods or make sure you synchronize all access to class or global variables.
This will tell the AppEngine instance server framework that your code is thread-safe and that you are allowing it to call all of your request handlers multiple times in different threads to handle several requests at the same time. Note: AFAIK, it is not possible to set this on a per-servlet basis. So, ALL your servlets need to be thread-safe!
So, in essence, the executor-code you posted is already included in the server code of each AppEngine instance, and actually calls your doGet(...) method from inside the run method of a separate thread that AppEngine creates (or reuses) for each request. Basically doGet() already is your MyTask().
The relevant part of the Docs is here (although it doesn't really say much): https://developers.google.com/appengine/docs/java/config/appconfig#Using_Concurrent_Requests
2. Is the posted code useful for this (or any other) purpose?
AppEngine in its current form does not allow you to create and use your own threads to accept requests. It only allows you to create threads inside your doGet(...) handler, using the currentRequestThreadFactory() method you mentioned, but only to do parallel processing for this one request and not to accept a second one in parallel (this happens outside doGet()).
The name currentRequestThreadFactory() might be a little misleading here. It does not mean that it will return the current Factory of RequestThreads, i.e. threads that handle requests. It means that it returns a Factory that can create Threads inside the currentRequest. So, unfortunately it is actually not even allowed to use the returned ThreadFactory beyond the scope of the current doGet() execution, like you are suggesting by creating an Executor based on it and keeping it around in a class variable.
For frontend instances, any threads you create inside a doGet() call will get terminated immediately when your doGet() method returns. For backend instances, you are allowed to create threads that keep running, but since you are not allowed to open server sockets for accepting requests inside these threads, these will still not allow you to manage the request handling yourself.
You can find more details on what you can and cannot do inside an appengine servlet here:
The Java Servlet Environment - The Sandbox (specifically the Threads section)
For completeness, let's see how your code can be made "legal":
The following should work, but it won't make a difference in terms of your code being able to handle multiple requests in parallel. That will be determined solely by the <threadsafe>true</threadsafe> setting in your appengine-web.xml. So, technically, this code is just really inefficient and splits an essentially linear program flow across two threads. But here it is anyway:
public class MyServlet extends HttpServlet {
    @Override
    public void doGet(HttpServletRequest request, HttpServletResponse response) {
        ThreadFactory threadFactory = ThreadManager.currentRequestThreadFactory();
        ExecutorService executor = Executors.newCachedThreadPool(threadFactory);
        Future<MyResult> result = executor.submit(new MyTask(request)); // Fires off request handling in a separate thread
        writeResponse(response, result.get()); // Waits for thread to complete and builds response. After that, doGet() returns
    }
}
Since you are already inside a separate thread that is specific to the request you are currently handling, you should definitely save yourself the "thread inside a thread" and simply do this instead:
public class MyServlet extends HttpServlet {
    @Override
    public void doGet(HttpServletRequest request, HttpServletResponse response) {
        writeResponse(response, new MyTask(request).call()); // Delegate request handling to MyTask in the current thread and write out the returned response
    }
}
Or, even better, just move the code from MyTask.call() into the doGet() method. ;)
Aside - Regarding the limit of 10 simultaneous servlet threads you mentioned:
This is a (temporary) design-decision that allows Google to control the load on their servers more easily (specifically the memory use of servlets).
You can find more discussion on those issues here:
Issue 7927: Allow configurable limit of concurrent requests per instance
Dynamic Backend Instance Scaling
If your bill shoots up due to increased latency, you may not be refunded the charges incurred
This topic has been bugging the heck out of me, too, since I am a strong believer in ultra-lean servlet code, so my usual servlets could easily handle hundreds, if not thousands, of concurrent requests. Having to pay for more instances due to this arbitrary limit of 10 threads per instance is a little annoying to me to say the least. But reading over the links I posted above, it sounds like they are aware of this and are working on a better solution. So, let's see what announcements Google I/O 2013 will bring in May... :)
I second the assessments of ericson and Markus A.
If however, for some reason (or for some other scenario) you want to follow the path that uses your code snippet as a starting point, I'd suggest that you change your executor definition to:
private static ExecutorService executor;
so that it becomes static across instances.

Logic for controlling concurrency in a block/method

1) My environment is a web application; I am developing a servlet to receive requests.
A) In some block/method I want to limit concurrency to no more than 5 requests.
B) If there are already 5 requests in that block, a newly arriving request must wait up to 60 seconds and then throw an error.
C) If there are more than 30 sleeping/waiting requests, the 31st request will be rejected with an error.
How do I do this?
2) (Optional question) Beyond the above, I have to distribute the control logic to all clustered hosts.
I plan to use Hazelcast to share the control state (e.g. the current counter).
I see they provide BlockingQueue and ExecutorService, but I have no idea how to use them in my case.
Please recommend an approach if you have an idea.
For A take a look at this: http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/Semaphore.html
For B take a look at Object.wait() and Object.notify()
C should be easy if you have A and B.
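Putting A, B and C together: Semaphore.tryAcquire with a timeout already covers B's 60-second wait without hand-rolled wait()/notify(), and an AtomicInteger can approximate C's waiter limit. A minimal sketch (class and method names are mine):

import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ConcurrencyGate {
    private final Semaphore permits = new Semaphore(5, true); // A: at most 5 inside the block
    private final AtomicInteger waiters = new AtomicInteger();

    public void execute(Runnable work) throws InterruptedException {
        if (waiters.incrementAndGet() > 30) { // C: reject the 31st waiting request
            waiters.decrementAndGet();
            throw new IllegalStateException("too many waiting requests");
        }
        try {
            if (!permits.tryAcquire(60, TimeUnit.SECONDS)) { // B: wait up to 60 seconds
                throw new IllegalStateException("timed out waiting for a permit");
            }
        } finally {
            waiters.decrementAndGet();
        }
        try {
            work.run();
        } finally {
            permits.release();
        }
    }
}

For question 2, Hazelcast also provides a distributed ISemaphore, which could take the place of the local java.util.concurrent.Semaphore when the count must be shared across the cluster.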
The answers by @Roman and @David Soroko say how to do this within a servlet (as the OP asked).
However, this approach has the problem that Tomcat has to allocate a thread to each request so that they can participate in the queuing/timeout logic implemented by the servlet. Each of those threads uses memory and other resources. This does not scale well. And if you don't configure enough threads, requests will either be dropped by the Tomcat request dispatcher or be queued/timed out using different logic.
An alternative approach is to use a non-servlet architecture in the webserver; e.g. Grizzly and more specifically Grizzly Comet. This is a big topic, and frankly I don't know enough about it to go deeply into the implementation details.
EDIT - In the servlet model, every request is allocated to a single thread for its entire lifetime. For example, in a typical "server push" model, each active client has an outstanding HTTP request asking the server for more data. When new data arrives in the server, the server sends a response and the client immediately sends a new request. In the classic servlet implementation model, this means that the server has to have a request "in progress" ... and a thread ... for each active client, even though most of the threads are just waiting for data to arrive.
In a scalable architecture, you would detach the request from the thread so that the thread could be used for processing another request. Later (e.g. when the data "arrived" in the "server push" example), the request would be attached to a thread (possibly a different one) to continue processing. In Grizzly, I understand that this is done using an event-based processing model, but I imagine you could also use a coroutine-based model.
Try semaphores:
A counting semaphore. Conceptually, a semaphore maintains a set of permits. Each acquire() blocks if necessary until a permit is available, and then takes it. Each release() adds a permit, potentially releasing a blocking acquirer. However, no actual permit objects are used; the Semaphore just keeps a count of the number available and acts accordingly.
