Searching for "memcached java" on Google, the first result is Using Memcached with Java.
The author (who calls himself "Just some Random Asshole in the Internet!") proposes a Singleton based on net.spy.memcached. It basically creates 20 threads and connections by creating 20 instances of MemcachedClient, and for every request it picks one at random.
However, those threads and connections are never closed, and they pile up every time I hot swap the application during development (with warnings from Tomcat 7).
SEVERE: The web application [/MyAppName] appears to have started a thread named
[...] but has failed to stop it. This is very likely to create a memory leak.
Looking at the MemcachedClient JavaDoc, I see a method called shutdown whose only description is "Shut down immediately." Shut down what? The client? The server? I suppose it's the client, since the method is on MemcachedClient, and I suppose it closes the connection and terminates the thread. EDIT: yes, it shuts down the client.
Question 1: How do I force the execution of cleanup code in Tomcat 7 before the application is hot swapped?
Question 2: Is this approach to using memcached (with cleanup code) correct, or is it better to start over in a different way?
I think creating 20 memcache clients is silly - that's like creating 20 separate copies of your DB connection pool. The idea with that client is that it multiplexes a variety of requests with asynchronous I/O.
http://code.google.com/p/spymemcached/wiki/Optimizations
As far as shutting it down, simply call:
yourClient.shutdown() to shut down immediately, or
yourClient.shutdown(3, TimeUnit.SECONDS), for example, to allow some time for a more graceful shutdown.
That could be called from your Servlet's destroy() method, or from a context listener for your whole WAR.
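A minimal sketch of that idea, assuming a single shared client created at startup (the host/port and the "memcachedClient" attribute name are placeholders):

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.util.concurrent.TimeUnit;
    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;
    import javax.servlet.annotation.WebListener;
    import net.spy.memcached.MemcachedClient;

    @WebListener
    public class MemcachedLifecycleListener implements ServletContextListener {

        @Override
        public void contextInitialized(ServletContextEvent sce) {
            try {
                // One client is enough: it multiplexes requests over a single connection.
                MemcachedClient client =
                        new MemcachedClient(new InetSocketAddress("localhost", 11211));
                sce.getServletContext().setAttribute("memcachedClient", client);
            } catch (IOException e) {
                throw new RuntimeException("Could not connect to memcached", e);
            }
        }

        @Override
        public void contextDestroyed(ServletContextEvent sce) {
            MemcachedClient client =
                    (MemcachedClient) sce.getServletContext().getAttribute("memcachedClient");
            if (client != null) {
                // Give in-flight operations a moment to finish, then stop the I/O thread.
                client.shutdown(3, TimeUnit.SECONDS);
            }
        }
    }

Servlets can then fetch the client from the ServletContext instead of holding their own instances; Tomcat invokes contextDestroyed both on undeploy and on hot swap, so the "failed to stop it" warning should go away.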
I don't know anything about memcached, but you could probably write a custom context listener and put some kind of shutdown hook in it, so that when the context shuts down you can loop through the items in your singleton and shut them down.
It turned out to be a bug in the Java AWS SDK, not related to memcached; version 1.2.2 of the Java AWS SDK has this bug fixed.
Related
I have created a Spring Boot web application and deployed its WAR to a Tomcat container.
The application connects to MongoDB using async connections; I am using the mongodb-driver-async library for that.
At startup everything works fine, but as soon as the load increases, it shows the following exception on the DB connections:
org.springframework.web.context.request.async.AsyncRequestTimeoutException: null
at org.springframework.web.context.request.async.TimeoutDeferredResultProcessingInterceptor.handleTimeout(TimeoutDeferredResultProcessingInterceptor.java:42)
at org.springframework.web.context.request.async.DeferredResultInterceptorChain.triggerAfterTimeout(DeferredResultInterceptorChain.java:75)
at org.springframework.web.context.request.async.WebAsyncManager$5.run(WebAsyncManager.java:392)
at org.springframework.web.context.request.async.StandardServletAsyncWebRequest.onTimeout(StandardServletAsyncWebRequest.java:143)
at org.apache.catalina.core.AsyncListenerWrapper.fireOnTimeout(AsyncListenerWrapper.java:44)
at org.apache.catalina.core.AsyncContextImpl.timeout(AsyncContextImpl.java:131)
at org.apache.catalina.connector.CoyoteAdapter.asyncDispatch(CoyoteAdapter.java:157)
I am using the following software versions:
Spring boot -> 1.5.4.RELEASE
Tomcat (installed as standalone binary) -> apache-tomcat-8.5.37
Mongo DB version: v3.4.10
mongodb-driver-async: 3.4.2
As soon as I restart the Tomcat service, everything starts working fine again.
Please help: what could be the root cause of this issue?
P.S.: I am using DeferredResult and CompletableFuture to create async REST APIs.
I have also tried setting spring.mvc.async.request-timeout in the application and configured asyncTimeout in Tomcat, but I still get the same error.
It's probably obvious that Spring is timing out your requests and throwing AsyncRequestTimeoutException, which returns a 503 back to your client.
Now the question is, why is this happening? There are two possibilities:
1. These are legitimate timeouts. You mentioned that you only see the exceptions when the load on your server increases. So possibly your server just can't handle that load and its performance has degraded to the point where some requests can't complete before Spring times them out.
2. The timeouts are caused by your server failing to send a response to an asynchronous request due to a programming error, leaving the request open until Spring eventually times it out. It's easy for this to happen if your server doesn't handle exceptions well. If your server is synchronous, it's okay to be a little sloppy with exception handling, because unhandled exceptions will propagate up to the server framework, which will send a response back to the client. But if you fail to handle an exception in some asynchronous code, that exception will be caught elsewhere (probably in some thread pool management code), and there's no way for that code to know that there's an asynchronous request waiting on the result of the operation that threw the exception.
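To illustrate the second possibility, here is a minimal sketch of the defensive pattern that avoids it (the controller, endpoint, and loadOrdersAsync() are made-up names, not your code): the async callback completes the DeferredResult on both success and failure, so no request is ever left hanging.

    import java.util.concurrent.CompletableFuture;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;
    import org.springframework.web.context.request.async.DeferredResult;

    @RestController
    public class OrderController {

        @GetMapping("/orders")
        public DeferredResult<String> orders() {
            DeferredResult<String> result = new DeferredResult<>();
            CompletableFuture<String> future = loadOrdersAsync(); // hypothetical async call

            // whenComplete runs on both success and failure, so the DeferredResult
            // is always completed and Spring never has to time the request out.
            future.whenComplete((value, error) -> {
                if (error != null) {
                    result.setErrorResult(error);
                } else {
                    result.setResult(value);
                }
            });
            return result;
        }

        private CompletableFuture<String> loadOrdersAsync() {
            return CompletableFuture.supplyAsync(() -> "orders");
        }
    }

If any code path can return the DeferredResult without attaching such a completion callback, that request will sit open until the async timeout fires.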
It's hard to figure out what might be happening without knowing more about your application. But there are some things you could investigate.
First, try looking for resource exhaustion.
Is the garbage collector running all the time?
Are all CPUs pegged at 100%?
Is the OS swapping heavily?
If the database server is on a separate machine, is that machine showing signs of resource exhaustion?
How many connections are open to the database? If there is a connection pool, is it maxed out?
How many threads are running? If there are thread pools in the server, are they maxed out?
If something is at its limit, it is possibly the bottleneck that is causing your requests to time out.
Try setting spring.mvc.async.request-timeout to -1 and see what happens. Do you now get responses for every request, only slowly, or do some requests seem to hang forever? If it's the latter, that strongly suggests that there's a bug in your server that's causing it to lose track of requests and fail to send responses. (If setting spring.mvc.async.request-timeout appears to have no effect, then the next thing you should investigate is whether the mechanism you're using for setting the configuration actually works.)
A strategy that I've found useful in these cases is to generate a unique ID for each request and write the ID along with some contextual information every time the server either makes an asynchronous call or receives a response from an asynchronous call, and at various checkpoints within asynchronous handlers. If requests go missing, you can use the log information to figure out the request IDs and what the server was last doing with that request.
A similar strategy is to save each request ID into a map in which the value is an object that tracks when the request was started and what your server last did with that request. (In this case your server is updating this map at each checkpoint rather than, or in addition to, writing to the log.) You can set up a filter to generate the request IDs and maintain the map. If your filter sees the server send a 5xx response, you can log the last action for that request from the map.
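A rough sketch of that tracking filter, assuming your handlers read the "requestId" request attribute and call checkpoint() at interesting points (all names here are made up):

    import java.io.IOException;
    import java.util.Map;
    import java.util.UUID;
    import java.util.concurrent.ConcurrentHashMap;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletResponse;

    public class RequestTrackingFilter implements Filter {

        // requestId -> last recorded action for that request
        private static final Map<String, String> LAST_ACTION = new ConcurrentHashMap<>();

        // Handlers call this at each checkpoint, passing the id stored in the request.
        public static void checkpoint(String requestId, String action) {
            LAST_ACTION.put(requestId, action);
        }

        @Override
        public void init(FilterConfig filterConfig) {}

        @Override
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            String requestId = (String) req.getAttribute("requestId");
            if (requestId == null) {
                requestId = UUID.randomUUID().toString();
                req.setAttribute("requestId", requestId);
                checkpoint(requestId, "request received");
            }
            try {
                chain.doFilter(req, res);
            } finally {
                // Skip cleanup while the request is still in async processing; register
                // the filter for DispatcherType.ASYNC as well so it runs again on the
                // dispatch that actually writes the final response.
                if (!req.isAsyncStarted()) {
                    if (((HttpServletResponse) res).getStatus() >= 500) {
                        System.err.println("Request " + requestId + " failed; last action: "
                                + LAST_ACTION.get(requestId));
                    }
                    LAST_ACTION.remove(requestId);
                }
            }
        }

        @Override
        public void destroy() {}
    }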
Hope this helps!
Asynchronous tasks are arranged in a queue (pool) and processed in parallel, depending on the number of threads allocated. Not all asynchronous tasks are executed at the same time; some of them are queued. In such a system, getting AsyncRequestTimeoutException is normal behaviour.
If you are filling up the queues with asynchronous tasks that are unable to execute under pressure, increasing the timeout will only delay the problem. You should focus instead on the problem itself:
Reduce the execution time (through various optimizations) of the asynchronous tasks. This will relieve the pooling of async tasks. It obviously requires coding.
Increase the number of CPUs allocated in order to run the parallel tasks more efficiently.
Increase the number of threads servicing the executor of the driver.
The Mongo async driver uses AsynchronousSocketChannel, or Netty if Netty is found on the classpath. In order to increase the number of worker threads servicing the async communication, you should use:
    MongoClientSettings settings = MongoClientSettings.builder()
            .streamFactoryFactory(new NettyStreamFactoryFactory(eventLoopGroup, ByteBufAllocator.DEFAULT))
            .build();
where eventLoopGroup would be a new io.netty.channel.nio.NioEventLoopGroup(nThreads) and ByteBufAllocator comes from io.netty.buffer; the nThreads argument of NioEventLoopGroup sets the number of threads servicing your async communication.
Read more about Netty configuration here https://mongodb.github.io/mongo-java-driver/3.2/driver-async/reference/connecting/connection-settings/
I created a web application that needs to do some cleanup on shutdown. This cleanup takes about a minute, and it's completely OK for it to do so.
When I deploy my webapp onto Tomcat 8 and then stop it, my ContextListener gets called and the cleanup begins. But it seems like Tomcat stops my thread the hard way, and it won't complete anymore. At least on Tomcat 6 that wasn't an issue.
Any ideas how to configure Tomcat 8 to stop it from misbehaving?
Partial Answer:
I found out it has something to do with a performance optimization I made. I used startStopThreads="2" to start my applications in parallel, which works out well, but on shutdown this also seems to kill my threads.
If you have a task which is to be performed on shutdown, I would add it as a shutdown hook. Most likely Tomcat 8 is calling System.exit(), which is a normal thing to do; this kills all user threads but starts the shutdown hooks.
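A minimal sketch of that idea, assuming the cleanup logic lives in a (hypothetical) runCleanup() method; the hook can be registered once at startup, for example from a ServletContextListener:

    // Runs when the JVM exits normally (including via System.exit()),
    // but not on a hard kill such as kill -9.
    Runtime.getRuntime().addShutdownHook(new Thread(() -> runCleanup(), "app-cleanup"));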
A better solution is to never leave the system in a state where you really need this, i.e. you cannot assume an application will die gracefully.
If you are waiting for clients to disconnect, I suggest you add a shutting-down phase. During this phase you refuse new connections, move connections to another server, or attempt to gracefully tell existing ones that you are going away. After a short period or a timeout, you then shut down.
RabbitMQ RPC
I decided to use RabbitMQ RPC as described here.
My Setup
Incoming web requests (on Tomcat) dispatch RPC requests over RabbitMQ to different services and assemble the results. I use one reply queue with one custom consumer that listens to all RPC responses and collects them, keyed by correlation ID, in a simple hash map. Nothing fancy there.
This works great in a simple integration test on controller level.
Problem
When I try to do this in a web project deployed on Tomcat, Tomcat refuses to shut down. jstack and some debugging showed me that a thread is spawned to listen for the RPC response and is blocking Tomcat from shutting down gracefully. I guess this is because the thread is created at application level instead of request level and is not managed by Tomcat. When I set breakpoints in Servlet.destroy() or ServletContextListener.contextDestroyed(ServletContextEvent sce), they are not reached, so I see no way to clean things up manually.
Alternative
As an alternative, I could use a new reply queue (and a simple QueueingConsumer) for each web request. I've tested this; it works, and Tomcat shuts down as it should. But I'm wondering if this is the way to go. Can a RabbitMQ cluster deal with thousands (or even millions) of short-lived queues/consumers? I can imagine the queues aren't that big, but still: constantly broadcasting to all cluster nodes, the total memory footprint...
Question
So in short: is it wise to create a queue for each incoming web request, or how should I set up RabbitMQ with one queue and consumer so Tomcat can shut down gracefully?
I found a solution to my problem:
The Java client creates its own threads. There is the possibility to pass your own ExecutorService when creating a new connection. Doing so in ServletContextListener.contextInitialized(), one can keep track of the ExecutorService and shut it down manually in ServletContextListener.contextDestroyed():
executorService.shutdown();
executorService.awaitTermination(20, TimeUnit.SECONDS);
I used Executors.newCachedThreadPool(), as the threads have many short executions and get cleaned up after being idle for more than 60s.
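For reference, a minimal sketch of such a listener, assuming a local broker (the host and the "rabbitConnection" attribute name are placeholders, not part of the original setup):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;
    import javax.servlet.annotation.WebListener;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;

    @WebListener
    public class RabbitLifecycleListener implements ServletContextListener {

        private ExecutorService executorService;
        private Connection connection;

        @Override
        public void contextInitialized(ServletContextEvent sce) {
            executorService = Executors.newCachedThreadPool();
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");
            try {
                // Consumer callbacks run on threads from this executor,
                // so the webapp controls their lifecycle, not the RabbitMQ client.
                connection = factory.newConnection(executorService);
                sce.getServletContext().setAttribute("rabbitConnection", connection);
            } catch (Exception e) {
                throw new RuntimeException("Could not connect to RabbitMQ", e);
            }
        }

        @Override
        public void contextDestroyed(ServletContextEvent sce) {
            try {
                if (connection != null) {
                    connection.close();
                }
            } catch (Exception ignored) {
                // best effort on shutdown
            }
            executorService.shutdown();
            try {
                executorService.awaitTermination(20, TimeUnit.SECONDS);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }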
This is the link to the RabbitMQ Google group thread (thanks to Michael Klishin for pointing me in the right direction).
My application is allowed to have multiple instances running, and I would like to log events from all running instances. I am currently using java.util.logging's SocketHandler to centralize the logging. When the first instance starts, it also starts a new socket server thread. The problem is that when this instance is closed, the server thread is also closed, and the log method (from another instance) throws an exception. I am not considering running it as a separate process (using Runtime exec) because then I would not be able to shut it down gracefully from my application.
So is there a way for another instance, on seeing the server down, to create a new server thread? A similar approach is used in the H2 database's AUTO_SERVER mode, where it automatically switches between client and server mode.
So any suggestions on how to do this?
I ended up using Logback's prudent mode.
I think that you might need a singleton with a factory method that initializes the socket server if it is not already running:
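A rough sketch of that idea (the port number and the accept loop are assumptions): each instance asks the singleton for the logging server; whichever instance manages to bind the port becomes the server, the others stay clients.

    import java.io.IOException;
    import java.net.BindException;
    import java.net.ServerSocket;

    public final class LogServerHolder {

        private static final int PORT = 4560; // assumed log server port
        private static LogServerHolder instance;

        private ServerSocket serverSocket; // non-null only in the instance that owns the server

        private LogServerHolder() {}

        public static synchronized LogServerHolder getInstance() {
            if (instance == null) {
                instance = new LogServerHolder();
                instance.startServerIfNotRunning();
            }
            return instance;
        }

        public void startServerIfNotRunning() {
            try {
                serverSocket = new ServerSocket(PORT);
                // We won the race: start the thread that accepts log connections.
                new Thread(() -> acceptLoop(serverSocket), "log-server").start();
            } catch (BindException e) {
                // Port already taken: another instance is the server, we remain a client.
            } catch (IOException e) {
                throw new RuntimeException("Could not start log server", e);
            }
        }

        private void acceptLoop(ServerSocket socket) {
            // ... accept connections and forward log records to a file, omitted here ...
        }
    }

This sketch does not solve fail-over by itself: when a log write fails because the server instance went away, the client would have to call startServerIfNotRunning() again and retry the write.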
We've got a normal-ish server stack including BlazeDS, Tomcat and Hibernate.
We'd like to arrange things such that if certain errors (especially AssertionError) are thrown, the current thread is considered to be in an unknown state and won't be used for further HTTP requests. (Partly because we're storing some things, such as the Hibernate transaction session, in thread-local storage. Now, we can catch throwables, make sure to roll back transactions and rethrow, but there's no guarantee about what other code may have left who-knows-what in thread-local storage.)
Tomcat with the default thread pool behavior reuses threads. We tried specifying our own executor, which seems to be the most specific method of changing its thread pool behavior, but it doesn't always call Executor.execute() with a new task for each request. (Most likely it reuses the same execution context for all requests in the same HTTP connection.)
One option is to disable keepalive, so that there's only one request per HTTP connection, but that's ugly.
Anyway, I'd like to know: is there a way to tell Tomcat not to reuse a thread, or to kill or exit the thread so that Tomcat is forced to create a new one?
(From the Tomcat source, it appears Tomcat will close the connection and abandon the task/thread after sending an HTTP 500 response, but I don't know how to get BlazeDS to generate a 500 response; that's another angle I'd like to know more about.)
I would strongly suggest simply getting rid of your use of thread-local storage, or at least coming up with a method to clear the thread-local storage when a request first enters the pipeline (with a <filter> in web.xml, for example).
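A minimal sketch of such a filter, assuming your thread-local state lives in known holders (SessionHolder here is a made-up stand-in for whatever you actually use):

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;

    public class ThreadLocalCleanupFilter implements Filter {

        @Override
        public void init(FilterConfig filterConfig) {}

        @Override
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            SessionHolder.clear(); // defensive: wipe anything left by a previous request
            try {
                chain.doFilter(req, res);
            } finally {
                SessionHolder.clear(); // always leave the thread clean for reuse
            }
        }

        @Override
        public void destroy() {}
    }

Mapped to /* in web.xml, it runs before anything else in the request pipeline, so even a recycled thread starts each request with clean state.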
Having to reconfigure something basic about Tomcat to get it to work with your app points to a code smell.