I'm researching whether JavaMail is thread-safe, in particular in a situation with many Sessions corresponding to different users, several SMTP servers, creation of MIME messages, and use of the transport.sendMessage method. I know JavaMail is oriented toward desktop use, which makes me suspect it may not have been built with threading in mind, and I'm wondering if anyone has such experience.
Admittedly the thread safety rules for JavaMail are not well documented, but hopefully they mostly match what you would expect.
Multiple threads can use a Session.
Since a Transport represents a connection to a mail server, and only a single thread can use the connection at a time, a Transport will synchronize access from multiple threads to maintain thread safety, but you'll really only want to use it from a single thread.
Similarly, a Store can be used by multiple threads, but access to the underlying connection will be synchronized and single threaded.
A Message should only be modified by a single thread at a time, but multiple threads should be able to read a message safely (although it's not clear why you would want to do that).
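To make those rules concrete, here is a minimal sketch of the pattern they suggest: share one Session across threads, but give each sending thread its own message and its own Transport. (The SMTP host and the message fields are placeholders, not anything prescribed by the API.)

    import java.util.Properties;
    import javax.mail.Message;
    import javax.mail.Session;
    import javax.mail.Transport;
    import javax.mail.internet.InternetAddress;
    import javax.mail.internet.MimeMessage;

    public class MailSender {
        // One Session can safely be shared by all threads.
        private static final Session SESSION = Session.getInstance(buildProperties());

        private static Properties buildProperties() {
            Properties props = new Properties();
            props.put("mail.smtp.host", "smtp.example.com"); // placeholder host
            return props;
        }

        // Each worker thread builds its own message and opens its own Transport,
        // so no SMTP connection is ever shared between threads.
        public static void send(String from, String to, String subject, String body)
                throws Exception {
            MimeMessage msg = new MimeMessage(SESSION);
            msg.setFrom(new InternetAddress(from));
            msg.setRecipient(Message.RecipientType.TO, new InternetAddress(to));
            msg.setSubject(subject);
            msg.setText(body);

            Transport transport = SESSION.getTransport("smtp");
            try {
                transport.connect();                            // per-thread connection
                transport.sendMessage(msg, msg.getAllRecipients());
            } finally {
                transport.close();
            }
        }
    }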
The JavaMail dispatcher threads don't seem to time out if the server doesn't respond in time; this ends up blocking all available threads.
Tested this behavior with both 1.4.3 & 1.4.5.
I have read a lot of material to try to clearly understand the gains that a non-blocking web application server like Jetty can or can't offer.
So far what I understand (in part by referring to this: How do Jetty and other containers leverage NIO while sticking to the Servlet specification?) is that with a non-blocking I/O model, a web server like Jetty runs a single selector thread (or one per CPU core) that determines which connections are ready for some I/O. Connections that are ready are then dispatched to an internal thread pool that processes the requests.
I can see how such an architecture could allow you to serve many more connections with far fewer resources. However, what I am not clear about is this:
If I wrote a servlet that ran a long running database operation using a standard JDBC driver performing blocking I/O, wouldn't the handler thread dispatched from the pool to handle this request block?
And if requests came in faster than database requests are fulfilled, the handler thread pool would be exhausted at some point?
So with an application such as this, is there any benefit to running on a non-blocking Jetty web server? Is the non-blocking benefit only truly realized if the servlet itself uses another layer of non-blocking access to the database? Or is there something I am missing?
Please do explain if there's some magic through which Jetty will pay less of a price for the blocking database operations than, say, a blocking web server.
P.S.: For contrast, I read about Node.js here - How the single threaded non blocking IO model works in Node.js - and it seems to suggest that Node uses libuv underneath and applies other techniques to translate all blocking operations in code (such as database access and sleep()) into event callbacks, ensuring the event loop and the internal thread pool never get blocked in a blocking callback. It's still a little gobbledygook to me, but assuming that's true for Node, can Jetty promise the same? Even for servlets etc. that are not written in a non-blocking way?
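For what it's worth, here is a minimal Servlet 3.0 async sketch of the workaround usually suggested for this situation (the servlet name, URL pattern, pool size, and runSlowQuery() are all hypothetical). It does not make the JDBC call non-blocking; it only moves the blocking work onto an application-owned pool so the container's request threads are freed immediately.

    import java.io.IOException;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import javax.servlet.AsyncContext;
    import javax.servlet.ServletException;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    @WebServlet(urlPatterns = "/report", asyncSupported = true)
    public class ReportServlet extends HttpServlet {
        // The blocking JDBC work still occupies a thread, but one from this
        // application-owned pool rather than one of the container's request threads.
        private final ExecutorService dbExecutor = Executors.newFixedThreadPool(20);

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            final AsyncContext async = req.startAsync();
            dbExecutor.submit(new Runnable() {
                @Override
                public void run() {
                    try {
                        String result = runSlowQuery();   // hypothetical blocking JDBC call
                        async.getResponse().getWriter().println(result);
                    } catch (IOException e) {
                        // log and fall through to complete()
                    } finally {
                        async.complete();                 // hands the request back to the container
                    }
                }
            });
        }

        private String runSlowQuery() {
            // placeholder for a long-running JDBC query
            return "done";
        }
    }

So the thread-count arithmetic does not change by magic: either the servlet blocks a container thread, or it blocks a thread in a pool you size yourself.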
I have a Scala application which maintains (or tries to) TCP connections to various servers for hours (possibly > 24) at a time. Each server sends a short, ~30 character message about twice a second. These messages are fed into an iteratee where they are parsed and eventually end up making state changes to a database.
If any of these connections fail for any reason, my app needs to continually try to reconnect until I specify otherwise. Any messages getting lost is Bad. I have no control over the servers I connect to, or the protocols used.
It is conceivable there would be as many as 300 of these connections at once. Not exactly a high-load scenario, so I don't think NIO is needed, though it might be nice to have? Other parts of the app are high-load.
I'm looking for some sort of socket controller / manager which can keep these connections up as reliably as possible. I am running my own blocking controller now, but as I'm inexperienced with socket coding (and all the various settings, options, timeouts, etc.) I doubt it will achieve the best possible uptime. Plus I may need SSL support at some point down the line.
Would NIO offer any real advantages?
Would Netty be the best choice here? I've seen the Uptime example here, and was thinking of simply duplicating it, but being new to lower-level networking I wasn't sure if there were better options.
However, I'm uncertain about the best strategies for ensuring that as few packets as possible are lost, and I assumed this would be a "solved" problem in one library or another.
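For illustration, here is a rough reconnect-on-disconnect sketch in the spirit of the Uptime example, written against the Netty 4 API (the host, port, and five-second retry delay are placeholders, and the message-parsing handler is left empty):

    import io.netty.bootstrap.Bootstrap;
    import io.netty.channel.Channel;
    import io.netty.channel.ChannelFutureListener;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;
    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.ChannelOption;
    import io.netty.channel.EventLoopGroup;
    import io.netty.channel.nio.NioEventLoopGroup;
    import io.netty.channel.socket.nio.NioSocketChannel;
    import io.netty.util.ReferenceCountUtil;
    import java.util.concurrent.TimeUnit;

    public class ReconnectingClient {
        private final EventLoopGroup group = new NioEventLoopGroup();
        private final String host;
        private final int port;

        public ReconnectingClient(String host, int port) {
            this.host = host;
            this.port = port;
        }

        public void connect() {
            Bootstrap b = new Bootstrap()
                .group(group)
                .channel(NioSocketChannel.class)
                .option(ChannelOption.SO_KEEPALIVE, true)
                .handler(new ChannelInitializer<Channel>() {
                    @Override
                    protected void initChannel(Channel ch) {
                        ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
                            @Override
                            public void channelRead(ChannelHandlerContext ctx, Object msg) {
                                // parse the ~30 character message here, then release the buffer
                                ReferenceCountUtil.release(msg);
                            }
                            @Override
                            public void channelInactive(ChannelHandlerContext ctx) {
                                // connection dropped: schedule a reconnect attempt
                                ctx.channel().eventLoop().schedule(
                                        () -> connect(), 5, TimeUnit.SECONDS);
                            }
                        });
                    }
                });
            b.connect(host, port).addListener((ChannelFutureListener) future -> {
                if (!future.isSuccess()) {
                    // initial connect failed: retry on the same schedule
                    future.channel().eventLoop().schedule(
                            () -> connect(), 5, TimeUnit.SECONDS);
                }
            });
        }
    }

Note that this only keeps the connection up; as discussed below, it cannot by itself prevent the loss of messages sent while the connection was down.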
Yup. JMS is an example.
I suppose a lot of it would come down to a timeout guessing strategy? Close and re-open a socket too early and you've lost whatever packets were en-route.
That is correct. That approach is not going to be reliable, especially if connections go up and down regularly.
A real solution involves having the other end keep track of what it has received, and letting the sender know when the connection is re-established. If that can't be done, you have no real way of controlling how much gets lost. (This is what the reliable messaging services do ...)
I have no control over the servers I connect to. So unless there's another way to adapt JMS to a generic TCP stream I don't think it will work.
Yup. And the same applies if you try to implement this by hand. The other end has to cooperate.
I guess you could construct something where you run (say) a JMS end point on each of the remote servers, and have the endpoint use UNIX domain sockets or loopback (i.e. 127.0.0.1) to talk to the server. But you still have potential for message loss.
I have a multithreaded java program that runs on Amazon's EC2. It queries and fetches data items from a vendor via HttpPost and HttpGet, using a org.apache.http.impl.client.DefaultHttpClient. Concurrently, it pushes the retrieved data items into S3 using AWS's Java SDK.
After a few days of running, I get the symptoms that normally come with http connection leaks:
org.apache.http.conn.ConnectionPoolTimeoutException: Timeout waiting for connection
at org.apache.http.impl.conn.tsccm.ConnPoolByRoute.getEntryBlocking(ConnPoolByRoute.java:417)
at org.apache.http.impl.conn.tsccm.ConnPoolByRoute$1.getPoolEntry(ConnPoolByRoute.java:300)
at org.apache.http.impl.conn.tsccm.ThreadSafeClientConnManager$1.getConnection(ThreadSafeClientConnManager.java:224)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:391)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:820)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:754)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:732)
Since both AWS and my requests to the data vendor use HTTP connections, I am not quite sure where exactly I forget to call HttpEntity.consume() or S3ObjectInputStream.close() (unless it is yet something else...).
So here is my question: are there ways to monitor org.apache.http.impl.conn.tsccm.ConnPoolByRoute so that at least I can detect when I am starting to leak connections/entities not properly consumed/http streams not closed? (I have a feeling it happens only under certain conditions, e.g. when certain exceptions are being thrown, by-passing the logic in my code that consumes HttpEntities, closes streams, etc.) Any idea on how to diagnose what eventually causes all my http connections to fail with that ConnectionPoolTimeoutException would be most welcome. I don't feel like waiting 4+ days between attempts to fix the root cause of the problem.
If you're using the PoolingClientConnectionManager, note that there are the methods getTotalStats() and getStats(final HttpRoute route), which will give you a PoolStats object with the data you're looking to monitor.
Just fetch the ConnectionManager from your httpclient:
PoolingClientConnectionManager poolManager = (PoolingClientConnectionManager) httpClient.getConnectionManager();
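For example, a minimal monitoring sketch built on that (the one-minute interval and plain System.out logging are arbitrary choices; a steadily growing "leased" count is the signature of a leak):

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import org.apache.http.impl.conn.PoolingClientConnectionManager;
    import org.apache.http.pool.PoolStats;

    public class PoolMonitor {
        // Logs pool usage periodically so a slow connection leak becomes visible
        // long before the pool is exhausted.
        public static void start(final PoolingClientConnectionManager poolManager) {
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(new Runnable() {
                @Override
                public void run() {
                    PoolStats stats = poolManager.getTotalStats();
                    System.out.println("leased=" + stats.getLeased()
                            + " pending=" + stats.getPending()
                            + " available=" + stats.getAvailable()
                            + " max=" + stats.getMax());
                }
            }, 1, 1, TimeUnit.MINUTES);
        }
    }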
If you can access the org.apache.http.impl.conn.tsccm.ConnPoolByRoute then set its connTTL to a low enough value so that its WaitingThreadAborter will eventually terminate a connection. It will show a nice stacktrace there. The other option is to use CGLIB or some other bytecode manipulation framework to create a proxy class wrapping org.apache.http.impl.conn.tsccm.ConnPoolByRoute. Depending on your environment it might not be that easy to set up, but it's a rather valuable tool for debugging issues like yours. (And yes, if you happen to use Spring or just plain aspects, the setup will be super easy :) )
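Independently of monitoring, the usual way to avoid the leak in the first place is to consume the entity in a finally block so the connection is returned to the pool even when an exception bypasses the normal path. A sketch, assuming HttpClient 4.x and EntityUtils (the method and class names are just for illustration):

    import org.apache.http.HttpEntity;
    import org.apache.http.HttpResponse;
    import org.apache.http.client.HttpClient;
    import org.apache.http.client.methods.HttpGet;
    import org.apache.http.util.EntityUtils;

    public class SafeFetch {
        // Ensures the entity is consumed even when reading or parsing throws,
        // so the underlying connection always goes back to the pool.
        public static String fetch(HttpClient httpClient, String url) throws Exception {
            HttpGet get = new HttpGet(url);
            HttpResponse response = httpClient.execute(get);
            HttpEntity entity = response.getEntity();
            try {
                return entity != null ? EntityUtils.toString(entity) : null;
            } finally {
                EntityUtils.consume(entity);   // safe to call with null
            }
        }
    }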
Assume that processes in a distributed application are using RMI for interactions between each other. How can deadlock occur? How can it be avoided?
You can get a deadlock via RMI in a system that doesn't deadlock without RMI if you use callbacks. A local callback is executed on the calling thread; however an RMI callback is executed on a different thread from the original client calling thread. So if there is client-side synchronization, a deadlock can occur that wouldn't occur if the calls were all local.
In the local JVM case, the JVM can tell that the calling object "A" owns the lock and will allow the callback to "A" to proceed. In the distributed case, no such determination can be made, so the result is deadlock. Distributed objects behave differently from local objects; if you simply reuse a local implementation without handling locking and failure, you will probably get unpredictable results. Since remote method invocations on the same remote object may execute concurrently, a remote object implementation needs to make sure it is thread-safe. For example, when one client logs on to the server, the same customer will not be allowed to log on to the server from another machine, both to maintain security and to avoid deadlock; this can be done by keeping a session flag.
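A minimal sketch of the callback scenario described above (the interface and class names are hypothetical, and the RMI export/registry plumbing is omitted for brevity):

    import java.rmi.Remote;
    import java.rmi.RemoteException;

    interface Service extends Remote {
        void process(Callback cb) throws RemoteException;
    }

    interface Callback extends Remote {
        void notifyProgress() throws RemoteException;
    }

    class Client implements Callback {
        private final Service service;

        Client(Service service) {
            this.service = service;
        }

        // The client thread holds the Client monitor while it waits for the remote call.
        public synchronized void start() throws RemoteException {
            service.process(this);   // the server calls notifyProgress() during this call
        }

        // The callback arrives on a *different* RMI dispatch thread, which blocks trying
        // to acquire the same monitor -> deadlock. If the call were local, the callback
        // would run on the calling thread, which already owns the lock.
        @Override
        public synchronized void notifyProgress() throws RemoteException {
            // update progress
        }
    }

Avoiding it largely comes down to not holding locks across remote calls (or using finer-grained locks, timeouts, or a single-threaded event queue on the client side).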
When I try to run two wget commands simultaneously against my server (http://myserver), it looks like Tomcat allocates two threads to process them. But I believed that when Tomcat receives two simultaneous requests from the same IP address, it would not create a new thread for the second request, since it considers both requests to come from the same session.
If I want to check whether the two threads are the same or different, is using Thread.getId() the only way? I think this ID may be reused for new threads. Is there any unique property of a thread, other than the thread ID, that I can use to check its identity?
I suggest never relying on threads to identify their source. There are no Servlet spec guarantees about threads, and newer Servlet spec implementations make use of NIO. You are skating on thin ice.
Web servers will almost always assign multiple threads (or processes) to multiple simultaneous requests, since the client can work faster when it does not have to wait for each response.
Newer servers may use asynchronous IO (nio), however, and a single thread can simultaneously serve many clients.
Yes, Thread.getId() is a way of identifying threads.
Session IDs are the mechanism used to identify requests from a single client.
The IP address is not a good way to do that, since multiple machines can expose the same IP when hiding behind a NAT.
I believe Tomcat will always create a new thread of execution, irrespective of whether the request comes from the same IP or not. If the client application running on that IP has a mechanism to send across the session ID, then Tomcat will simply associate the same session context with the request thread [making it stateful].
In your case, you'll need to customise wget to hold on to the session ID [the Tomcat web app might send it across through a cookie or as a URL parameter - jsessionid]. wget will then need to send it back with subsequent requests [rewrite the URL to include the jsessionid parameter, or exchange cookies]. This way Tomcat will be able to treat each request as coming from a unique client instance and associate state with it.
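A quick way to see this for yourself is a tiny servlet that prints the handling thread and the session ID (a sketch; the class name and URL pattern are arbitrary). Two simultaneous wget requests that carry the same JSESSIONID cookie will normally show two different thread names but the same session ID:

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    @WebServlet("/whoami")
    public class WhoAmIServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            // Each concurrent request is normally handled by a different worker thread,
            // even when both requests belong to the same HTTP session.
            resp.getWriter().println("thread=" + Thread.currentThread().getName()
                    + " session=" + req.getSession(true).getId());
        }
    }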