Assume that processes in a distributed application are using RMI for interactions between
each other. How can deadlock occur? How to avoid it?
You can get a deadlock via RMI in a system that doesn't deadlock without RMI if you use callbacks. A local callback is executed on the calling thread; however, an RMI callback is executed on a different thread from the original client calling thread. So if there is client-side synchronization, a deadlock can occur that wouldn't occur if the calls were all local.
In the local JVM case, the JVM can tell that the calling object "A" owns the lock and will allow the call back to "A" to proceed. In the distributed case, no such determination can be made, so the result is deadlock. Distributed objects behave differently from local objects; if you simply reuse a local implementation without handling locking and failure, you will probably get unpredictable results. Since remote method invocations on the same remote object may execute concurrently, a remote object implementation needs to make sure it is thread-safe. For example, to maintain security and to avoid deadlock, once a client has logged on to the server, the same customer is not allowed to log on from another machine at the same time; this can be done by keeping a session flag on the server.
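A minimal sketch of how the callback deadlock described above arises (the interface and class names are illustrative, not from the question):

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.server.UnicastRemoteObject;

    // Illustrative interfaces -- not from any particular application.
    interface ClientCallback extends Remote {
        void notifyClient() throws RemoteException;
    }

    interface Server extends Remote {
        // The server invokes callback.notifyClient() before this call returns.
        void register(ClientCallback callback) throws RemoteException;
    }

    class Client implements ClientCallback {
        private final Server server;

        Client(Server server) { this.server = server; }

        // Holds the client's monitor while waiting for the remote call to return.
        public synchronized void start() throws RemoteException {
            ClientCallback stub =
                (ClientCallback) UnicastRemoteObject.exportObject(this, 0);
            server.register(stub);   // the server calls back before this returns
        }

        // The callback arrives on a different (RMI-dispatch) thread, which blocks
        // here trying to acquire the monitor still held by start() -- deadlock.
        public synchronized void notifyClient() {
            System.out.println("notified");
        }
    }

If the same call were local, notifyClient() would run on the thread that already holds the monitor and reentrancy would let it through; over RMI the callback arrives on a separate RMI-dispatch thread and blocks forever while start() waits for register() to return.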
Related
I'm researching whether JavaMail is thread-safe, in particular in a situation with many sessions corresponding to different users, several SMTP servers, creating MIME messages, and using the transport.sendMessage method. I know JavaMail is oriented toward desktop use, which makes me suspect it may not have been built with threading in mind, and I'm wondering if anyone has such experience.
Admittedly the thread safety rules for JavaMail are not well documented, but hopefully they mostly match what you would expect.
Multiple threads can use a Session.
Since a Transport represents a connection to a mail server, and only a single thread can use the connection at a time, a Transport will synchronize access from multiple threads to maintain thread safety, but you'll really only want to use it from a single thread.
Similarly, a Store can be used by multiple threads, but access to the underlying connection will be synchronized and single threaded.
A Message should only be modified by a single thread at a time, but multiple threads should be able to read a message safely (although it's not clear why you would want to do that).
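Putting those rules together, a sketch of a pattern that fits them: one shared Session, a Message built per send, and a Transport obtained, used and closed by a single thread (the SMTP host is a placeholder):

    import java.util.Properties;
    import javax.mail.Message;
    import javax.mail.Session;
    import javax.mail.Transport;
    import javax.mail.internet.InternetAddress;
    import javax.mail.internet.MimeMessage;

    public class MailSender {
        // A single Session can be shared across threads.
        private static final Session SESSION;
        static {
            Properties props = new Properties();
            props.put("mail.smtp.host", "smtp.example.com"); // placeholder host
            SESSION = Session.getInstance(props);
        }

        // Each call (and therefore each thread) uses its own Transport,
        // so the single-threaded connection is never shared.
        public static void send(String from, String to,
                                String subject, String body) throws Exception {
            MimeMessage msg = new MimeMessage(SESSION);
            msg.setFrom(new InternetAddress(from));
            msg.setRecipient(Message.RecipientType.TO, new InternetAddress(to));
            msg.setSubject(subject);
            msg.setText(body);

            Transport transport = SESSION.getTransport("smtp");
            try {
                transport.connect();
                transport.sendMessage(msg, msg.getAllRecipients());
            } finally {
                transport.close();   // always return the connection
            }
        }
    }

Transport.send(msg) would also work and manages its own connection internally, but getting the Transport explicitly makes the one-connection-per-thread rule visible.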
The JavaMail dispatcher threads don't seem to time out if the server doesn't respond in time; this ends up blocking all available threads.
Tested this behavior with both 1.4.3 & 1.4.5.
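One common mitigation, assuming the standard SMTP provider, is to set connection and read timeouts in the session properties so an unresponsive server can't pin a thread indefinitely; the values below are illustrative:

    import java.util.Properties;
    import javax.mail.Session;

    public class MailTimeouts {
        static Session newSession() {
            Properties props = new Properties();
            props.put("mail.smtp.host", "smtp.example.com");   // placeholder host
            props.put("mail.smtp.connectiontimeout", "10000"); // ms to establish the connection
            props.put("mail.smtp.timeout", "10000");           // ms to wait for a server response
            return Session.getInstance(props);
        }
    }

Later JavaMail releases also add mail.smtp.writetimeout for the write side of the socket.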
I am writing a client-server application using Java RMI. Some server-side resources have to be accessed in mutual exclusion (I am using locks for that purpose).
Now I was wondering what happens when:
The client calls a remote method on the server
The remote method acquires a lock for a critical section
The client crashes before the remote method exits the critical section
Will any locks acquired by the remote method call associated to that client be released? Or will it just be impossible for other clients to acquire the lock afterwards?
Thanks for your answers
What happens is that the remote method keeps executing until it is done, and releases the locks when it exits the critical section. Then it attempts to return the results (if any) to the client, and fails because the connection has been broken.
There is no particular hazard here ...
Of course, if the server is using Lock objects rather than primitive locks / mutexes, then it needs to do the lock releases in a finally block to deal with the case where it fails due to some unexpected exception. But this is a different issue. The client crashing won't trigger that scenario.
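A sketch of that pattern (the remote interface and lock usage are illustrative, not taken from the question):

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.util.concurrent.locks.ReentrantLock;

    // Illustrative remote interface.
    interface ResourceService extends Remote {
        void update() throws RemoteException;
    }

    class ResourceServiceImpl implements ResourceService {
        private final ReentrantLock lock = new ReentrantLock();

        @Override
        public void update() throws RemoteException {
            lock.lock();
            try {
                // critical section: mutate the shared server-side resource here
            } finally {
                // Runs even if the critical section throws. A client crash does not
                // interrupt this thread, so the lock is always released.
                lock.unlock();
            }
        }
    }

With plain synchronized blocks the monitor is released automatically when the method exits, normally or abruptly, so this concern only applies to explicit Lock objects.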
We've got a normal-ish server stack including BlazeDS, Tomcat and Hibernate.
We'd like to arrange things such that if certain errors (especially AssertionError) are thrown, the current thread is considered to be in an unknown state and won't be used for further HTTP requests. (Partly because we're storing some things, such as the Hibernate transaction/session, in thread-local storage. Now, we can catch throwables and make sure to roll back transactions and rethrow, but there's no guarantee about what other code may have left in thread-local storage.)
Tomcat with the default thread pool behavior reuses threads. We tried specifying our own executor, which seems to be the most specific method of changing its thread pool behavior, but it doesn't always call Executor.execute() with a new task for each request. (Most likely it reuses the same execution context for all requests in the same HTTP connection.)
One option is to disable keepalive, so that there's only one request per HTTP connection, but that's ugly.
Anyway, I'd like to know. Is there a way to tell Tomcat not to reuse a thread, or to kill or exit the thread so that Tomcat is forced to create a new one?
(From the Tomcat source, it appears Tomcat will close the connection and abandon the task/thread after sending an HTTP 500 response, but I don't know how to get BlazeDS to generate a 500 response; that's another angle I'd like to know more about.)
I would strongly suggest simply getting rid of your use of thread-local storage, or at least coming up with a method to clear the thread-local storage when a request first enters the pipeline (with a <filter> in web.xml, for example).
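A rough sketch of such a filter, where RequestContext is a hypothetical holder standing in for whatever your code keeps in thread-local storage (e.g. the Hibernate session):

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;

    // Hypothetical holder for per-request thread-local state.
    class RequestContext {
        private static final ThreadLocal<Object> STATE = new ThreadLocal<>();
        static void set(Object value) { STATE.set(value); }
        static Object get()           { return STATE.get(); }
        static void clear()           { STATE.remove(); }
    }

    public class ThreadLocalCleanupFilter implements Filter {

        @Override
        public void init(FilterConfig config) { }

        @Override
        public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
                throws IOException, ServletException {
            try {
                chain.doFilter(req, resp);
            } finally {
                // Clear per-request state no matter how the request ended,
                // so the next request served by this pooled thread starts clean.
                RequestContext.clear();
            }
        }

        @Override
        public void destroy() { }
    }

Mapped to /* via <filter-mapping> in web.xml, the finally block runs on every pooled thread after each request, whatever state the handler left behind.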
Having to reconfigure something basic about Tomcat to get it to work with your app points to a code smell.
What do I need to worry about when doing callbacks in RMI? I just need a simple client notification mechanism to avoid excessive polling.
I found an online example and it looks pretty straightforward, the client just implements an interface that extends Remote (like the server does) and passes it to the server, which can then call back its methods. I'm guessing the remote callback can occur on any thread, so I have to assume it will be asynchronous to my client application's normal threads. What else is there?
Two things.
RMI callbacks almost certainly won't work through firewalls
RMI callbacks execute on a different thread from the original client call to the server. You can get unexpected synchronization deadlocks if you don't take that into account.
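For reference, a client-side callback registration usually has roughly this shape (the names are illustrative); the client exports itself so the server receives a stub it can call back on, and onEvent() will arrive on an RMI-dispatch thread:

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.server.UnicastRemoteObject;

    // Illustrative interfaces -- not from the question.
    interface Listener extends Remote {
        void onEvent(String event) throws RemoteException;
    }

    interface EventServer extends Remote {
        void subscribe(Listener listener) throws RemoteException;
    }

    class ClientListener implements Listener {
        // Invoked by the server on an RMI-dispatch thread, not on the thread
        // that registered the listener -- synchronize shared state accordingly.
        @Override
        public void onEvent(String event) {
            System.out.println("event: " + event);
        }

        void register(EventServer server) throws RemoteException {
            // Export this object so the server receives a remote stub for it.
            Listener stub = (Listener) UnicastRemoteObject.exportObject(this, 0);
            server.subscribe(stub);
        }
    }

Anything onEvent() touches that the rest of the client also uses needs the same synchronization care as any other multi-threaded code.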
I have two clients in two different processes that communicate with the server through RMI.
My question is:
What happens if both clients invoke the server's stub at the same time?
Thanks for your time,
me
This tutorial demonstrates the threaded nature of RMI servers (see task 7.1). It quotes the RMI spec:

"A method dispatched by the RMI runtime to a remote object implementation (a server) may or may not execute in a separate thread. Calls originating from different client Virtual Machines will execute in different threads. From the same client machine it is not guaranteed that each method will run in a separate thread."

So invocations from different clients will execute on different threads in the server.
Nothing untoward happens by default - it's exactly the same as invoking a method on any other object from two threads simultaneously. The one-server-to-many-clients model is what network protocols like RMI are for.
Access to any shared data within the server needs to be guarded, with synchronized blocks if need be; it depends on what the server is doing.
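For instance, a remote object whose state may be hit by several clients at once could guard it like this (a sketch with illustrative names):

    import java.rmi.Remote;
    import java.rmi.RemoteException;

    // Illustrative remote interface.
    interface Counter extends Remote {
        int increment() throws RemoteException;
    }

    class CounterImpl implements Counter {
        private int count;   // shared state reached by concurrent client calls

        // synchronized so that simultaneous invocations from different clients
        // (which arrive on different server threads) don't interleave the update
        @Override
        public synchronized int increment() throws RemoteException {
            return ++count;
        }
    }

Stateless remote methods, or methods that only touch local variables, need no such guarding.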