Using JPA in a multi-threaded RMI architecture - Java

I am designing an RMI based data service server for different Java clients. The clients use RMI to perform CRUD operations remotely, and I plan to use JPA for the server's ORM.
As far as I know, RMI requires the request-handling implementation to be thread-safe, so I am planning to inject the EntityManager using @PersistenceContext. I have two questions.
Does Spring make the injected EntityManager thread-safe, or should I inject an EntityManagerFactory and call createEntityManager when necessary?
Do I still have to use synchronized blocks to make sure my method code is thread-safe?
According to the RMI specification
When a remote request comes in, it is immediately demarshalled into a request object
that encapsulates the method invocation. This request object, which is an instance of a
class implementing the RemoteCall interface, has a reference to the socket's output
stream. This means that, although RMI shares sockets, a socket is used only for one
remote method invocation at a time.
The thread that received the request from the socket finds the intended remote object for
the method invocation, finds the skeleton associated with that remote object, and invokes
that skeleton's dispatch( ) method. The dispatch method has the following signature:
public void dispatch(java.rmi.Remote obj, java.rmi.server.RemoteCall call, int opnum, long hash) throws java.lang.Exception
The skeleton's dispatch( ) method invokes the right method on the server. This is
where the code you wrote is actually executed.
The server method returns a value, which is eventually propagated back through the
socket on which the original request was received.
I think this process description suggests that many separate call stacks running our code can be created in an RMI environment; therefore RMI requires the code to be thread-safe. Am I right?

When you export an object via RMI, it has to deal with multiple threads, even if you just have a single client for the object. Here's why: The thread on which you create the remote object is definitely different from the thread that handles remote invocations.
So if you inject the EntityManager during the creation of the remote object, it will have been injected on a different thread than the one it is used on during a remote call. However, an EntityManager can only be used on a single thread, and more specifically on the thread on which it was created. With Hibernate, for example, your EntityManager will not work unless this is the case.
So you have to use an EntityManagerFactory to create the EntityManager on demand. To minimize EntityManager creations, you could store the EntityManager in a ThreadLocal.
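A minimal sketch of that approach, assuming a Spring-managed service that is exported via RMI; the DataService and Customer types and the findCustomer method are invented for illustration. Only the thread-safe EntityManagerFactory is injected, and each remote call creates and closes its own EntityManager on the calling thread:

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.PersistenceUnit;

// DataService (extends java.rmi.Remote) and Customer are assumed types, not from the post
public class DataServiceImpl implements DataService {

    @PersistenceUnit                      // the factory, unlike the EntityManager, is thread-safe
    private EntityManagerFactory emf;

    @Override
    public Customer findCustomer(long id) throws java.rmi.RemoteException {
        EntityManager em = emf.createEntityManager();   // created on the calling RMI thread
        try {
            return em.find(Customer.class, id);
        } finally {
            em.close();                                 // never reused on another thread
        }
    }
}

The same EntityManager could instead be cached in a ThreadLocal, as suggested above, if per-call creation turns out to be too expensive.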

RMI requires code to be Thread safe, am I right?
Strange that you are asking a question that has already been answered in the comments, but regardless of what you've quoted above, there is a much shorter statement in the RMI Specification §3.2 that says exactly that:
"a remote object implementation needs to make sure its implementation is thread-safe."

Related

How can I make a method execute when any main using the Java object has completed?

I have a Java object that is a client in an RMI model, and the RMI server implements a locking mechanism that only allows access from one client at a time. For the server to be made available to a new client, the existing client must remove its lock by calling a disconnect() method before its thread dies.
The issue I'm having is that if the client doesn't call disconnect(), the server will never free up.
Is it possible to enforce that whenever ANY main making use of the client has finished running, it calls disconnect() WITHOUT specifying so in the main?
I read about the finalize() method which is called during garbage collection, but from my understanding it is extremely unreliable.
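One mechanism sometimes used for this kind of cleanup (not mentioned in the post, so treat it purely as an assumption) is a JVM shutdown hook, which runs on normal JVM exit without each main() having to call disconnect() itself. The LockingService interface and the lookup name below are hypothetical:

import java.rmi.Naming;

public class ClientBootstrap {
    public static void main(String[] args) throws Exception {
        // LockingService is an assumed remote interface exposing the disconnect() method
        LockingService service = (LockingService) Naming.lookup("//server/LockingService");

        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            try {
                service.disconnect();   // release the server-side lock when the JVM exits
            } catch (Exception e) {
                // too late to recover here; the server may still want its own timeout
            }
        }));

        // ... the rest of the client's work ...
    }
}

Note that a shutdown hook does not run if the JVM is killed forcibly, so a server-side lease or timeout is still a sensible safety net.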

Concurrent access to a Remote Object Java RMI

I am currently studying how Java RMI works but I do not understand a certain aspect.
In a non-distributed multithreaded environment, if methods on the same object are called simultaneously from different threads, each of them will be executed on the respective thread's stack (accessing shared data is not part of my question).
In a distributed system, since a client process calls methods on the stub and the actual call is executed on the stack of the process that created the remote object, how are simultaneous calls to a method handled? In other words, what happens at the, let's say, server thread when there are two (or more) requests to execute the same method on that thread?
I thought of this question because I want to compare this to what I am used to: the executions being on different stacks.
how are simultaneous calls to a method handled?
It isn't specified. It is carefully stated in the RMI specification: "The RMI runtime makes no guarantees with respect to mapping remote object invocations to threads."
The occult meaning of this is that you can't assume the server is single-threaded.
In other words, what happens at the, let's say, server thread when there are two (or more) requests to execute the same method on that thread?
There can't be two or more requests to execute the method on the same thread. The question doesn't make sense. You've posited a unique 'let's say server thread' that doesn't actually exist.
There can, however, be two or more requests to execute the method arising from two or more concurrent clients, or two or more concurrent threads in a single client, or both, and because of the wording of the RMI Specification you can't assume a single-threaded dispatching model at the server.
In the Oracle/Sun implementation it is indeed multi-threaded, ditto the IBM implementation. I'm not aware of any RMI implementation that isn't multi-threaded, and any such implementation would be basically useless.
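Since calls to the same exported object may be dispatched from several threads at once, the implementation has to protect its own shared state. A minimal, hypothetical sketch (CounterService and its increment() method are invented for illustration):

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.server.UnicastRemoteObject;
import java.util.concurrent.atomic.AtomicLong;

interface CounterService extends Remote {
    long increment() throws RemoteException;
}

class CounterServiceImpl extends UnicastRemoteObject implements CounterService {
    private final AtomicLong count = new AtomicLong();   // thread-safe shared state

    protected CounterServiceImpl() throws RemoteException {
        super();   // exports this object on an anonymous port
    }

    @Override
    public long increment() throws RemoteException {
        return count.incrementAndGet();   // safe even under concurrent dispatch
    }
}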

Are java RMI remote objects (server) singleton?

I have been using Java RMI for a while now, but I couldn't figure out whether the RMI remote stubs (on the server side) are singletons. The reason I ask:
Let's assume that one of the RMI implementation methods lower down in the chain of calls is synchronized. If for some reason the logic in the synchronized method is messed up (or hangs), future RMI calls (from the client) will hang too while trying to get access to that synchronized method. This holds true only if the RMI stubs are singletons. If a new object were created on the server side for every remote call from the client, this wouldn't be a problem, because the methods would then be called on a different object and the synchronized method would no longer be an issue.
Long story short, I am trying to understand how the JVM internally maintains RMI remote objects on the server side and whether they are singletons. I have tried many different Javadocs, but they don't explicitly mention this anywhere.
Any and all help is appreciated!
EDIT
Based on some questions and comments, I am refining the question. My real question is: does RMI on the server side keep some kind of object pool based on the one object you export and register? Can you bind more than one object of the same type under the same name (somewhat simulating an object pool where RMI can give me any of the objects that I registered), or, in order to have multiple instances of the same object, do I have to register them under different names?
First of all, the "stub" is a client-side concept; there are no stubs on the server.
As for the remote objects themselves, the RMI system doesn't instantiate the objects for you, it's up to you to create instances and export them. You create one instance of the object, export that object, and bind it in the registry under a particular name. All calls on client stubs obtained from that same name in the registry will ultimately end up at the same object on the server.
Can you bind more than one object of the same type with the same name (somewhat simulating an object pool where RMI can give me any of the objects that I registered)
No, you can only bind one object in the registry under a given name. But the object you bind could itself be a proxy to your own object pool, for example using the Spring AOP CommonsPoolTargetSource mechanism.
RMI is based on the proxy design pattern.
See what it says here:
An RMI server is an application that creates a number of remote objects. An RMI server is responsible for the following (a small sketch of these steps appears after the list):
Creating an instance of the remote object (e.g. CarImpl instance = new CarImpl());
Exporting the remote object;
Binding the instance of the remote object to the RMI registry.
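A minimal sketch of those three steps, reusing the CarImpl name from the list above; the Car remote interface is assumed, and CarImpl is assumed not to extend UnicastRemoteObject (otherwise it would already be exported):

import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

public class CarServer {
    public static void main(String[] args) throws Exception {
        CarImpl instance = new CarImpl();                                // 1. create the remote object
        Car stub = (Car) UnicastRemoteObject.exportObject(instance, 0);  // 2. export it
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("Car", stub);                                    // 3. bind it in the registry
        // every client stub looked up under "Car" ends up calling this one instance
    }
}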
Stubs are not singletons, but your question is really about the server-side objects. They are not singletons either, unless you implement them that way yourself. RMI doesn't do anything about that whatsoever.
EDIT Based on some questions and comments, I am refining the question. My real question is: does RMI on the server side keep some kind of object pool based on the one object you export and register?
No.
Can you bind more than one object of the same type with the same name
No.
I will have to register them with different names
You don't have to register them at all. You need one singleton remote object bound into the Registry: consider that as a factory method for further remote objects, which are returned as results from its remote methods. For example, a remote Login object is bound in the Registry and has a single login() method that returns a remote session object, a new one per login, with its own API.
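A sketch of that factory pattern, under the assumption of a simple login()/logout() API; only the Login object is bound in the Registry, and each login() call exports and returns a brand-new Session object:

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.server.UnicastRemoteObject;

// only Login is bound in the Registry; Session objects are created per login
interface Login extends Remote {
    Session login(String user, String password) throws RemoteException;
}

interface Session extends Remote {
    void logout() throws RemoteException;
}

class LoginImpl extends UnicastRemoteObject implements Login {
    protected LoginImpl() throws RemoteException { }

    @Override
    public Session login(String user, String password) throws RemoteException {
        // authentication omitted; a new remote session object per successful login
        return new SessionImpl(user);
    }
}

class SessionImpl extends UnicastRemoteObject implements Session {
    private final String user;

    protected SessionImpl(String user) throws RemoteException {
        this.user = user;                  // exported automatically by the superclass constructor
    }

    @Override
    public void logout() throws RemoteException {
        UnicastRemoteObject.unexportObject(this, true);   // tear the session down
    }
}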
From the Java docs:
http://docs.oracle.com/javase/7/docs/platform/rmi/spec/rmi-arch3.html
A method dispatched by the RMI runtime to a remote object implementation may or may not execute in a separate thread. The RMI runtime makes no guarantees with respect to mapping remote object invocations to threads. Since remote method invocation on the same remote object may execute concurrently, a remote object implementation needs to make sure its implementation is thread-safe.
So yes, the server-side implementation needs to be synchronized (made thread-safe) by you. The threading behaviour is platform-specific, and you cannot assume anything else about it. And you certainly cannot assume whether or not the remote object is a singleton.
Also, it might be useful to look at Remote Object Activation:
http://docstore.mik.ua/orelly/java-ent/jenut/ch03_06.htm
http://docs.oracle.com/javase/7/docs/api/java/rmi/activation/package-summary.html

Java Websocket: Can Endpoint.onError be called during a send operation

I am trying to work out whether javax.websocket.Endpoint.onError (and thus the resulting methods in, say, Spring) can be called during a call to any of the WebSocket send methods (e.g. javax.websocket.RemoteEndpoint.Async.sendText, or thin wrappers in, say, Spring), at least for the specific javax.websocket.Session. If it can, I need to make sure my server implementation handles the state associated with that socket re-entrantly, which complicates it.
The method is documented here:
https://javaee-spec.java.net/nonav/javadocs/javax/websocket/Endpoint.html#onError%28javax.websocket.Session,%20java.lang.Throwable%29
It only mentions errors regarding incoming data, so I think it is safe to say the send will never itself cause it to be called (rather than passing an Exception to the send handler, or throwing an IOException from the basic remote). But is incoming data processed while a send is in progress, and can that result in the method being called (presumably from another thread; the threading details seem a bit thin as well)?
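For reference, a small sketch of where the two failure paths surface in the JSR-356 API; the endpoint and message below are invented, and whether onError can run concurrently with an in-flight send is container-specific:

import javax.websocket.Endpoint;
import javax.websocket.EndpointConfig;
import javax.websocket.Session;

public class MyEndpoint extends Endpoint {

    @Override
    public void onOpen(Session session, EndpointConfig config) {
        session.getAsyncRemote().sendText("hello", result -> {
            if (!result.isOK()) {
                // async send failures are reported here, via the SendHandler, not via onError
                result.getException().printStackTrace();
            }
        });
    }

    @Override
    public void onError(Session session, Throwable thr) {
        // per the Javadoc, this covers errors on the connection/incoming data;
        // if per-session state is touched both here and in the send path, guard it
        thr.printStackTrace();
    }
}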

Caching remote EJB 3.0 reference

I was thinking about how I could save time on looking up remote EJB references through JNDI. I had an application that needed to work very fast, but it also had to call a remote EJB, which slowed it down.
So my solution was something like this:
I took the Apache commons-pool library and used its StackObjectPool implementation as my remote EJB reference cache.
private static final ObjectPool pool = new StackObjectPool(new RemoteEjbFactory());
The factory looks something like this:
public static class RemoteEjbFactory extends BasePoolableObjectFactory {

    // called by the pool whenever it needs a fresh remote reference
    @Override
    public Object makeObject() {
        try {
            return ServiceLocator.lookup(jndi);
        } catch (NamingException e) {
            throw new ConfigurationException("Could not find remote ejb by given name", e);
        }
    }
}
Then I take an object by borrowing it from the pool (if there is no free object in the pool, it uses the factory to create one):
SomeEjbRemote someEjb = null;
try {
    someEjb = (SomeEjbRemoteImpl) pool.borrowObject();
    someEjb.invokeRemoteMethod();
} catch (Throwable t) {
    if (someEjb != null) {
        pool.invalidateObject(someEjb);
    }
    pool.clear(); // maybe this isn't necessary
    someEjb = (SomeEjbRemoteImpl) pool.borrowObject();
    someEjb.invokeRemoteMethod(); // this time it should work
}
And of course, the EJB is returned to the pool after a successful invocation:
finally {
    try {
        pool.returnObject(someEjb);
    } catch (Exception e) {
        logger.error("Could not return object to pool.", e);
    }
}
As I understand it, there is no guarantee that the remote reference will stay connected, so if we catch an exception while using a cached remote EJB, we just invalidate that object and retry.
What do you think about this approach? Is it correct? Are there other solutions or advice?
From the spec
3.4.9 Concurrent Access to Session Bean References
It is permissible to acquire a session bean reference and attempt to invoke the same reference object concurrently from multiple threads. However, the resulting client behavior on each thread depends on the concurrency semantics of the target bean. See Section 4.3.14 and Section 4.8.5 for details of the concurrency behavior for session beans.
Summary of § 4.3.14:
If the bean is an SLSB, each call will be served by one EJB instance from the app server pool. The app server synchronizes the calls to the EJB instances, so an EJB instance is never accessed concurrently.
For an SFSB, each call is dispatched to one specific EJB instance, and the app server does not synchronize the calls. So two concurrent calls to the remote reference might lead to concurrent access to the EJB instance, which then raises a javax.ejb.ConcurrentAccessException. The client is responsible for correctly synchronizing access to the remote reference.
And § 4.8.5 is about EJB singletons, which is probably not what you are using.
I assume you use an SLSB, so you don't need a pool on the client side: look up the remote bean once, and use the same reference from multiple threads.
You could however do a benchmark to see if using multiple references improves performance, but the gain -- if any -- is probably negligible compared to the cost of the remote invocation itself.
If you then still decide to have more than one remote reference, I would suggest another design. Based on your question, I assume you have a multi-threaded app. You probably already use a pool for the threads, so a pool for the references may be redundant. If each thread gets a remote reference when it is created, and threads are pooled, there won't be that many remote lookups, and the design is simplified.
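A sketch of the "look up once, share the reference" suggestion; SomeEjbRemote and invokeRemoteMethod() come from the question, while the JNDI name and everything else here are assumed:

import javax.naming.InitialContext;
import javax.naming.NamingException;

public class EjbClient {

    // looked up once; an SLSB proxy can safely be shared by many threads
    private static final SomeEjbRemote SOME_EJB = lookup();

    private static SomeEjbRemote lookup() {
        try {
            return (SomeEjbRemote) new InitialContext().lookup("SomeEjbRemote"); // assumed JNDI name
        } catch (NamingException e) {
            throw new IllegalStateException("Remote EJB lookup failed", e);
        }
    }

    public void doWork() {
        // the container dispatches each call to some pooled bean instance on the server
        SOME_EJB.invokeRemoteMethod();
    }
}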
My 2 cents
I'm answering for JBoss AS since I have limited experience with other application servers.
Remote JNDI references are just (load-balancing) connection-less proxies (see the JBoss clustering proxy architecture). Serializing them is fine, which means that you can keep them as members in other EJBs and cache them as you do (I don't know if your pool serializes your objects; some caches do).
Regarding invalidation of proxies:
The proxies open a connection only for the duration of the method call, and therefore do not have a 'connected' state per se. The proxies can additionally have multiple IP addresses and load-balance. In JBoss the node list is dynamically updated on every method call, so the risk of a reference going stale is small. Still, there is a chance of this happening if all nodes go down or the proxy remains inactive while all node IP addresses go stale. Depending on the pool reuse policy (LRU or other?), the probability that the rest of the cached proxies are invalid once one is will vary. A fair policy will minimize the risk of having very old entries in the pool, which you would like to avoid in this scenario.
With a fair policy in place, the probability that all go stale for the same reason increases, and your 'clear the pool once one is stale' policy would make sense. Additionally, you need to take the case of the other node being down into account. As it is now, your algorithm would go into a busy loop looking up references while the other node is down. I would implement an exponential back-off for the retries, or just consider it a fatal failure and make the exception a runtime exception, depending on whether you can live with the remote EJB being gone for a while or not. And make the exception you catch specific (like RemoteCommunicationFailedException); avoid catching generic exceptions or errors like Exception, Error or Throwable.
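A rough sketch of such a retry with exponential back-off; RemoteCommunicationFailedException is the hypothetical exception type named above, and callRemote() stands in for the actual remote EJB invocation:

private void invokeWithBackoff() throws InterruptedException {
    long delayMillis = 100;                       // initial back-off
    final int maxAttempts = 5;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
            callRemote();                         // the actual remote EJB call
            return;
        } catch (RemoteCommunicationFailedException e) {
            if (attempt == maxAttempts) {
                throw new IllegalStateException("Remote EJB unreachable after retries", e);
            }
            Thread.sleep(delayMillis);
            delayMillis *= 2;                     // double the wait between attempts
        }
    }
}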
Another question you must ask yourself is how much concurrency you want. Normally proxies are thread-safe for SLSBs and single-threaded only for SFSBs. SFSBs themselves are not thread-safe, and SLSBs serialize access by default. This means that unless you enable concurrent access to your EJB 3.1 beans (see the TSS link) you will need one remote reference per thread. That is: pooling N SLSB remote references will give you N threads of concurrent access. If you enable concurrent access and write your SLSB as a thread-safe bean with the annotation @ConcurrencyAttribute(NO_LOCK), you can get unlimited concurrency with just one proxy and drop your whole pool. Your pick.
EDIT:
ewernli was right: the thread-safe SLSB proxy creates a new instance on the server per call. This is specified in §4.3.14:
There is no need for any restrictions against concurrent client access to stateless session beans because the container routes each request to a different instance of the stateless session bean class.
This means that you don't need any pool at all. Just use one remote ref.
