I have a Java application that uses an EJB client to make calls to the server. The server-side EJBs in turn make calls to the DB. Hibernate is used, but the second-level cache is disabled. When performance testing DB calls through the EJB client, the first calls usually take much longer than the subsequent ones, even though the calls are made with different parameters. What things can explain or affect the performance in this scenario?
EJBs use Java RMI calls under the hood. Whenever your client calls EJB components running in an EJB container, some expensive operations are performed along the way.
Let's say you have a remote interface called HelloRemote and a service bean called HelloBean which implements the HelloRemote interface.
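As a minimal sketch of such a pair (annotations assume EJB 3.x; the method is just an example):

import javax.ejb.Remote;
import javax.ejb.Stateless;

// Remote business interface exposed to clients running in other JVMs
@Remote
public interface HelloRemote {
    String sayHello(String name);
}

// Stateless session bean implementing the remote interface (separate source file)
@Stateless
public class HelloBean implements HelloRemote {
    @Override
    public String sayHello(String name) {
        return "Hello, " + name;
    }
}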
1) First, your client needs to look up the service object that is registered with the JNDI registry:
HelloRemote service=(HelloRemote)context.lookup("HelloBean/remote");
// JBoss specific
Here,
i) Your client makes a call to the server hosting the JNDI registry (which may be backed by a directory service such as LDAP) and looks for the remote bean registered under the given name.
ii) If a binding with the given name is found, the container serializes the registered object (in practice a stub/proxy, not the bean itself) and sends it to the client, where it is deserialized again; this is also known as marshalling and unmarshalling.
The above step is in itself somewhat costly, which is why it is recommended that, if your application architecture keeps the client code and the session beans in the same JVM, you use the local interface (or the no-interface view available since EJB 3.1) instead of the remote interface (see the short sketch after these steps).
iii) With the Business interface stub in hand, you now invoke the business operations on that stub interface. This stub performs the required serialization and delegates the call to the container.
The container then deserializes the call and invokes the corresponding method on the business interface implementation. The same process occurs when the result of the invocation is returned to the client.
As you can see there is a lot of Serialization and De-Serialization happening when a client communicates with the EJB component.
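As a hedged illustration of the local/no-interface recommendation mentioned above, the following sketch shows how a bean in the same JVM avoids the remote stub and its serialization overhead (the bean and method names are made up, and both classes are assumed to be deployed in the same application):

import javax.ejb.EJB;
import javax.ejb.Stateless;

// No-interface view (EJB 3.1+): the bean class itself is the local business view
@Stateless
public class HelloLocalBean {
    public String sayHello(String name) {
        return "Hello, " + name;
    }
}

// In a separate source file: another bean in the same JVM gets a reference injected
// directly, so no JNDI lookup of a remote stub and no marshalling is involved.
@Stateless
public class GreetingService {
    @EJB
    private HelloLocalBean hello;

    public String greet(String name) {
        return hello.sayHello(name);
    }
}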
Enabling the second-level cache does improve performance in Hibernate, but in my view the communication between an EJB client and an EJB service object is the more expensive part with respect to time.
NOTE: I was hoping someone else would respond to this post, as I was also eager to know the reasons; not finding any responses, I wrote this answer with whatever understanding I have of these topics.
I'm working on an application with a large number of Remote EJB service methods, and I'd like to have some useful information about the client calling the methods (other than very basic information such as IP address...).
I found this question, but it's a bit dated:
How can I identify the client or caller of an EJB within the request?
Is there some kind of custom client context / work area in which I could put the caller details and retrieve them on the server side inside a ThreadLocal?
Basically, do I have another option than adding a parameter to every single method of every service?
With security enabled, you have the possibility to retrieve the current user. This is pretty simple, but it probably won't fit all needs.
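For example, the spec-defined way to get at the caller is the SessionContext; a minimal sketch (the bean and method names are made up):

import java.security.Principal;
import javax.annotation.Resource;
import javax.ejb.SessionContext;
import javax.ejb.Stateless;

@Stateless
public class AuditedService {
    @Resource
    private SessionContext ctx;

    public void doWork() {
        // Returns the authenticated caller; without authentication this is
        // typically an "anonymous"/unauthenticated principal.
        Principal caller = ctx.getCallerPrincipal();
        String callerName = caller.getName();
        // ... use callerName for logging or auditing
    }
}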
In general, there is nothing you can use out-of-the-box. Adding some custom parameter is probably the worst option.
JBoss and WildFly offer the possibility to use client- and server-side EJB container interceptors. Usage and implementation details depend on the version of your application server.
I implemented this by utilizing the MDC (mapped diagnostic context) of our logging framework to enrich server-side logging with caller information. You can think of this as using a ThreadLocal. Of course, you need something like a caller context on the client side holding the specific information. Global remote client data (IP address, ...) can be set within the client interceptor, too.
A coarse overview of what I did:
Configure client- and server-side logging to use additional MDC data
Enhance client side MDC with key/value data
Client-side interceptor extracts the data from the MDC and puts it on the invocation context
Server-side interceptor extracts the data from the invocation context and enhances the server-side MDC
This approach works, but depending on your application (e.g. with server-to-server calls, bean-to-bean calls on local asynchronous EJBs, ...) the complexity increases. Don't forget to think about cleaning up e.g. your ThreadLocal data to avoid possible memory leaks.
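A hedged sketch of what those two interceptors might look like (class names, the "callerId" key and registration details are assumptions; the JBoss/WildFly client interceptor API differs between server versions, so treat this as a shape rather than a drop-in implementation):

// --- Client side (JBoss/WildFly-specific; separate source file) ---
import org.jboss.ejb.client.EJBClientInterceptor;
import org.jboss.ejb.client.EJBClientInvocationContext;
import org.slf4j.MDC;

public class CallerInfoClientInterceptor implements EJBClientInterceptor {
    @Override
    public void handleInvocation(EJBClientInvocationContext context) throws Exception {
        // Copy caller details from the client-side MDC into the invocation context
        String caller = MDC.get("callerId"); // hypothetical MDC key
        if (caller != null) {
            context.getContextData().put("callerId", caller);
        }
        context.sendRequest();
    }

    @Override
    public Object handleInvocationResult(EJBClientInvocationContext context) throws Exception {
        return context.getResult();
    }
}

// --- Server side (standard container interceptor; separate source file) ---
import javax.interceptor.AroundInvoke;
import javax.interceptor.InvocationContext;
import org.slf4j.MDC;

public class CallerInfoServerInterceptor {
    @AroundInvoke
    public Object propagateCaller(InvocationContext ctx) throws Exception {
        Object caller = ctx.getContextData().get("callerId");
        if (caller != null) {
            MDC.put("callerId", caller.toString());
        }
        try {
            return ctx.proceed();
        } finally {
            // Clean up so the value does not leak into other requests on pooled threads
            MDC.remove("callerId");
        }
    }
}

The server-side interceptor would then be bound to the beans (e.g. via @Interceptors or container-specific configuration), and the logging pattern configured to include the callerId MDC key.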
As far as I know, in EJB 2.x the client uses the home interface to ask for a reference to the component interface and calls the enterprise bean's business methods using that reference.
But the concept of stub and skeleton is not clear to me.
Does the reference to the component interface act as a stub? Then which one acts as the skeleton?
Please clarify.
Stub and skeleton are actually RMI concepts, EJB is just reusing them. As such, they are only needed when you are using remote interfaces.
Stub is used by the client to invoke methods on the remote EJB -- it is basically a proxy object that implements the remote interface. It is responsible for serializing the invocation into a byte stream and sending it to the server hosting the EJB.
Skeleton is running on the server side -- it receives the remote calls from the stub over the network, deserializes the invocation, and delegates it to the EJB.
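Purely as a conceptual sketch (real stubs are generated by tools or created dynamically, and the RemoteTransport type below is invented just for illustration), a stub for the HelloRemote interface from the first answer above could be imagined like this:

// Invented transport abstraction, standing in for the real RMI plumbing
interface RemoteTransport {
    Object invokeRemote(String methodName, Object[] args);
}

// Conceptual stub: implements the remote interface, but forwards every call
// to the server-side skeleton instead of executing business logic itself.
public class HelloRemoteStub implements HelloRemote {
    private final RemoteTransport transport;

    public HelloRemoteStub(RemoteTransport transport) {
        this.transport = transport;
    }

    @Override
    public String sayHello(String name) {
        // 1. serialize the method identifier and arguments
        // 2. send them over the network to the skeleton
        // 3. wait for the (serialized) result and deserialize it
        return (String) transport.invokeRemote("sayHello", new Object[] { name });
    }
}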
See also: Java RMI : What is the role of the stub-skeleton that are generated by the rmic compiler
Nowadays, stubs and skeletons are typically generated at runtime (or the same function is just handled via reflection), so you do not need to worry about them (see also Do I need RMI stubs to access EJBs from my java client? - this is specific to Glassfish, but the general principles usually apply also to other containers).
Skeletons have been obsolete since 1998. Don't worry about them.
Well, stub and skeleton are only there when you are using remote interfaces.
The stub is an object implementing the remote interface (usually produced by code generation), and the skeleton is implemented inside the container and invokes the method on the EJB (inside the container).
When I use JNDI to get an object from a remote server, the object can be serialised to the local JVM. That way, I am assuming that we can call methods on this object locally without RMI, so why do we need RMI?
JNDI is a look-up and directory service. It provides a standardized way to acquire resources by name within some context. Usually it is used for acquiring shared resources from an application-server context, but depending on the implementation, it can also provide a standardized way of looking up items that represent remote resources.
RMI is a remote method invocation technology built into the Java platform. It allows for calling remote Java object methods over a binary protocol. It uses Java's built-in serialization handling to make the remote invocation and parameter passing over the network seem transparent. RMI requires its own directory/look-up service, which may or may not be integrated with a given JNDI implementation. (Usually they are not integrated.)
So, with all that in mind, hopefully you can see why your question isn't very clear. You might look-up a remote RMI service via JNDI. You might be able to save (serialize) that remote RMI reference to disk and then reconstruct it to use it again later (although that is probably not a good idea.) But regardless, JNDI and RMI are two different things.
When I use JNDI to get an object from a remote server, the object can be serialised to the local JVM. That way, I am assuming that we can call methods on this object locally without RMI, so why do we need RMI?
So you can call methods remotely. An object that has been deserialized into your local JVM executes in your JVM. A remote object executes in a remote JVM even though you called it from your local JVM.
I am assuming that we can call methods on this object locally without RMI
No. It is important to understand that you need two extra objects when you make a remote method invocation: the stub, which runs on the client side, and the skeleton, which runs on the server side. These objects perform the necessary low-level operations.
When the client invokes a remote method, it never calls the object directly; instead it uses the stub object.
Therefore, what you get from the JNDI service is the stub, not the remote object.
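To make that concrete, a hedged sketch (reusing the HelloRemote interface from the first answer; the JNDI name is JBoss-style and assumed, and the client JNDI environment is assumed to be configured, e.g. via jndi.properties):

import javax.naming.InitialContext;

public class StubDemo {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        // What the lookup hands back is not the bean itself but a stub/proxy
        // implementing the remote interface.
        HelloRemote hello = (HelloRemote) ctx.lookup("HelloBean/remote");
        // Looks like a local call, but the stub serializes the arguments,
        // ships them to the server, and the bean executes in the remote JVM.
        System.out.println(hello.sayHello("world"));
    }
}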
(1) Okay, I am pretty confused about the threading model of JAX-WS Java web services. I read they are not thread-safe. How are they supposed to serve multiple parallel requests then? Given that it's almost always known that they are going to get called by multiple clients at the same time.
(2) And does the app server create a new instance of the web service for each request (like it maintains a pool of stateless session beans, assigns one out for a request, and returns it to the pool once the request completes)? Can you configure that pool size in the app server console (GlassFish, JBoss or WebSphere)?
(3) And I also found out about the @ThreadScope annotation here, which creates a new instance per thread:
http://jax-ws-commons.java.net/thread-scope/
Is that a good option? I am sure people are solving the thread-safety and parallel requests issues in some other standard way - please advise.
An application server contains a pool of beans.
When working with a stateless session bean, it is not guaranteed that you will get the same instance across calls within a session.
However, since the beans are managed by a pool, as I mentioned, holding state in them is a bad idea.
I don't think that EJBs have anything to do with what you need, though.
Note that in the example you provided, both DataService and the connection are created per request. This is a bit expensive.
I would consider using the ThreadLocal API only for the connection, and have it obtained from a connection pool.
You can implement these on your own, by reading about ThreadLocal and by reading about DB connection pools.
To conclude - I don't think EJBs are relevant here.
Don't hold both your service class and its fields in the ThreadLocal, but only the necessary fields you allocate per request (in the example you showed, that is the connection).
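A minimal sketch of that idea, assuming a pooled DataSource is available under a made-up JNDI name:

import java.sql.Connection;
import java.sql.SQLException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

// Holds only the per-request connection in a ThreadLocal, not the whole service.
public final class ConnectionHolder {
    private static final ThreadLocal<Connection> CURRENT = new ThreadLocal<>();

    public static Connection get() throws NamingException, SQLException {
        Connection c = CURRENT.get();
        if (c == null) {
            // "java:/MyDS" is a hypothetical JNDI name for a pooled DataSource
            DataSource ds = (DataSource) new InitialContext().lookup("java:/MyDS");
            c = ds.getConnection(); // borrowed from the pool, not newly created
            CURRENT.set(c);
        }
        return c;
    }

    // Call at the end of the request to return the connection to the pool and
    // to avoid leaking the ThreadLocal on reused worker threads.
    public static void release() throws SQLException {
        Connection c = CURRENT.get();
        if (c != null) {
            c.close();
            CURRENT.remove();
        }
    }
}

Calling release() in a finally block at the end of each request keeps connections flowing back to the pool and prevents the ThreadLocal from leaking across reused worker threads.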
I am reading the Enterprise JavaBeans 3.1 book and I wonder if I have understood the concept of EJB proxy object correctly. I now know that it follows the proxy pattern and I have read some about it.
When we make the interfaces for the beans, we do it because we want to have the proxy pattern implemented. This helps us because clients are only concerned with what we can do and are not tied directly to a class but rather to an interface that can act as if it were the real object.
So the container probably instantiates the proxy objects implementing the corresponding interface and adds some magic (networking) code before invoking the real EJB for us, because the proxy object is made automatically, right?
Have I misunderstood the concept? If that's the case, could someone tell me what's wrong?
Correct. The interfaces you're writing for your beans would have been sufficient if your application was confined to a local JVM. In this case no proxy would be needed, as the implementing class could be instantiated and supplied directly.
Clients of EJBs cannot work with their implementing classes, as they don't have them on their classpath. EJBs are location-transparent: you can call them across the network, or from another application located on the same server but isolated by different class loaders. In such cases, you need proxy objects to marshal, send over the network, and unmarshal the parameters you supply to EJB calls and the results you receive. On the client side, you need a dummy implementation of the EJB interface that forwards your calls to the server where the EJB is installed.
Proxies also handle other functions, such as starting/ending transactions around EJB method calls.
EDIT: if you're curious what EXACTLY such proxies could do, take a look at overviews of RMI in Java and of AOP (either in AspectJ or Spring). It will give you an idea of what kinds of tasks can be implemented this way.
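As a rough illustration only (a real container builds far more elaborate proxies), a JDK dynamic proxy can wrap calls to a bean with before/after logic such as transaction handling; the Greeter interface and the println "transaction" below are stand-ins:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class ProxyDemo {
    // Hypothetical business interface and implementation, for illustration only.
    interface Greeter {
        String greet(String name);
    }

    static class GreeterBean implements Greeter {
        public String greet(String name) {
            return "Hello, " + name;
        }
    }

    public static void main(String[] args) {
        Greeter target = new GreeterBean();
        InvocationHandler handler = (proxy, method, methodArgs) -> {
            // A real container proxy would begin/join a transaction, check
            // security, possibly marshal the call over the network, etc.
            System.out.println("begin 'transaction' for " + method.getName());
            try {
                return method.invoke(target, methodArgs);
            } finally {
                System.out.println("end 'transaction' for " + method.getName());
            }
        };
        Greeter proxied = (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(), new Class<?>[] { Greeter.class }, handler);
        // The client codes against the interface; the extra work happens in the proxy.
        System.out.println(proxied.greet("world"));
    }
}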
Are you referring to the proxy interfaces to stateless (and stateful) session beans and message driven beans?
If so, I think your understanding is correct. The only thing you seem to have missed is the concept of instance pools for stateless beans. The container does not instantiate these per request, but instead provides an instance from the pool when needed.
Also, using proxies allows the container managed things to take place: transaction management, asynchronous thread management etc.