On a project I am working on, we need to add some additional parameters to our EJB calls.
We don't pass those parameters as regular EJB interface parameters because we already have lots of interfaces and changing most of them would be a pain (plus they are not related to the business logic; they are some additional parameters saved to the database, call it a side effect).
So, to the point: we use WebLogic as our application server (10.3.4, Java 6, Java EE 5), and the solution we found for this problem was to make a wrapper EJB that accepts the real method target, its arguments and, of course, the additional parameters.
The wrapper then stores the additional parameters on the transaction, creates a new InitialContext and looks up the target EJB.
The wrapper EJB method looks like this:
public Object invokeWithAdditionalParameters(MethodSignature methodSignature, Object[] args, SomeAdditionalParameter p) throws Exception {
    // store the additional parameter on the current transaction (the side effect)
    transaction.putResource("additionalParam", p);
    // look up the target EJB and invoke the requested method reflectively
    InitialContext ctx = new InitialContext();
    Object service = ctx.lookup(methodSignature.buildLookupName());
    return methodSignature.getMethod().invoke(service, args);
}
MethodSignature holds the targeted EJB method we would like to invoke.
I don't know whether this solution is acceptable, and whether we should cache our InitialContexts. Since we are already inside the server, we shouldn't need expensive network calls to the JNDI tree, but maybe I'm wrong...
Thank you,
Aviad.
I'm in the learning phase of Spring Boot.
I have code, written as shown below, to handle exceptions across the whole application. I'm not sure how it works, but I have a NoDataFoundException class in the code and it is used wherever "no data found" situations occur.
@ControllerAdvice
class ControllerAdvisor {

    @ResponseStatus(HttpStatus.BAD_REQUEST)
    @ExceptionHandler(NoDataFoundException.class)
    @ResponseBody
    public ResponseEntity<Object> handleNodataFoundException(
            NoDataFoundException ex, WebRequest request) {
        Map<String, Object> body = new LinkedHashMap<>();
        body.put("timestamp", LocalDateTime.now());
        body.put("message", "No cities found");
        return new ResponseEntity<>(body, HttpStatus.NOT_FOUND);
    }
}
I want to know: how and when does the handleNodataFoundException method automatically get called when a NoDataFoundException is thrown?
Does Spring call handleNodataFoundException based on @ExceptionHandler(NoDataFoundException.class), which is bound to the method itself, irrespective of the method's name?
How does Spring look up the parameters required for the method above? What if it had more parameters?
This is done by proxying (Proxy.class).
Proxying is a mechanism in which a kind of pseudo class is created dynamically, mimics your own class (same methods), and is used to intercept all the method calls. This way it can act before and after the method calls as it pleases, and in the middle call the real method that you developed.
When you create a @Service in Spring, or a @Stateless bean in EJB, you never actually create the instances; you delegate instance creation to the framework (Spring or EJB). Those frameworks proxy your classes and intercept every call.
In your case, putting it simply, the proxy has a catch around the real call to your method. That catch captures exceptions and acts upon the framework configuration built from all the annotations that you declared (@ControllerAdvice, @ExceptionHandler and so on). That is how it can call handleNodataFoundException(...) in the cases that you defined with annotations.
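As a rough illustration of the mechanism (this is not Spring's actual implementation; the CityService interface and the handling logic here are invented), a JDK dynamic proxy can wrap a target object and route exceptions to handler code:

import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

interface CityService {
    String findCity(String name);
}

public class ProxyDemo {
    public static void main(String[] args) {
        CityService real = name -> { throw new RuntimeException("No cities found"); };

        // the proxy intercepts every call, invokes the real object,
        // and catches exceptions the way a framework's advice layer would
        CityService proxied = (CityService) Proxy.newProxyInstance(
                CityService.class.getClassLoader(),
                new Class<?>[]{CityService.class},
                (Object p, Method m, Object[] a) -> {
                    try {
                        return m.invoke(real, a);
                    } catch (Exception ex) {
                        // a framework would look up the handler matching the
                        // exception type here and call it, whatever its name
                        System.out.println("handled: " + ex.getCause().getMessage());
                        return null;
                    }
                });

        proxied.findCity("Atlantis"); // prints "handled: No cities found"
    }
}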
Update: visualizing this via the stack trace
For instance, if you have two Spring @Component classes (or @Service, @Repository or whatever), one calls the other in a plain call, and you get an exception, then in the stack trace you will see plenty of method calls (involving all kinds of different classes) between your two component classes. All of those are proxies and framework classes that take care of proxying, invoking, configuration and all the magic that the framework does. Everything is triggered by the proxy of your second component just before calling the real code that you developed, because the first component, at execution time, doesn't really call an instance of your class, but an instance of the proxy of your class that the framework created.
If you run a plain main with two classes calling each other, instantiated by you with new, you will only see three lines in the stack trace, because there the instances are plain instances created by you.
We have several legacy Java services with an RMI API, implemented using the old JRMP approach that requires 'rmic' pre-compilation.
As part of migrating everything to the latest JDK,
I am also trying to rewrite the RMI code to the more current approach, where the implementation classes extend UnicastRemoteObject, thus getting rid of the rmic pre-compilation step.
I am following a simple example like this one:
https://www.mkyong.com/java/java-rmi-hello-world-example/
but I have not been able to find such an example with commit/rollback transaction logic.
In the current legacy code, all transaction logic is handled in a single, common invokeObject() method in the JRMP container code, which intercepts all RMI API calls in one place
and simply commits if the RMI call succeeds, or rolls back if an exception was thrown.
I haven't been able to figure out how to do this in the new approach, with no JRMP container present.
Obviously, I do not want to code commit/rollback logic into every single API method (there are many dozens of them),
but I still want to keep that uniform logic in one place.
Any advice, hints, references, etc. on how to intercept all RMI calls at a single point to implement the transaction logic?
To begin with, I agree with @df778899: the solution he provides is a robust one. Still, I will give you an alternative choice if you don't want to work with the Spring Framework and you want to dig further.
Interceptors provide a powerful and flexible design for things like invocation monitoring, logging, and message routing. In some sense, the key capability you are looking for is a kind of RMI-level AOP (aspect-oriented programming) support.
Generally speaking:
it is not fair to ask RMI to directly support such capabilities as it
is only a basic remote method invocation primitive, while CORBA ORB is
at a layer close to what the J2EE EJB (Enterprise JavaBean) container
offers. In the CORBA specification, service context is directly
supported at the IIOP level (GIOP, or General Inter-Orb Protocol) and
integrated with the ORB runtime. However, for RMI/IIOP, it is not easy
for applications to utilize the underlying IIOP service-context
support, even when the protocol layer does have such support. At the
same time, such support is not available when RMI/JRMP (Java Remote
Method Protocol) is used. As a result, for RMI-based distributed
applications that do not use, or do not have to use, an ORB or EJB
container environment, the lack of such capabilities limits the
available design choices, especially when existing applications must
be extended to support new infrastructure-level functions. Modifying
existing RMI interfaces often proves undesirable due to the
dependencies between components and the huge impact to client-side
applications. The observation of this RMI limitation leads to the
generic solution that I describe in this article
Despite all the above
The solution is based on Java reflection techniques and some common
methods for implementing interceptors. More importantly, it defines an
architecture that can be easily integrated into any RMI-based
distributed application design.
The solution below is demonstrated through an example implementation that supports the transparent passing of transaction-context data, such as a transaction ID (xid), with RMI. The solution contains the following three important components:
RMI remote interface naming-function encapsulation and interceptor plug-in
Service-context propagation mechanism and server-side interface support
Service-context data structure and transaction-context propagation support
The implementation assumes that RMI/IIOP is used. However, it by no
means implies that this solution is only for RMI/IIOP. In fact, either
RMI/JRMP or RMI/IIOP can be used as the underlying RMI environments,
or even a hybrid of the two environments if the naming service
supports both
Naming-function encapsulation
First we encapsulate the naming function that provides the RMI remote
interface lookup, allowing interceptors to be transparently plugged
in. Such an encapsulation is always desirable and can always be found
in most RMI-based applications. The underlying naming resolution
mechanism is not a concern here; it can be anything that supports JNDI
(Java Naming and Directory Interface). In this example, to make the
code more illustrative, we assume all server-side remote RMI
interfaces inherit from a marker remote interface, ServiceInterface,
which itself inherits from the Java RMI Remote interface. The figure
shows the class diagram, which is followed by code snippets that I
will describe further.
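For reference, that marker interface would look roughly like this (a minimal sketch; the package and exact declaration are my assumption based on the snippets below):

package rmicontext.service;

import java.rmi.Remote;

// Marker interface: every server-side remote RMI interface extends this,
// so that interceptors can be plugged in generically.
public interface ServiceInterface extends Remote {
}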
RMI invocation interceptor
To enable the invocation interceptor, the original RMI stub reference
acquired from the RMI naming service must be wrapped by a local proxy.
To provide a generic implementation, such a proxy is realized using a
Java dynamic proxy API. In the runtime, a proxy instance is created;
it implements the same ServiceInterface RMI interface as the wrapped
stub reference. Any invocation will be delegated to the stub
eventually, after first being processed by the interceptor. A simple
implementation of an RMI interceptor factory follows the class diagram
shown in the figure:
package rmicontext.interceptor;

public interface ServiceInterfaceInterceptorFactoryInterface {
    ServiceInterface newInterceptor(ServiceInterface serviceStub, Class serviceInterfaceClass) throws Exception;
}

package rmicontext.interceptor;

public class ServiceInterfaceInterceptorFactory
        implements ServiceInterfaceInterceptorFactoryInterface {

    public ServiceInterface newInterceptor(ServiceInterface serviceStub, Class serviceInterfaceClass)
            throws Exception {
        ServiceInterface interceptor = (ServiceInterface)
            Proxy.newProxyInstance(serviceInterfaceClass.getClassLoader(),
                                   new Class[]{serviceInterfaceClass},
                                   new ServiceContextPropagationInterceptor(serviceStub)); // ClassCastException
        return interceptor;
    }
}
package rmicontext.interceptor;

public class ServiceContextPropagationInterceptor
        implements InvocationHandler {

    /**
     * The delegation stub reference of the original service interface.
     */
    private ServiceInterface serviceStub;

    /**
     * The delegation stub reference of the service interceptor remote interface.
     */
    private ServiceInterceptorRemoteInterface interceptorRemote;

    /**
     * Constructor.
     *
     * @param serviceStub The delegation target RMI reference
     * @throws ClassCastException as a specified uncaught exception
     */
    public ServiceContextPropagationInterceptor(ServiceInterface serviceStub)
            throws ClassCastException {
        this.serviceStub = serviceStub;
        interceptorRemote = (ServiceInterceptorRemoteInterface)
            PortableRemoteObject.narrow(serviceStub, ServiceInterceptorRemoteInterface.class);
    }

    public Object invoke(Object proxy, Method m, Object[] args)
            throws Throwable {
        // Skip it for now ...
    }
}
To complete the ServiceManager class, a simple interface proxy cache is implemented:
package rmicontext.service;

public class ServiceManager
        implements ServiceManagerInterface {

    /**
     * The interceptor stub reference cache.
     * <br><br>
     * The key is the specific serviceInterface sub-class and the value is the interceptor stub reference.
     */
    private transient HashMap serviceInterfaceInterceptorMap = new HashMap();

    /**
     * Gets a reference to a service interface.
     *
     * @param serviceInterfaceClassName The full class name of the requested interface
     * @return selected service interface
     */
    public ServiceInterface getServiceInterface(String serviceInterfaceClassName) {
        // The actual naming lookup is skipped here.
        ServiceInterface serviceInterface = ...;

        synchronized (serviceInterfaceInterceptorMap) {
            if (serviceInterfaceInterceptorMap.containsKey(serviceInterfaceClassName)) {
                WeakReference ref = (WeakReference) serviceInterfaceInterceptorMap.get(serviceInterfaceClassName);
                if (ref.get() != null) {
                    return (ServiceInterface) ref.get();
                }
            }
            try {
                Class serviceInterfaceClass = Class.forName(serviceInterfaceClassName);
                ServiceInterface serviceStub =
                    (ServiceInterface) PortableRemoteObject.narrow(serviceInterface, serviceInterfaceClass);
                ServiceInterfaceInterceptorFactoryInterface factory = ServiceInterfaceInterceptorFactory.getInstance();
                ServiceInterface serviceInterceptor =
                    factory.newInterceptor(serviceStub, serviceInterfaceClass);
                WeakReference ref = new WeakReference(serviceInterceptor);
                serviceInterfaceInterceptorMap.put(serviceInterfaceClassName, ref);
                return serviceInterceptor;
            } catch (Exception ex) {
                return serviceInterface; // no interceptor
            }
        }
    }
}
You can find more details in the guideline linked here. It is important to understand all of the above if you want to build this from scratch, in contrast with the robust Spring AOP solution. In addition, I think you will find the plain Spring Framework source code for implementing an RmiClientInterceptor very interesting. Take another look here as well.
You might consider using Spring with RMI. It offers a simplified way to use transactions: you just set some persistence settings via the @EnableTransactionManagement annotation and it manages transactions for you at the points defined by @Transactional, which can be placed on a class or on methods of a class.
Example code is here.
The explanation is in the project README.
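As a rough sketch of the declarative style this enables (the configuration and service classes here are invented, and the DataSource/transaction-manager wiring is omitted):

import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.EnableTransactionManagement;
import org.springframework.transaction.annotation.Transactional;

@Configuration
@EnableTransactionManagement
class TxConfig {
    // a PlatformTransactionManager bean would normally be declared here
}

// Every public method runs in a transaction: commit on normal return,
// rollback when a RuntimeException escapes.
@Service
@Transactional
class AccountService {
    public void transfer(long fromId, long toId, long amount) {
        // data-access code; no explicit commit/rollback needed
    }
}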
I don't know if you've already considered this, but one possibility that fits well with the dual requirement of RMI and transactional boundaries is clearly Spring. An example of Spring Remoting is here.
The @Transactional annotation is very widely used for declarative transaction management: Spring will automatically wrap your bean in an AOP transaction advisor.
These two would then fit together well, for example with a single transactional bean that could be exported as a remote service.
The Spring Remoting link above is based around Spring Boot, which is an easy way to get started. By default Boot would want to bring in an embedded server to the environment, though this can be disabled. Equally there are other options like a standalone AnnotationConfigApplicationContext, or a WebXmlApplicationContext within an existing web application.
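For example, a single transactional bean could be exported as a remote service roughly like this (a sketch assuming the classic RmiServiceExporter from Spring Remoting; the OrderService interface and bean are invented):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.remoting.rmi.RmiServiceExporter;

@Configuration
class RemotingConfig {

    // Exposes the @Transactional OrderService bean over RMI; every remote call
    // then passes through the AOP transaction advisor before reaching the bean.
    @Bean
    RmiServiceExporter orderServiceExporter(OrderService orderService) {
        RmiServiceExporter exporter = new RmiServiceExporter();
        exporter.setServiceName("OrderService");
        exporter.setServiceInterface(OrderService.class);
        exporter.setService(orderService);
        exporter.setRegistryPort(1099);
        return exporter;
    }
}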
Naturally, with a question asking for recommendations, there will be an element of opinion in any reply; I would be disappointed not to see some other suggestions shortly.
This is not a simple question; it's just that I'm rethinking our architecture for securing our EJB 3.0 services with a login and security checks.
We have an EJB 3.0 application on JBoss 5.1 that offers various services to an SWT client to read and write data. To use a service, the client must log in with a valid user and password, which is looked up by Spring Security in an LDAP server. Spring Security generates a session id which is passed back to the client to be reused in any further service call.
client server
| |
|-> login(user/password)-------->|
| |
| <------- sessionId ------------|
| |
|-->serviceXy(sessionId,param1)->|
The situation seems clear. We store the sessionId in our own context object, which is the first parameter of each service method. There is an interceptor on each service method that reads the sessionId from the given context object and checks whether the session is still valid. The client needs to call the login service first to get a context object filled with the sessionId, and then reuse this context object in further service calls.
public class OurContext {
    private String sessionId;
}

@Stateless
@Interceptors(SecurityInterceptor.class)
public class OurServiceImpl implements OurService {

    public void doSomething(OurContext context, String param1) {
        [...]
    }
}
The thing I don't like about this solution is the pollution of each service method with the context parameter.
Isn't there a mechanism similar to an HTTP session for RMI calls? I'm thinking of putting our context object in some kind of session that is created in the client(?) right after the login and is passed to the server on each service call, so that the SecurityInterceptor can read the sessionId from this "magic context".
Something like this:
OurContext ctx = service.login("user","password");
Magical(Jboss)Session.put("securContext", ctx);
service.doSomething("just the string param");
Since you are already using an app server, it seems you should be using the built-in EJB security mechanisms, generally provided through JAAS. On the 4.x JBoss line, if you implemented your own JAAS plugin for JBoss, you could get access to a "special" context map (similar to what you describe) which is passed along on remote requests (by the JBoss remote invocation framework). I haven't used JBoss in a while, so I'm not sure how this maps to the 5.1 product, but I have to imagine it has similar facilities. This assumes, of course, that you are willing to implement something JBoss-specific.
There are some kinds of session mechanisms in EJB, but they all start when the remote call starts and end when it ends. An old one is the transaction context (Adam Bien wrote about this some time ago), and a newer one is the CDI session scope.
Contrary to popular belief, this scope doesn't just mirror the HTTP session scope; in the absence of an HTTP session (as for remote calls), it represents a single call chain or message delivery (for MDBs).
With such a session, your remote SWT client still has to pass the sessionId to the remote service, but any local beans called from there can pick it up from this "CDI" session.
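A minimal sketch of that idea (assuming CDI is actually available to the beans involved; the class name here is invented):

import java.io.Serializable;
import javax.enterprise.context.SessionScoped;

// Holds the sessionId for the duration of the CDI "session"; for a remote
// call, that is effectively the single call chain of the invocation.
@SessionScoped
public class SecurityContextHolder implements Serializable {
    private String sessionId;

    public String getSessionId() { return sessionId; }
    public void setSessionId(String sessionId) { this.sessionId = sessionId; }
}

Any local bean in the call chain could then @Inject the SecurityContextHolder and read the sessionId instead of taking a context parameter.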
The other option is kind of like what jtahlborn says: with your own login module you can return a custom principal instead of the default one. Your code can first request the normal principal and then try to cast it.
The problem is that this stuff is container-specific and JBoss always forgets about it. It pretty much breaks after every update, and users have to kick and scream to get it fixed in some next version (only to see it break again in the version after that). Without JBoss really supporting this, it's an endless battle.
Yet another option is to let the user log in with the sessionId as the name. The login module behind that could be a simple module that accepts everything and just puts a principal in the security context with the sessionId as its "name". It's a little weird, but we've used this successfully to get any data that can be expressed as a string into the security context. Of course, you would need to let your client do a regular container authentication here, which kind of defeats using Spring Security in the first place.
We went for another approach which is portable and does not rely on a specific app server. In addition, our security implementation frees us from the restrictions of the EJB approach (which, by the way, I thought had been settled two decades ago... but they came up again).
Looking top down:
There is a server providing classes with methods to work on the data.
The client(s) provide the data and invoke specific methods.
Our approach is to put all data (and therefore all communication between client and server) into a "business object". Every BO extends a superclass, and this superclass contains a session id. The login method provides and returns that id. Every client only has to make sure it copies the id received in one BO into the next one it sends to the server. Every method that can be invoked remotely (or locally) first obtains the session object for the received id. The method that returns the session object also checks security constraints (which are based on permissions, not on roles as in the EJB approach).
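In code, the idea looks roughly like this (a sketch with invented names, not our actual classes):

import java.io.Serializable;

// Every business object sent between client and server carries the
// session id that the login method returned.
public abstract class BusinessObject implements Serializable {
    private String sessionId;

    public String getSessionId() { return sessionId; }
    public void setSessionId(String sessionId) { this.sessionId = sessionId; }
}

Each server-side method then starts by resolving the session from bo.getSessionId() through a helper that also enforces the permission checks before any work is done.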
I am planning to use EJBContext to pass some properties around from the application tier (specifically, a message-driven bean) to a persistence lifecycle callback that cannot directly be injected into or passed parameters (a session listener in EclipseLink, an entity lifecycle callback, etc.), and that callback gets the EJBContext via JNDI.
This appears to work but are there any hidden gotchas, like thread safety or object lifespan that I'm missing? (Assume the property value being passed is immutable like String or Long.)
Sample bean code
@MessageDriven
public class MDB implements MessageListener {

    private @Resource MessageDrivenContext context;

    public void onMessage(Message m) {
        context.getContextData().put("property", "value");
    }
}
Then the callback that consumes the EJBContext
public void callback() throws NamingException {
    InitialContext ic = new InitialContext();
    EJBContext context = (EJBContext) ic.lookup("java:comp/EJBContext");
    String value = (String) context.getContextData().get("property");
}
What I'm wondering is, can I be sure that the contextData map contents are only visible to the current invocation/thread? In other words, if two threads are running the callback method concurrently, and both look up an EJBContext from JNDI, they're actually getting different contextData map contents?
And, how does that actually work - is the EJBContext returned from the JNDI lookup really a wrapper object around a ThreadLocal-like structure ultimately?
I think that, in general, the contract of the method is to enable communication between interceptors + web service contexts and beans. So the context should be available to all code, as long as no new invocation context is created. As such it should be absolutely thread-safe.
Section 12.6 of the EJB 3.1 spec says the following:
The InvocationContext object provides metadata that enables
interceptor methods to control the behavior of the invocation chain.
The contextual data is not sharable across separate business method
invocations or lifecycle callback events. If interceptors are invoked
as a result of the invocation on a web service endpoint, the map
returned by getContextData will be the JAX-WS MessageContext
Furthermore, the getContextData method is described in 4.3.3:
The getContextData method enables a business method, lifecycle callback method, or timeout method to retrieve any interceptor/webservices context associated with its invocation.
In terms of actual implementation, JBoss AS does the following:
public Map<String, Object> getContextData() {
    return CurrentInvocationContext.get().getContextData();
}
Here CurrentInvocationContext uses a stack, based on a thread-local linked list, to push and pop the current invocation context.
See org.jboss.ejb3.context.CurrentInvocationContext. The invocation context just lazily creates a simple HashMap, as is done in org.jboss.ejb3.interceptor.InvocationContextImpl.
GlassFish does something similar. It also obtains an invocation from an invocation manager, which likewise uses a stack based on a thread-local array list to push and pop these invocation contexts.
The JavaDoc for the GlassFish implementation is especially interesting here:
This TLS variable stores an ArrayList. The ArrayList contains
ComponentInvocation objects which represent the stack of invocations
on this thread. Accesses to the ArrayList dont need to be synchronized
because each thread has its own ArrayList.
Just as in JBoss AS, GlassFish too lazily creates a simple HashMap, in this case in com.sun.ejb.EjbInvocation. Interesting in the GlassFish case is that the webservice connection is easier to spot in the source.
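To make the thread-local stack idea concrete, here is a minimal sketch of the pattern both containers use (not their actual code):

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Each thread keeps its own stack of invocation contexts, so the context
// data map is never shared across threads or across separate invocations.
public final class CurrentInvocation {

    private static final ThreadLocal<Deque<Map<String, Object>>> STACK =
            ThreadLocal.withInitial(ArrayDeque::new);

    public static void push() {
        STACK.get().push(new HashMap<>()); // lazily created context data
    }

    public static Map<String, Object> contextData() {
        return STACK.get().peek();
    }

    public static void pop() {
        STACK.get().pop();
    }
}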
I can't help you directly with your questions regarding EJBContext; since the getContextData method was added in Java EE 6, there is still not much documentation about it.
There is, however, another way to pass contextual data between EJBs, interceptors and lifecycle callbacks: the TransactionSynchronizationRegistry. The concept and sample code can be found in this blog post by Adam Bien.
javax.transaction.TransactionSynchronizationRegistry holds a Map-like structure and can be used to pass state inside a transaction. It has worked perfectly since the old J2EE 1.4 days and is thread-independent.
Because an interceptor is executed in the same transaction as the ServiceFacade, the state can even be set in an @AroundInvoke method. The TransactionSynchronizationRegistry (TSR) can be injected directly into an interceptor.
The example there uses @Resource injection to obtain the TransactionSynchronizationRegistry, but it can also be looked up from the InitialContext like this:
public static TransactionSynchronizationRegistry lookupTransactionSynchronizationRegistry() throws NamingException {
    InitialContext ic = new InitialContext();
    return (TransactionSynchronizationRegistry) ic.lookup("java:comp/TransactionSynchronizationRegistry");
}
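For illustration, an interceptor could stash a value in the registry from an @AroundInvoke method, and any later callback in the same transaction could read it back. This is only a sketch; the interceptor class and the key name are invented:

import javax.annotation.Resource;
import javax.interceptor.AroundInvoke;
import javax.interceptor.InvocationContext;
import javax.transaction.TransactionSynchronizationRegistry;

public class PropertyInterceptor {

    @Resource
    private TransactionSynchronizationRegistry registry;

    @AroundInvoke
    public Object setProperty(InvocationContext ctx) throws Exception {
        // anything stored here is visible to code running in the same transaction,
        // e.g. an entity lifecycle callback that looks the registry up via JNDI
        registry.putResource("property", "value");
        return ctx.proceed();
    }
}

The callback would then call getResource("property") on the registry obtained from the lookup shown above.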
I would like to call an EJB3 bean dynamically at runtime. The name of the EJB and the method name will only be available at runtime, so I cannot include any remote interfaces at compile time.
String bean = "some/Bean";
String meth = "doStuff";
// look up the bean
Object remoteInterface = new InitialContext().lookup(bean);
// search the methods...
// for each method:
//     if (method.getName().equals(meth)) { method.invoke(remoteInterface, args); }
The beans should be distributed across multiple application servers, and all beans are to be called remotely.
Any hints? Specifically, I do not want:
dependency injection
inclusion of application-specific EJB interfaces in the dispatcher (above)
web services; that's like throwing away processing power for nothing, with all the XML overhead
Is it possible to load an EJB3 remote interface over the network (if yes, how?), so I could cache the interface in a HashMap or something?
I have a solution with a remote dispatcher bean, which I can include in the main dispatcher above and which does essentially the same thing, but just relays the call to a local EJB (which I can look up how? naming lookup fails). Given the remote dispatcher bean, I can use dependency injection.
Thanks for any help.
(NetBeans and GlassFish, btw)
EJB3 calls use RMI. RMI supports remote class loading, so I'd suggest looking into that.
Also, JMX MBeans support fully untyped remote invocations. So, if you could use MBeans instead of session beans, that could work. (JBoss, for instance, supports EJB3-like MBeans with some custom annotations.)
Lastly, many app servers support CORBA invocations, and CORBA supports untyped method invocations.
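For example, an untyped MBean invocation over a remote connector would look roughly like this (a sketch only; the service URL, object name, and operation are placeholders):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxInvokeDemo {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://host:1099/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            // fully untyped: bean name, operation name, arguments and signature as strings
            Object result = connection.invoke(
                    new ObjectName("example:service=SomeService"),
                    "doStuff",
                    new Object[]{"arg0"},
                    new String[]{"java.lang.String"});
            System.out.println(result);
        } finally {
            connector.close();
        }
    }
}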
You might be able to use java.rmi.server.RMIClassLoader for the remote class loading. You'll also need to load any classes that the remote service returns or throws.
It cannot be done. You will always get a "class not found" exception. That is, in my opinion, the biggest disadvantage of EJB/Java: you lose some of the dynamics, because the meta-language features are limited.
EJB3 supports RMI/IIOP, but not RMI/JRMP (standard RMI), so dynamic class loading is not supported. You lose some Java generality, but gain other features, like being able to communicate about transactions and security.
You need to use reflection for this, and it is very simple to do. Assuming that you're looking for a void method called meth:
Object ejb = ctx.lookup(bean);
for (Method m : ejb.getClass().getMethods()) {
    if (m.getName().equals(meth) && m.getParameterTypes().length == 0) {
        m.invoke(ejb);
    }
}
If you're looking for a specific method signature just modify the m.getParameterTypes() test accordingly, e.g. for a method with a single String parameter you can try:
Arrays.equals(m.getParameterTypes(), new Class[]{String.class})
And then pass an Object[] array with the actual arguments to the m.invoke() call:
m.invoke(ejb, new Object[]{"arg0"})