I looked for a clean CDI solution, not a Weld-dependent one, but so far nothing...
I need to test whether every element of a list of objects that I get with @Inject @Any MyInterface beans is a proxy, and when it is, I need to get the real object so I can do introspection and read all the properties of the object.
My WELD implementation:
MyInterface interf = obj;
if (isProxy(interf)) {
    interf = (MyInterface) ((TargetInstanceProxy) interf).getTargetInstance();
}
where isProxy is defined as follows (is there a CDI solution for this?):
public boolean isProxy(Object obj) {
    try {
        return Class.forName("org.jboss.weld.bean.proxy.ProxyObject").isInstance(obj);
    } catch (Exception e) {
        LOGGER.error("Unable to check if object is proxy", e);
    }
    return false;
}
Any suggestions or pointers? In the official documentation I found no mention of introspection.
And then I would like to get all the properties of the bean with something like this:
Arrays.stream(interf.getClass().getDeclaredFields()).forEach(
field -> extractStuff(...)
);
We use WildFly and Weld but don't want to tie ourselves to a specific CDI implementation.
Thanks in advance!
EDIT:
The question is, more precisely: do you know a clean CDI solution for what Weld already implements with TargetInstanceProxy? Not whether I need to go back to school or whether I understand what I'm writing. Thanks for taking the time to help!
CDI intentionally hides (or rather does not expose) these internals, as they should be unimportant to the end user when programming against an interface. Furthermore, messing with this can cause weird errors, as you should always invoke methods via the proxy, not the actual instance.
So the short answer is - no, there is no pure CDI way to do this.
(At least not an intended one.)
However, seeing that you are already using Weld, there are other ways. Weld comes with pretty much any EE server except TomEE, so depending on the Weld API should be fairly safe.
Now, why am I saying this: in Weld 3.x (WildFly 12+), the API was extended with WeldConstruct and WeldClientProxy, interfaces implemented by Weld subclasses (interceptors/decorators) and/or client proxies; see the javadocs of those classes for more information.
So if you must do this, then you could add a dependency on Weld API as such:
<dependency>
<groupId>org.jboss.weld</groupId>
<artifactId>weld-api</artifactId>
<version>x.y.z</version>
</dependency>
And then, in your code, you can check whether the injected object is a proxy by doing:
@Inject
Foo foo;

public void doSomething() {
    if (foo instanceof WeldClientProxy) {
        // foo is a proxy
    } else {
        // not a proxy
    }
}
If you want to obtain the actual instance, WeldClientProxy allows you to obtain Metadata from which you can then retrieve the underlying contextual instance. That's the closest I can get you to what you are after.
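A rough sketch of that unwrapping (the getMetadata()/getContextualInstance() calls are from the Weld 3 API, so check them against the javadocs of the version you use):

// rough sketch, assuming foo was verified to be a WeldClientProxy as above
WeldClientProxy.Metadata metadata = ((WeldClientProxy) foo).getMetadata();
Object contextualInstance = metadata.getContextualInstance(); // the underlying bean instance
Bean<?> bean = metadata.getBean();                            // bean metadata, if you need it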
One common option, sketched in code after this list, is to:
get the Bean of the instance you want to unwrap
get its scope (bean.getScope())
from the bean manager (injectable) get the Context associated with the scope
call context.get(bean) to get the unwrapped instance, if it is available in the CDI context (in some cases you cannot get it at all).
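A minimal sketch of those steps, assuming an injectable BeanManager and a single resolvable bean of the requested type:

@Inject
BeanManager beanManager;

@SuppressWarnings("unchecked")
public <T> T unwrap(Class<T> beanType) {
    // resolve the Bean, find the Context for its scope, and ask the Context for the raw instance
    Bean<T> bean = (Bean<T>) beanManager.resolve(beanManager.getBeans(beanType));
    Context context = beanManager.getContext(bean.getScope());
    return context.get(bean); // may be null if no instance exists in the context yet
}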
Related
I know I can inject, as an Instance, all the beans that match the interface and then choose between them programmatically:
@Inject @Any Instance<PaymentProcessor> paymentProcessorSource;
That means I have to put the selection logic into the client.
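For illustration, that client-side selection could look roughly like this (the @Strategy1 qualifier is hypothetical, made up for the example):

// hypothetical sketch: @Strategy1 is a marker qualifier on one of the implementations
PaymentProcessor processor = paymentProcessorSource
        .select(new AnnotationLiteral<Strategy1>() {})
        .get();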
Can I, as an alternative, cache the value of the EJB using lexical scoping with a lambda expression? Will the container be able to correctly manage the lifecycle of the EJB in that case, or is this a practice to avoid?
For example, having PaymentProcessorImpl1 and PaymentProcessorImpl2 as two strategies of PaymentProcessor, something like this:
public class PaymentProcessorProducer {

    @Inject
    private PaymentProcessorImpl1 paymentProcessorImpl1;

    @Inject
    private PaymentProcessorImpl2 paymentProcessorImpl2;

    @Produces
    private Function<String, PaymentProcessor> produce() {
        return (strategyValue) -> {
            if ("strategy1".equals(strategyValue)) {
                return paymentProcessorImpl1;
            } else if ("strategy2".equals(strategyValue)) {
                return paymentProcessorImpl2;
            } else {
                throw new IllegalStateException("Unhandled type: "
                        + strategyValue);
            }
        };
    }
}
and then in the client do something like this:
@Inject
Function<String, PaymentProcessor> paymentProcessor;
...
paymentProcessor.apply("strategy1")
Can I, as an alternative, cache the value of the EJB using lexical scoping with a lambda expression?
Theoretically, you could do this. Whether it works is easy to try on your own.
Will the container be able to correctly manage the lifecycle of the EJB in that case, or is this a practice to avoid?
What exactly is an EJB here, the implementation of PaymentProcessor? Note that EJB beans are different from CDI beans: the CDI container does not control the lifecycle of EJB beans, it "only provides a wrapper for you to use them as if they were CDI beans".
That being said, the lifecycle is still the same: in your case the producer creates a @Dependent bean, meaning that every time you inject Function<String, PaymentProcessor>, the producer will be invoked.
What poses a certain problem is that you create an assumption of two or more contexts being active at any given time. The moment you decide to actually apply() the function, the scope within which your implementation(s) exist may or may not be active. If they are both @ApplicationScoped, for instance, you should be all right. If, however, they are @SessionScoped and you happen to time out/invalidate the session before applying the function, you get into a very weird state.
This is probably why I would rather avoid this approach and go with qualifiers. Or you can introduce a new bean which has both strategies in it and has a method with an argument that decides which strategy to use.
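For example, that selector bean could look roughly like this (the class and method names are mine, not from the question):

@ApplicationScoped
public class PaymentProcessorSelector {

    @Inject
    private PaymentProcessorImpl1 strategy1;

    @Inject
    private PaymentProcessorImpl2 strategy2;

    // the argument decides which strategy is used; no Function producer needed
    public PaymentProcessor select(String strategyValue) {
        switch (strategyValue) {
            case "strategy1":
                return strategy1;
            case "strategy2":
                return strategy2;
            default:
                throw new IllegalStateException("Unhandled type: " + strategyValue);
        }
    }
}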
Env:
Wildfly 8.2.0 Final
JDK 8
Java EE 7
Please note that by 'POJO' I am referring to classes that serve the other classes, i.e. other than value objects and entities.
This question has been in the back of my head for some time. Just wanted to put it out there.
Based on the CDI and Managed Beans specs and various other books/articles, it's pretty clear that CDI injection starts with a 'managed' bean instance. By 'managed' I mean servlets, EJBs etc., which are managed by a container. From there, it injects POJOs (kind of crawling through the layers) till every bean gets its dependencies. This all makes a lot of sense to me, and I see very little reason why developers would ever need to use "new" to create an instance of their dependent POJOs.
One scenario that comes to my mind is when a developer would like to have logic similar to
if(something) {
use-heavy-weight-A-instance
} else {
use-heavy-weight-B-instance
}
But that can also be achieved via @Produces.
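For instance, a minimal sketch of that @Produces alternative (the HeavyWeight* names and the condition are made up for the example):

@ApplicationScoped
public class HeavyWeightProducer {

    @Produces
    public HeavyWeight chooseHeavyWeight(Instance<HeavyWeightA> a, Instance<HeavyWeightB> b) {
        // Instance<> defers creation, so only the chosen implementation is instantiated
        return something() ? a.get() : b.get();
    }

    private boolean something() {
        // hypothetical condition
        return Boolean.getBoolean("use.heavy.weight.a");
    }
}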
Here is one scenario that I verified to be true in WildFly 8.2.0 Final, i.e. CDI is not able to inject a bean when the JSP has
<%!
@Inject
BeanIntf bean;
%>
But the alternative of using a servlet works fine.
That said, I would like to know if there are any scenarios where a developer has to use 'new'. As I understand it, by using 'new' the developer takes on the responsibility of fulfilling the dependencies of that bean and of all its dependent beans, and their dependent beans, and so on.
Thanks in advance,
Rakesh
When using CDI or another container, you don't use new because you expect a bunch of services coming from the container.
For CDI these main services are:
Injection of dependent beans (get an existing instance or create a new instance)
Lifecycle callback management (@PostConstruct and @PreDestroy)
Lifecycle management of your instance (a @RequestScoped bean will make the container produce an instance living until the end of the request)
Applying interceptors and decorators on your instance
Registering and managing observer methods
Registering and managing producer methods
Now, on some rare occasions, you may want to add part of these services to a class you instantiate yourself (or that another framework like JPA instantiates for you):
BeanManager bm = CDI.current().getBeanManager();
AnnotatedType<MyClass> type = bm.createAnnotatedType(MyClass.class);
InjectionTarget<MyClass> injectionTarget = bm.getInjectionTargetFactory(type).createInjectionTarget(null);
CreationalContext<MyClass> ctx = bm.createCreationalContext(null);
MyClass instance = new MyClass();
injectionTarget.inject(instance, ctx);    // will try to satisfy the injection points
injectionTarget.postConstruct(instance);  // will call @PostConstruct
With this code you can instantiate your own MyClass containing injection points (@Inject) and lifecycle callbacks (@PostConstruct) and have these two services honored by the container.
This feature is used by 3rd party frameworks needing a basic integration with CDI.
The Unmanaged class handles this for you, but still prevents you from doing the instantiation yourself ;).
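For reference, a short sketch of how that Unmanaged helper (CDI 1.1+) is used:

// Unmanaged runs the same produce/inject/postConstruct steps shown above on your behalf
Unmanaged<MyClass> unmanaged = new Unmanaged<>(MyClass.class);
Unmanaged.UnmanagedInstance<MyClass> unmanagedInstance = unmanaged.newInstance();
MyClass pojo = unmanagedInstance.produce().inject().postConstruct().get();
// ... use pojo ...
unmanagedInstance.preDestroy().dispose(); // clean up when done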
We are using Spring's TransactionInterceptor to set some database partition information using ThreadLocal whenever a DAO method marked with the @Transactional annotation is executed. We need this to be able to route our queries to different database partitions.
This works fine for most DAO methods:
// this causes the invoke method to set a thread-local with the host name of
// the database server the partition is on
@Transactional
public int deleteAll() throws LocalDataException {
The problem arises when we need to reference the DAO proxy object itself inside the DAO. Typically we have the caller pass in the proxy DAO:
public Pager<Foo, Long> getPager(FooDao proxyDao) {
In code this looks like the following, which is obviously gross:
fooDao.getPager(fooDao);
The problem is that when we are inside FooDao, this is not the proxy DAO that we need.
Is there a better mechanism for a bean to discover that it has a proxy wrapper around it? I've looked at Spring's AopUtils but I see no way to find the proxy for an object; I don't want isAopProxy(...), for example. I've also read the Spring AOP docs, but I can't see a solution there unless I implement my own native AOP code, which I was hoping to avoid.
I suspect that I might be able to inject the DAO into itself with an ApplicationContextAware utility bean and a setProxyDao(...) method, but that seems like a hack as well. Any other ideas on how I can detect the proxy so I can make use of it from within the bean itself? Thanks for any help.
A hacky solution along the lines of what you have suggested, given that AspectJ compile-time or load-time weaving will not work for you:
Create an interface along these lines:
public interface ProxyAware<T> {
void setProxy(T proxy);
}
Let your DAOs implement the ProxyAware interface, then create a BeanPostProcessor that also implements Ordered so it runs last, along these lines:
public class ProxyInjectingBeanPostProcessor implements BeanPostProcessor, Ordered {

    @Override
    public Object postProcessBeforeInitialization(Object bean, String beanName) {
        return bean;
    }

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) {
        if (AopUtils.isAopProxy(bean)) {
            try {
                Object target = ((Advised) bean).getTargetSource().getTarget();
                if (target instanceof ProxyAware) {
                    ((ProxyAware) target).setProxy(bean);
                }
            } catch (Exception e) {
                // ignore
            }
        }
        return bean;
    }

    @Override
    public int getOrder() {
        return Ordered.LOWEST_PRECEDENCE;
    }
}
It is ugly, but works.
There is a handy static utility method, AopContext.currentProxy(), provided by Spring, which returns the proxy for the object from which it was called.
Although using it is considered bad practice, semantically the same method exists in Java EE as well: SessionContext.getBusinessObject().
I wrote a few articles about this utility method and its various pitfalls: 1, 2, 3.
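A minimal sketch of its use, assuming proxy exposure is enabled (e.g. exposeProxy = true in your proxy configuration):

@Transactional
public Pager<Foo, Long> getPager() {
    // the proxy wrapping 'this'; throws IllegalStateException if the proxy is not exposed
    FooDao proxyDao = (FooDao) AopContext.currentProxy();
    return buildPager(proxyDao); // buildPager(..) is a hypothetical helper standing in for the original logic
}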
Use Spring to inject a bean reference into the bean, even the same bean, just as you would for any other bean reference. No special action required.
The presence of such a variable explicitly acknowledges in the class design that the class expects to be proxied in some manner. This is not necessarily a bad thing, as AOP can change behavior in ways that break the class contract.
The bean reference would typically be for an interface, and that interface could even be a different one for the self-referenced internal methods.
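A rough sketch of such self-injection (the field name, annotations and the helper method are mine, not prescribed by Spring):

@Repository
public class FooDaoImpl implements FooDao {

    @Autowired
    private FooDao self; // Spring wires in the transactional proxy, not the raw 'this' instance

    public Pager<Foo, Long> getPager() {
        // calls made through 'self' go via the proxy, so @Transactional advice still applies
        return createPager(self); // createPager(..) is a hypothetical helper
    }
}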
Keep it simple. That way lies madness. :-)
More importantly, be sure that the semantics make sense. The need to do this may be a code smell that the class is mixing in multiple responsibilities best decomposed into separate beans.
My colleagues are now working on a logging subsystem and they want to tie together separate operations that were initiated from some business method. For example, if a method in bean A calls some method in bean B and then in bean C, it would be great to know that the business methods in beans B and C did some work for the method in bean A. In particular, it would be great to know that the methods from B and C did some unit of work for a concrete call of bean A.
So the question is: how to tie these units of work into a whole? Obviously, it is not pretty to pass method arguments around just for this binding!
And I also think it is time to ask another question, close enough to the previous one. What if I want to propagate some context information from bean A to the other beans that are called from A? Something like security credentials and a security principal? What can I do?
Maybe the questions I asked are some kind of bad practice?
Looks like a good use case for MDC, available in both Logback and Log4j. Essentially you attach some custom value to a thread, and all logging messages coming from that thread can attach that value to the message.
I think the best way to implement this in EJB is an interceptor:
public class MdcInterceptor {

    @AroundInvoke
    public Object addMdcValue(InvocationContext context) throws Exception {
        MDC.put("cid", RandomStringUtils.randomAlphanumeric(16));
        try {
            return context.proceed();
        } finally {
            MDC.remove("cid");
        }
    }
}
Now all you have to do is add:
%X{cid}
to your logging pattern (logback.xml or log4j.xml).
See also
Logging user activity in web app
For general-purpose context information you can use the TransactionSynchronizationRegistry. It could look something like this:
@Stateless
public class MyBean {

    @Resource
    TransactionSynchronizationRegistry registry;

    @AroundInvoke
    public Object setEntryName(InvocationContext ic) throws Exception {
        registry.putResource(NAME, "MyBean");
        return ic.proceed();
    }
}

@Stateless
public class MyBean2 {

    @Resource
    TransactionSynchronizationRegistry registry;

    public void doJob() {
        String entryName = (String) registry.getResource(NAME);
        ...
    }
}
I believe it is usually implemented using ThreadLocal variables, as each transaction normally maps to a single thread in application servers. So if TransactionSynchronizationRegistry is not implemented in your AS (e.g. in JBoss 4.2.3) or you need a lower-level tool, you could use ThreadLocal variables directly.
BTW I guess that MDC utilities use the same thing under the covers.
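A minimal ThreadLocal-based holder, as a sketch of that lower-level alternative:

public final class OperationContext {

    private static final ThreadLocal<String> ENTRY_NAME = new ThreadLocal<>();

    private OperationContext() {
    }

    public static void setEntryName(String name) {
        ENTRY_NAME.set(name);
    }

    public static String getEntryName() {
        return ENTRY_NAME.get();
    }

    public static void clear() {
        // always call this in a finally block, otherwise the value leaks across pooled threads
        ENTRY_NAME.remove();
    }
}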
I have an application which is part Java EE (the server side) and part Java SE (the client side). As I want the client to be well architected, I use Weld in it to inject various components. Some of these components should be server-side @EJBs.
What I plan to do is to extend the Weld architecture to provide the "component" allowing Weld to perform a JNDI lookup to load instances of EJBs when the client references them. But how do I do that?
In other words, I want to have
on the client side
public class ClientCode {
public @Inject @EJB MyEJBInterface myEJB;
}
on the server-side
@Stateless
public class MyEJB implements MyEJBInterface {
}
with Weld "implicitly" performing the JNDI lookup when ClientCode objects are created. How can I do that?
Basically, doing so requires writing a so-called portable CDI extension.
But, as it is quite long and requires a few tweaks, let me explain it further.
Portable extension
As the Weld documentation explains, the first step is to create a class that implements the Extension marker interface, in which one writes code reacting to the interesting CDI events. In this precise case, the most interesting event is, to my mind, AfterBeanDiscovery. Indeed, this event occurs after all "local" beans have been found by the CDI implementation.
So, writing the extension is, more or less, writing a handler for that event:
public void loadJndiBeansFromServer(
        @Observes AfterBeanDiscovery beanDiscovery, BeanManager beanManager)
        throws NamingException, ClassNotFoundException, IOException {
    // Due to my inability to navigate the server JNDI naming (a weird issue in Glassfish naming),
    // this Properties object maps an interface class to the JNDI name of its server-side counterpart
    Properties interfacesToNames = extractInterfacesToNames();
    // JNDI properties
    Properties jndiProperties = new Properties();
    Context context = new InitialContext();
    for (Entry<?, ?> entry : interfacesToNames.entrySet()) {
        String interfaceName = entry.getKey().toString();
        Class<?> interfaceClass = Class.forName(interfaceName);
        String jndiName = entry.getValue().toString();
        Bean<?> jndiBean = createJndIBeanFor(beanManager, interfaceClass, jndiName, jndiProperties);
        beanDiscovery.addBean(jndiBean);
    }
}
Creating the bean is not a trivial operation: it requires transforming "basic" Java reflection objects into more advanced Weld ones (well, in my case):
private <Type> Bean<Type> createJndIBeanFor(BeanManager beanManager, Class<Type> interfaceClass,
        String jndiName, Properties p) {
    AnnotatedType<Type> annotatedType = beanManager
            .createAnnotatedType(interfaceClass);
    // Creating an injection target in the classical way would fail, as interfaceClass is the interface of an EJB
    JndiBean<Type> beanToAdd = new JndiBean<Type>(interfaceClass, jndiName, p);
    return beanToAdd;
}
Finally, one has to write the JndiBean class. But before that, a small detour into annotation land is required.
Defining the used annotation
At first, I used the @EJB one. A bad idea: Weld uses the results of qualifier annotation method calls to build the hashcode of the bean! So I created my very own @JndiClient annotation, which holds no methods nor constants, in order for it to be as simple as possible.
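For completeness, here is roughly what such a qualifier can look like (the retention/target choices are my guess):

@Qualifier
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.METHOD, ElementType.FIELD, ElementType.PARAMETER})
public @interface JndiClient {
    // intentionally empty: no members, so it cannot influence the bean's hashcode
}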
Constructing a JNDI client bean
Two notions merge here.
On one side, the Bean interface seems (to me) to define what the bean is.
On the other side, the InjectionTarget defines, to a certain extent, the lifecycle of that very bean.
From the literature I was able to find, those two interface implementations often share at least some of their state. So I've decided to implement them using a single class: the JndiBean!
In that bean, most of the methods are left empty (or return a default value), except:
Bean#getTypes, which must return the EJB remote interface and all extended @Remote interfaces (as methods from these interfaces can be called through this interface)
Bean#getQualifiers, which returns a Set containing only one element: an AnnotationLiteral corresponding to the @JndiClient interface.
Contextual#create (you forgot Bean extends Contextual, didn't you?), which performs the lookup:
@Override
public T create(CreationalContext<T> arg0) {
    // Some classloading confusion occurs here in my case, but I guess it is of no interest to you
    try {
        Hashtable contextProps = new Hashtable();
        contextProps.putAll(jndiProperties);
        Context context = new InitialContext(contextProps);
        Object serverSide = context.lookup(jndiName);
        return interfaceClass.cast(serverSide);
    } catch (NamingException e) {
        // An unchecked exception to go through Weld and break the world apart
        throw new LookupFailed(e);
    }
}
And that's all
Usage?
Well, now, in my GlassFish Java client code, I can write things such as
private @Inject @JndiClient MyRemoteEJB instance;
And it works without any problems.
A future?
Well, for now, user credentials are not managed, but I guess it could be totally possible using the C of CDI: contexts... oh no! Not contexts: scopes!
Section 3.5 of the CDI spec should help out. You may want to use some of the properties on the EJB annotation as well. Also (I probably don't need to tell you this), make sure you have JNDI set up correctly on the client to reference the server, and pack any of the needed interfaces into your client jar.