I am using Spring's declarative transactions (the @Transactional annotation) in "aspectj" mode. It works exactly as it should in most cases, but in one case it doesn't. We can call it Lang (because that's what it's actually called).
I have been able to pinpoint the problem to the load-time weaver. With debug and verbose logging turned on in aop.xml, the weaver lists all classes being woven, and the problematic class Lang is indeed not mentioned in the logs at all.
Then I put a breakpoint at the top of Lang, causing Eclipse to suspend the thread when the Lang class is loaded. This breakpoint is hit while the LTW is weaving other classes! So I am guessing it either tries to weave Lang and fails without reporting it, or some other class has a reference that forces Lang to be loaded before the weaver actually gets a chance to weave it.
I am unsure however how to continue to debug this, since I am not able to reproduce it in smaller scale. Any suggestions on how to go on?
Update: Other clues are also welcome. For example, how does the LTW actually work? There appears to be a lot of magic happening. Are there any options to get even more debug output from the LTW? I currently have:
<weaver options="-XnoInline -Xreweavable -verbose -debug -showWeaveInfo">
I forgot to mention it before: spring-agent is being used to allow LTW, i.e., the InstrumentationLoadTimeWeaver.
Based on the suggestions of Andy Clement I decided to inspect whether the AspectJ transformer is ever even passed the class. I put a breakpoint in ClassPreProcessorAgent.transform(..), and it seems that the Lang class never even reaches that method, despite it being loaded by the same class loader as other classes (an instance of Jetty's WebAppClassLoader).
I then went on to put a breakpoint in InstrumentationLoadTimeWeaver$FilteringClassFileTransformer.transform(..). Not even that one is hit for Lang. And I believe that method should be invoked for all loaded classes, regardless of what class loader they are using. This is starting to look like:
1. A problem with my debugging. Possibly Lang is not loaded at the time when Eclipse reports it is.
2. A Java bug? Far-fetched, but I suppose it does happen.
Next clue: I turned on -verbose:class and it appears as if Lang is being loaded prematurely - probably before the transformer is added to Instrumentation. Oddly, my Eclipse breakpoint does not catch this loading.
This means that Spring is the new suspect. There appears to be some processing in ConfigurationClassPostProcessor that loads classes in order to inspect them. This could be related to my problem.
These lines in ConfigurationClassBeanDefinitionReader cause the Lang class to be read:
else if (metadata.isAnnotated(Component.class.getName()) ||
        metadata.hasAnnotatedMethods(Bean.class.getName())) {
    beanDef.setAttribute(CONFIGURATION_CLASS_ATTRIBUTE, CONFIGURATION_CLASS_LITE);
    return true;
}
In particular, metadata.hasAnnotatedMethods() calls getDeclaredMethods() on the class, which loads all parameter classes of all methods in that class. I am guessing that this might not be the end of the problem though, because I think the classes are supposed to be unloaded. Could the JVM be caching the class instance for unknowable reasons?
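For what it's worth, the class-loading side effect of getDeclaredMethods() is easy to reproduce in isolation. Here is a small, hypothetical sketch (the class names are stand-ins for mine); run it with -verbose:class and the parameter type shows up as loaded purely as a result of the reflection call:

import java.lang.reflect.Method;

public class ReflectionLoadingDemo {

    // Stand-ins for my real classes:
    static class Lang {}
    static class Something {
        private Lang lang;
        public void setLang(Lang lang) { this.lang = lang; }
    }

    public static void main(String[] args) {
        System.out.println("About to reflect over Something...");
        // getDeclaredMethods() resolves every parameter and return type, so Lang
        // is loaded right here as a side effect - visible with -verbose:class -
        // even though no Lang instance is ever created.
        for (Method m : Something.class.getDeclaredMethods()) {
            System.out.println(m);
        }
    }
}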
OK, I have solved the problem. Essentially, it is a Spring problem in conjunction with some custom extensions of ours. In case anyone comes across something similar, I will explain step by step what is happening.
First of all, we have a custom BeanDefinitionParser in our project. This class had the following definition:
private static class ControllerBeanDefinitionParser extends AbstractSingleBeanDefinitionParser {
    @Override
    protected Class<?> getBeanClass(Element element) {
        try {
            return Class.forName(element.getAttribute("class"));
        } catch (ClassNotFoundException e) {
            throw new RuntimeException("Class " + element.getAttribute("class") + " not found.", e);
        }
    }
    // code to parse XML omitted for brevity
}
Now, the problem occurs after all bean definitions have been read and the BeanDefinitionRegistryPostProcessors begin to kick in. At this stage, a class called ConfigurationClassPostProcessor starts looking through all bean definitions, searching for bean classes annotated with @Configuration or that have methods annotated with @Bean.
In the process of reading annotations for a bean, Spring uses the AnnotationMetadata interface. For most regular beans, an ASM-based subclass called AnnotationMetadataVisitor is used, which reads the class file without loading the class. However, when parsing the bean definitions, if you have overridden the getBeanClass() method to return a class instance, like we had, a StandardAnnotationMetadata instance is used instead. When StandardAnnotationMetadata.hasAnnotatedMethods(..) is invoked, it calls Class.getDeclaredMethods(), which in turn causes the class loader to load all classes used as method parameters in that class. Classes loaded this way are not unloaded and re-examined later, and thus never woven, since all this happens before the AspectJ transformer is registered.
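To illustrate the two metadata paths (a rough sketch only; the package "com.example" is a placeholder, and exact class names differ a bit between Spring versions):

import org.springframework.core.type.AnnotationMetadata;
import org.springframework.core.type.StandardAnnotationMetadata;
import org.springframework.core.type.classreading.MetadataReader;
import org.springframework.core.type.classreading.SimpleMetadataReaderFactory;

public class MetadataPathsDemo {
    public static void main(String[] args) throws Exception {
        // ASM path: the .class file is parsed as a resource; the JVM never loads
        // Something, so none of its method parameter types get loaded either.
        MetadataReader reader =
                new SimpleMetadataReaderFactory().getMetadataReader("com.example.Something");
        AnnotationMetadata asmMetadata = reader.getAnnotationMetadata();
        asmMetadata.hasAnnotatedMethods("org.springframework.context.annotation.Bean");

        // Reflection path: requires the Class object, so Something is loaded, and
        // hasAnnotatedMethods() -> getDeclaredMethods() drags in its parameter types too.
        AnnotationMetadata reflective =
                new StandardAnnotationMetadata(Class.forName("com.example.Something"));
        reflective.hasAnnotatedMethods("org.springframework.context.annotation.Bean");
    }
}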
Now, my problem was that I had a class like so:
public class Something {

    private Lang lang;

    public void setLang(Lang lang) {
        this.lang = lang;
    }
}
Then, I had a bean of class Something that was parsed using our custom ControllerBeanDefinitionParser. This triggered the wrong annotation detection procedure, which triggered unexpected class loading, which meant that AspectJ never got a chance to weave Lang.
The solution was to not override getBeanClass(..), but instead override getBeanClassName(..), which according to the documentation is preferable:
private static class ControllerBeanDefinitionParser extends AbstractSingleBeanDefinitionParser {
    @Override
    protected String getBeanClassName(Element element) {
        return element.getAttribute("class");
    }
    // code to parse XML omitted for brevity
}
Lesson of the day: Do not override getBeanClass unless you really mean it. Actually, don't try to write your own BeanDefinitionParser unless you know what you're doing.
Fin.
If your class is not mentioned in the -verbose/-debug output, that suggests to me it is not being loaded by the loader you think it is. Can you be 100% sure that 'Lang' isn't on the classpath of a classloader higher in the hierarchy? Which classloader is loading Lang at the point in time when you trigger your breakpoint?
Also, you don't mention the AspectJ version - 1.6.7 had issues with LTW for anything but a trivial aop.xml. You should be on 1.6.8 or 1.6.9.
How does ltw actually work?
Put simply, an AspectJ weaver is created for each classloader that may want to weave code. AspectJ is asked if it wants to modify the bytes for a class before it is defined to the VM. AspectJ looks at any aop.xml files it can 'see' (as resources) through the classloader in question and uses them to configure itself. Once configured it weaves the aspects as specified, taking into account all include/exclude clauses.
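At the JVM level this hooks in through java.lang.instrument. The following is only an illustrative sketch of that hook, not AspectJ's actual weaver code: an agent registers a ClassFileTransformer, and the JVM hands it the bytes of every class right before the class is defined.

import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

// Illustrative only: a -javaagent that sees every class just before it is defined.
// AspectJ's load-time weaver sits behind an equivalent hook and returns woven bytes.
public class SketchAgent {
    public static void premain(String agentArgs, Instrumentation inst) {
        inst.addTransformer(new ClassFileTransformer() {
            @Override
            public byte[] transform(ClassLoader loader, String className,
                                    Class<?> classBeingRedefined, ProtectionDomain pd,
                                    byte[] classfileBuffer) {
                System.out.println("defining " + className + " via " + loader);
                return null; // null = leave the bytes untouched
            }
        });
    }
}

A class that is loaded before such a transformer is registered, or by a loader the weaver never attached to, simply never passes through transform() - which matches the symptom described above.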
Andy Clement
AspectJ Project Lead
Option 1) AspectJ is open source. Crack it open and see what is going on.
Option 2) Rename your class to Bang and see if it starts working.
I would not be surprised if there is hard-coding to skip "lang" in there, though I can't say why.
Edit -
Seeing code like this in the source
if (superclassnameIndex > 0) { // May be zero -> class is java.lang.Object
    superclassname = cpool.getConstantString(superclassnameIndex, Constants.CONSTANT_Class);
    superclassname = Utility.compactClassName(superclassname, false);
} else {
    superclassname = "java.lang.Object";
}
It looks like they are trying to skip weaving of java.lang stuff. I don't see anything for just "lang", but it may be there (or a bug).
Related
Is it possible to implement the same code but have it enabled only when a dependency is added to a Spring Boot project?
If possible, how to achieve it?
I want to implement the code like this:
DoSomethingUtil doSomethingUtil = new DoSomethingUtil();
doSomethingUtil.send("API URL", "System A", "Hello");
It should do nothing when the project does not include an implementation of DoSomethingUtil.java.
After adding the dependency that provides the implementation of DoSomethingUtil.java to pom.xml, it should actually do something.
Given that you don't need to know about DoSomethingUtil anywhere else in your code, you can run something on it only if it's present in your classpath (without importing it) if you use reflection all the way:
try {
    Class<?> dsuClass = Class.forName("do.something.util.DoSomethingUtil");
    Object dsuInstance = dsuClass.getConstructor().newInstance();
    Method sendMethod = dsuClass.getDeclaredMethod("send", String.class, String.class, String.class);
    sendMethod.invoke(dsuInstance, "API URL", "System A", "Hello");
} catch (Exception ignored) {}
You may want to revisit the poor error handling above to distinguish (at least) between class not being present in the classpath and send() method invocation failure.
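For example, a slightly more discriminating version might look like this (a sketch only; the class and method names are carried over from the snippet above, and Method is java.lang.reflect.Method):

try {
    Class<?> dsuClass = Class.forName("do.something.util.DoSomethingUtil");
    Object dsuInstance = dsuClass.getConstructor().newInstance();
    Method sendMethod = dsuClass.getDeclaredMethod("send", String.class, String.class, String.class);
    sendMethod.invoke(dsuInstance, "API URL", "System A", "Hello");
} catch (ClassNotFoundException e) {
    // dependency not on the classpath: do nothing, as intended
} catch (ReflectiveOperationException e) {
    // the class is present but the call failed: this is worth logging
    e.printStackTrace();
}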
What you appear to be describing is adding a dependency, not "importing" something.
Will it work?
Sort of. What you could do is overlay the definition of the.pkg.DoSomethingUtil with another version of the.pkg.DoSomethingUtil in a different JAR file. It can work, but it makes your application sensitive to the order of the JARs on the runtime classpath. That makes your application fragile ... to say the least.
You can probably make this work with classic Java if you have full control of the runtime classpath. However:
I'm not sure if it will work with SpringBoot.
If you tried this sort of thing on Android, the APK builder would protest. It treats the scenario of two classes with the same full name as an error.
I think there is a better solution:
Refactor the code so that there is a DoSomethingUtil interface and two classes; e.g. RealDoSomethingUtil and DummyDoSomethingUtil.
Replace new DoSomethingUtil() with a call to a factory method.
Implement the factory method something like this:
private static Class<?> doSomethingClass;

public static synchronized DoSomethingUtil makeDoSomethingUtil() {
    if (doSomethingClass == null) {
        try {
            doSomethingClass = Class.forName("the.pkg.RealDoSomethingUtil");
        } catch (Exception ex) {
            doSomethingClass = the.pkg.DummyDoSomethingUtil.class;
        }
    }
    try {
        return (DoSomethingUtil) doSomethingClass.getDeclaredConstructor().newInstance();
    } catch (ReflectiveOperationException ex) {
        throw new RuntimeException("Could not instantiate " + doSomethingClass.getName(), ex);
    }
}
Put RealDoSomethingUtil into the add-on JAR file, and DoSomethingUtil, DummyDoSomethingUtil and the factory method into the main JAR file.
You should probably make the exception handling more selective so that it deals with different classloader errors differently. For example, if RealDoSomethingUtil exists but can't be loaded, you probably should log that ... or maybe let the exception crash the application.
You could also make use of ServiceLoader, but I don't know if it would be simpler ...
The Java Service Provider Interface (SPI) mechanism is there to detect whether implementations of an interface exist.
You have a jar with an interface DoSomethingUtil in your application.
Possibly on the classpath there is an implementation jar (MyDoSomethingUtilImpl implements DoSomethingUtil), with an entry in META-INF/services.
You then check whether an implementation of the interface is present, and you can provide a fallback implementation for the case where it is not.
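A minimal sketch of that approach (the interface, package and provider-file name are assumptions, not something from the question):

import java.util.ServiceLoader;

// Interface that lives in the main/application jar.
interface DoSomethingUtil {
    void send(String apiUrl, String system, String message);
}

final class DoSomethingUtils {
    // Returns the first implementation advertised via a
    // META-INF/services/DoSomethingUtil file in an optional jar,
    // or a no-op fallback when no such jar is on the classpath.
    static DoSomethingUtil lookup() {
        for (DoSomethingUtil impl : ServiceLoader.load(DoSomethingUtil.class)) {
            return impl;
        }
        return (apiUrl, system, message) -> { /* fallback: do nothing */ };
    }
}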
So, this is something of a follow-on of this question. My current code looks something like this:
@Configuration
@EnableAutoConfiguration
@ComponentScan(basePackages = {"base.pkg.name"})
public class MyApp implements ServletContextAware {

    private ThingDAO beanThingDAO = null;

    public MyApp() {
        // Lots of stuff goes here.
        // No reference to servletContext, though.
        // beanThingDAO gets initialized, and mostly populated.
    }

    @Bean
    public ThingDAO getBeanThingDAO() { return beanThingDAO; }

    @Override
    public void setServletContext(ServletContext servletContext) {
        // All references to servletContext go here, including the
        // bit where we call the appropriate setters on beanThingDAO.
    }
}
The problem is, it's not working. Specifically, my understanding was that setServletContext was supposed to be called by various forms of Spring magic at some point during the startup process, but (as revealed by System.out.println()) it never gets called. I'm trying to finish up the first stage of a major bunch of refactoring, and for the moment it is of notable value to me to be able to handle access to servletContext entirely inside the @Configuration file. I'm not looking for an answer that tells me I should put it in the controllers. I'm looking for an answer that will either tell me how to get it working inside the @Configuration file, or explain why that won't work and what I can do about it.
I just ran into a very similar issue and while I'm not positive it's exactly the same problem I thought I'd record my solution in case it's helpful to others.
In my case I had a single @Configuration class for my Spring Boot application that implemented both ServletContextAware and WebMvcConfigurer.
In the end it turns out that Spring Boot has a bug (or at least an undocumented restriction): ServletContextAware.setServletContext() will never be called if you also implement WebMvcConfigurer on the same class. The solution was simply to split out a separate @Configuration class to implement ServletContextAware.
Here's a simple project I found that demonstrates and explains more what the problem was for me:
https://github.com/dvntucker/boot-issue-sample
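In other words, something along these lines (just a sketch; the class and method names here are invented for illustration):

import javax.servlet.ServletContext;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.context.ServletContextAware;

// Separate @Configuration class whose only job is to receive the ServletContext;
// it deliberately does NOT implement WebMvcConfigurer.
@Configuration
public class ServletContextHolderConfig implements ServletContextAware {

    private ServletContext servletContext;

    @Override
    public void setServletContext(ServletContext servletContext) {
        this.servletContext = servletContext;
    }

    public ServletContext getServletContext() {
        return servletContext;
    }
}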
The OP doesn't show that the bean in question implements both of these interfaces, but given that the OP is using simplified example code, the asker may well have been implementing both interfaces in the actual code and omitted the second one here.
Well, I have an answer. It's not one I'm particularly happy with, so I won't be accepting it, but if someone with my same problem stumbles across this question, I want to at least give them the benefit of my experience.
For some reason, the ServletContextAware automatic call simply doesn't work under those circumstances. It works for pretty much every other component, though. I created a kludge class that looks something like this:
// This class's only purpose is to act as a kludge to in some way get
// around the fact that ServletContextAware doesn't seem to work on MyApp.
// None of the *other* Spring Boot ways of getting the servlet context into a
// file seem to work either.
@Component
public class ServletContextSetter implements ServletContextAware {

    private MyApp app;

    public ServletContextSetter(MyApp app) {
        this.app = app;
    }

    @Override
    public void setServletContext(ServletContext servletContext) {
        app.setServletContext(servletContext);
    }
}
Does the job. I don't like it, and I will be rebuilding things later to make it unnecessary so I can take it out, but it does work. I'm going to hold the checkmark, though, in case anyone can tell me either how to make it work entirely inside the @Configuration-decorated file, or why it doesn't work there.
Note that the @Component annotation is important here; it won't work without it.
OK so this is some kind of theoretical question for you guys.
I am experimenting with cglib's Enhancer - creating a proxy for a class.
My code is running in a Felix OSGi container.
The hierarchy looks kind of similar to that:
// Bundle A
// Import-Package: javax.xml.datatype
// Export-Package: a.foo
package a.foo;

public class Parent {
    protected javax.xml.datatype.XMLGregorianCalendar foo;
    // getter/setter omitted
}

// Bundle B
// Import-Package: a.foo
// DOES NOT IMPORT PACKAGE javax.xml.datatype !!!
package b.bar;

import a.foo.Parent;

public class Child extends Parent {
    protected String bar;
    // getter/setter omitted
}
// Bundle B
// Code extracted from https://github.com/modelmapper/modelmapper/blob/master/core/src/main/java/org/modelmapper/internal/ProxyFactory.java#L59
public Class<?> enhance() {
    Enhancer enhancer = new Enhancer();
    enhancer.setSuperclass(Child.class);
    enhancer.setUseFactory(true);
    enhancer.setUseCache(true);
    enhancer.setNamingPolicy(NAMING_POLICY);
    enhancer.setCallbackFilter(METHOD_FILTER);
    enhancer.setCallbackTypes(new Class[] { MethodInterceptor.class, NoOp.class });
    try {
        return enhancer.createClass(); // createClass() returns the generated proxy class
    } catch (Throwable t) {
        t.printStackTrace();
        return null;
    }
}
From an OSGi point of view, the two bundles - Bundle A and Bundle B - are fully functional.
The package imports/exports are bnd-generated. Although Bundle B does not explicitly import the javax.xml.datatype package, I can create instances of Child without any problem.
So far so good.
But when I try to call the enhance() method and create a Child proxy, cglib throws a NoClassDefFoundError: javax.xml.datatype.XMLGregorianCalendar
OK, I get this - Bundle B's classloader indeed cannot load this class, and in fact cglib's Enhancer seems to be using Bundle B's classloader (the classloader of the Child class) in order to create the proxy.
On the other hand, to handle modularity the OSGi container performs so-called classloading delegation: instead of Bundle B's classloader, the OSGi runtime delegates the loading of the parent class Parent to Bundle A's classloader, which knows how to load all of its fields.
This is why Bundle B does not need to explicitly import the javax.xml.datatype package and does not need to know how to load the XMLGregorianCalendar class, yet can still work with Child objects.
I was wondering - isn't such "delegating" approach suitable in the cglib's use case as well?
Please note that I don't know ANYTHING about byte code manipulation and that might sound like a very stupid question to some.
But I really don't understand - why isn't cglib able to delegate loading of the Parent to Parent's own classloader?
Is such mechanism really not available in cglib? Why? Is cglib not used in combination with OSGi? If so then why?
The Child class does not need to import javax.xml.datatype so long as it does not access the javax.xml.datatype.XMLGregorianCalendar field and you are just using the Child class in the normal way. However in order to generate a proxy class, CGLib will need to have visibility of the internals of the full inheritance hierarchy including the javax.xml.datatype.XMLGregorianCalendar in order to generate the bytecode for the new type. Therefore an import of the package will be required.
Unfortunately bnd cannot predict that you will be doing bytecode generation on the Child class, so it does not add the import of javax.xml.datatype - it only adds the imports required for normal usage.
In general it is a bad idea to inherit from a class imported from another bundle. Java inheritance creates a very tight coupling from the subclass to the superclass, which means you are exposed to the internals of the superclass.
To your last question: CGLib is fairly widely used in OSGi for things like mocking objects during testing. It is less used in production because there is nearly always a better solution than bytecode generation, such as proper usage of the service registry.
I tried combining the OSGi Class Loader Bridge idea that is described here:
https://www.infoq.com/articles/code-generation-with-osgi
... that solves a similar problem with code generation frameworks running within OSGi, with another idea that came to me recently.
The idea is to keep track of the class loaders of the types found in the parent type hierarchy of the user's type. We can later use these class loaders as a fallback for loading types that are otherwise unknown to the bundle class loader of the user's type.
We can then tell CGLIB's Enhancer to use this new class loader for resolving.
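Roughly, it looks like the sketch below (my own illustration, not the actual patch): collect the class loaders of Child's supertypes, wrap them behind the bundle's own loader, and hand the composite loader to the Enhancer.

import java.util.List;

// A class loader that first delegates to the bundle's own loader (the parent),
// then falls back to the loaders of the supertypes in the hierarchy - e.g.
// Parent's loader, which can see javax.xml.datatype.
public class FallbackClassLoader extends ClassLoader {

    private final List<ClassLoader> fallbacks;

    public FallbackClassLoader(ClassLoader primary, List<ClassLoader> fallbacks) {
        super(primary);
        this.fallbacks = fallbacks;
    }

    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        // Only reached when the primary loader (and its delegation chain) failed.
        for (ClassLoader fallback : fallbacks) {
            try {
                return fallback.loadClass(name);
            } catch (ClassNotFoundException ignored) {
                // try the next candidate
            }
        }
        throw new ClassNotFoundException(name);
    }
}

// Usage (Enhancer inherits setClassLoader from cglib's AbstractClassGenerator):
//   enhancer.setClassLoader(new FallbackClassLoader(
//           Child.class.getClassLoader(),
//           java.util.Arrays.asList(Parent.class.getClassLoader())));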
The idea is presented here:
https://github.com/modelmapper/modelmapper/pull/294
I would love to hear the opinion of experienced OSGi specialists about this though.
But so far this seems to work.
Until proven wrong, I accept my own answer.
I've got a project that has gwt-log logging lines scattered throughout. Now I'm trying to write some unit tests and nothing seems to be working.
Any class I test that uses the gwt-log facility causes the following exception to be raised:
Caused by: com.googlecode.gwt.test.exceptions.GwtTestConfigurationException:
A custom Generator should be used to instanciate
'com.allen_sauer.gwt.log.client.LogMessageFormatter',
but gwt-test-utils does not support GWT compiler API,
so you have to add our own GwtCreateHandler with
'GwtTest.addGwtCreateHandler(..)' method or to declare your
tested object with #Mock
I have no need for the logger to function during unit tests, I'd prefer to mock it away.
I've attempted to use Mockito to mock the logger, in a few different ways... obviously I have no idea what I'm doing here, none of the following code snippets helped the situation:
public class ClockTest extends GwtTest {

    @Mock private LogMessageFormatter lmf;
    ...
or
...
@Before
public void init() throws Exception {
    LogMessageFormatter lmf = mock(LogMessageFormatter.class);
    ...
Any clues on how to work this out would be most appreciated!
Colin is right; you have two ways to deal with your error:
1) Mock the LogMessageFormatter or, at a higher level, mock your Logger instance. gwt-test-utils provides a simple API for mocking with either Mockito or EasyMock: http://code.google.com/p/gwt-test-utils/wiki/MockingClasses
2) Provide your own GwtCreateHandler to instantiate the LogMessageFormatter or, at a higher level, your own Logger instance.
Internally, gwt-log relies on GWT's deferred binding to instantiate a LogMessageFormatter object based on your configuration, which is parsed at compile time. It uses GWT's generator API to create the LogMessageFormatter class, but gwt-test-utils is not able to use those kinds of Generators.
You'll have to do it "by hand", with gwt-test-utils' deferred binding support: GwtCreateHandlers.
Your "LoggerGwtCreateHandler" could use the JDK's InvocationHandler and Proxy classes to write a proxy for the Logger interface which would simply swallow each method call, since I guess you won't care about any log calls in your tests.
Here is a discussion on how to write a GwtCreateHandler : https://groups.google.com/forum/?fromgroups#!topic/gwt-test-utils-users/r_cbPsw9nIE
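Something along these lines, for example (just a sketch; "Logger" stands for whatever interface gwt-log asks GWT.create() for, and the helper class name is made up):

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public final class SilentProxies {

    // Returns an implementation of the given interface that swallows every call.
    @SuppressWarnings("unchecked")
    public static <T> T silent(Class<T> iface) {
        InvocationHandler swallow = (Object proxy, Method method, Object[] args) -> {
            Class<?> rt = method.getReturnType();
            if (rt == boolean.class) return false; // e.g. equals()
            if (rt == int.class) return 0;         // e.g. hashCode()
            return null;                           // void and reference returns
        };
        return (T) Proxy.newProxyInstance(iface.getClassLoader(),
                new Class<?>[] { iface }, swallow);
    }
}

A GwtCreateHandler could then simply return SilentProxies.silent(...) for the logger type it is asked to create.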
From the error message you posted:
you have to add our own GwtCreateHandler with
'GwtTest.addGwtCreateHandler(..)' method or to declare your
tested object with #Mock
These are the two options you have to proceed. I've only just begun to work with gwt-test-utils, but the main premise is that it doesn't run the GWT compiler or Dev Mode, so it needs other ways to handle implementing 'magic' features like GWT.create. Its method is to either require you to mock the instance (this should be a fairly common idea in most of your tests for other objects involved in testing) or to provide something like a generator, and hook it up using GwtTest.addGwtCreateHandler.
Building a mock logger shouldn't be too bad, nor should implementing GwtCreateHandler - you just need to make something that has all the log methods. If you want the logging to work, those methods need to actually invoke some other logger, like java.util.logging, log4j, slf4j, etc., but that is not required just to get the tests running (though it may be handy for making sure that your logging works, or for finding out why a test is failing).
For those still struggling with this problem, here is what I managed to get working (with a lot of pain too...). It resolves the conflict between gwt-test-utils and gwt-log.
You're of course welcome to modify the format method ;) :
@Before
public void correctLog() {
    this.addGwtCreateHandler(new GwtCreateHandler() {
        @Override
        public Object create(Class<?> classLiteral) throws Exception {
            if (classLiteral.isAssignableFrom(LogMessageFormatter.class)) {
                return new LogMessageFormatter() {
                    @Override
                    public String format(String logLevelText, String category,
                            String message, Throwable throwable) {
                        return message + " : " + throwable.getLocalizedMessage();
                    }
                };
            }
            return null;
        }
    });
}
I think I've discovered a kind of Schrödinger's cat problem in my code. The body of a function is never executed if I change one line within the body of that same function; but if I leave that line alone, the function executes. Somehow the program knows ahead of time what the body is, and decides not to call it...
I'm working on an Eclipse RCP application in Java, and have need to use their Error Handling System. According to the page linked,
There are two ways for adding handlers to the handling flow.
using extension point org.eclipse.ui.statusHandlers
by the workbench advisor and its method {@link WorkbenchAdvisor#getWorkbenchErrorHandler()}.
So I've gone into my ApplicationWorkbenchAdvisor class, and overridden the getWorkbenchErrorHandler method:
@Override
public synchronized AbstractStatusHandler getWorkbenchErrorHandler()
{
    System.out.println("IT LIVES!");
    if (myErrorHandler == null)
    {
        AbstractStatusHandler delegate = super.getWorkbenchErrorHandler();
        MyStatusHandler otherThing = new MyStatusHandler(delegate);
        myErrorHandler = otherThing;
    }
    return myErrorHandler;
}
The MyStatusHandler is meant to act as a wrapper for the delegate handler; I've renamed the class for anonymity. As it is, above, this function is never called. The println never happens, and even in debug mode with breakpoints set, they never trigger. Now the weird part: if I change the line that assigns myErrorHandler to
myErrorHandler = delegate;
then the function is called; multiple times, in fact!
This problem has me and two java-savvy coworkers stumped, so I'm hoping the good people of SO can help us!
As it turned out, my problem was that the MyStatusHandler class was defined in a different plugin, which presumably wasn't fully loaded yet. That doesn't seem to add up entirely, but once I moved the class definition of my error handler into the same plugin that was calling it during startup, the problems went away.