Intercepting method calls in Java OSGi

I'm currently working on a backup and restore mechanism for an OSGi (Java) based platform and would like to do the following:
BUNDLE A - Some package:
void methodDefinedByInterface(Class1 a, Class2 b){
...
}
I'd like to be able to add something like an annotation to this method as follows:
@Backup
void methodDefinedByInterface(Class1 a, Class2 b){
...
}
So that I can gather the class and method information, plus the argument values themselves, in another bundle and back that data up: "Method call on class blabla in package blabla with parameters .. .. ..".
Is this possible within OSGi? I've read up on AspectJ but most information I found seemed quite dated. Or can I add an implementation to the target platform?

See the Weaving Hook specification in the OSGi Core spec. You can implement the hook and weave your annotations into loaded classes, as well as add the necessary dynamic import package statements so that the classes have visibility of the package(s) containing your annotations.
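A minimal sketch of such a hook (the package names are placeholders and the actual bytecode transformation, e.g. with ASM, is left out):

import java.util.List;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.hooks.weaving.WeavingHook;
import org.osgi.framework.hooks.weaving.WovenClass;

public class BackupWeavingActivator implements BundleActivator, WeavingHook {

    @Override
    public void start(BundleContext context) {
        // Register the hook as a service; the framework calls weave() for every
        // class loaded after this registration.
        context.registerService(WeavingHook.class, this, null);
    }

    @Override
    public void stop(BundleContext context) {
        // The service registration is cleaned up automatically when the bundle stops.
    }

    @Override
    public void weave(WovenClass wovenClass) {
        // Only touch classes in the packages you care about.
        if (!wovenClass.getClassName().startsWith("com.example.target")) {
            return;
        }
        // Rewrite the class bytes with the bytecode library of your choice, e.g. to
        // wrap methods carrying @Backup with a call into your backup bundle.
        wovenClass.setBytes(transform(wovenClass.getBytes()));
        // Give the woven class visibility of the package holding the backup API.
        List<String> imports = wovenClass.getDynamicImports();
        imports.add("com.example.backup.api");
    }

    private byte[] transform(byte[] original) {
        // Placeholder: plug in ASM/Javassist here.
        return original;
    }
}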

Related

Java 9 default methods in interfaces: from which modules are they invoked?

Suppose you have moduleA and moduleB. ModuleA defines an interface (for instance for a service) and moduleB has a concrete class that implements the interface (provides the service).
Now, if the interface has a default method and you invoke it on the class in moduleB (from another module), is this invocation supposed to be performed inside moduleA or moduleB?
Apparently it is from moduleA... what's the rationale?
Example: suppose you have a code that does this:
InputStream is = this.getClass().getResourceAsStream(fullPath);
If this code lies in the implementation of the service in moduleB, the stream will be opened. But if the code lies in the default method in moduleA, then when the service is invoked on moduleB you need the resource to be "opened" in moduleB (so it seems that the invocation is treated as coming from "outside" moduleB).
I would like to read about the reason for that.
Thanks.
Edit: adding an example to my question.
Suppose you have in moduleA this code:
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public interface PropertiesProvider {

    public default Properties get(String domain) {
        Class<?> clazz = this.getClass();
        System.out.println(" CLASS " + clazz);
        InputStream is = clazz.getResourceAsStream(domain);
        if (is != null) {
            Properties props = new Properties();
            try {
                props.load(is);
                return props;
            } catch (IOException e) {
                // log
            }
        }
        return null;
    }
}
and in moduleB
public class PropertiesProviderImpl implements PropertiesProvider {}
If you invoke the service from moduleA, the call is traced as coming from class PropertiesProviderImpl; it finds the resource but does not load it unless the resource is "opened".
If you copy the code into PropertiesProviderImpl, the call is traced to that same class; it finds the resource and loads it even when it is not "opened".
So my question is: why the difference, since the call comes from the same class?
(The difference being that in one case the method is, in a sense, inherited from the default method in the interface.)
Look at the documentation of getResourceAsStream: "If this class is in a named Module then this method will attempt to find the resource in the module."
In the first case your code (in moduleA) sees the Type but cannot see the class which implements your Type, because it is in moduleB. In the second case your code can see the class which "implements" the Type.
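In module-descriptor terms, the "opened" requirement the tests keep running into could be expressed like this (a sketch with placeholder module and package names, assuming the properties files sit in the implementation's package):

// Hypothetical module-info.java for moduleB.
module moduleB {
    requires moduleA;

    // Expose the implementation as a provider of the interface defined in moduleA.
    provides com.example.spi.PropertiesProvider
        with com.example.impl.PropertiesProviderImpl;

    // Because the code calling getResourceAsStream lives in moduleA (the default
    // method), the package holding the resources must be opened to moduleA.
    // The same call compiled directly into PropertiesProviderImpl would not need
    // this, since a module can always read its own resources.
    opens com.example.impl to moduleA;
}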
Look at the reference below; the most important sentences are:
In a modular setting the invocation of Class::forName will continue to work so long as the package containing the provider class is known to the context class loader. The invocation of the provider class’s constructor via the reflective newInstance method, however, will not work: The provider might be loaded from the class path, in which case it will be in the unnamed module, or it might be in some named module, but in either case the framework itself is in the java.xml module. That module only depends upon, and therefore reads, the base module, and so a provider class in any other module will be not be accessible to the framework.
[...]
instead, revise the reflection API simply to assume that any code that reflects upon some type is in a module that can read the module that defines that type.
[Long answer]: reflective-readability
A framework is a facility that uses reflection to load, inspect, and instantiate other classes at run time [...]
Given a class discovered at run time, a framework must be able to access one of its constructors in order to instantiate it. As things stand, however, that will usually not be the case.
The platform’s streaming XML parser, e.g., loads and instantiates the implementation of the XMLInputFactory service named by the system property javax.xml.stream.XMLInputFactory, if defined, in preference to any provider discoverable via the ServiceLoader class. Ignoring exception handling and security checks the code reads, roughly:
String providerName
    = System.getProperty("javax.xml.stream.XMLInputFactory");
if (providerName != null) {
    Class providerClass = Class.forName(providerName, false,
                                        Thread.currentThread().getContextClassLoader());
    Object ob = providerClass.newInstance();
    return (XMLInputFactory) ob;
}
// Otherwise use ServiceLoader
...
In a modular setting the invocation of Class::forName will continue to work so long as the package containing the provider class is known to the context class loader. The invocation of the provider class’s constructor via the reflective newInstance method, however, will not work: The provider might be loaded from the class path, in which case it will be in the unnamed module, or it might be in some named module, but in either case the framework itself is in the java.xml module. That module only depends upon, and therefore reads, the base module, and so a provider class in any other module will be not be accessible to the framework.
To make the provider class accessible to the framework we need to make the provider’s module readable by the framework’s module. We could mandate that every framework explicitly add the necessary readability edge to the module graph at run time, as in an earlier version of this document, but experience showed that approach to be cumbersome and a barrier to migration.
We therefore, instead, revise the reflection API simply to assume that any code that reflects upon some type is in a module that can read the module that defines that type. This enables the above example, and other code like it, to work without change. This approach does not weaken strong encapsulation: A public type must still be in an exported package in order to be accessed from outside its defining module, whether from compiled code or via reflection.
Since we didn't precisely understand the previous response, we carried out some additional tests.
In each test the resource file is not "opened".
1)
The code invoking clazz.getResourceAsStream is in the default method of the interface defining the service. The class implementing the interface does not define any method.
-> this.getClass() yields the implementing class; the test fails to find the resource.
2)
We added this code in the default method:
Object obj = clazz.getConstructor().newInstance();
and yes, it fails.
3) We changed the code so that PropertiesProvider is an abstract class and PropertiesProviderImpl inherits from it.
Same behaviour.
So yes, it means that the same code will behave differently depending on whether you inherit it or invoke it directly.
This is worrying: it means the inner logic of the language is going to lead to convoluted, byzantine behaviours (the reason why we dumped C++).

cglib - creating class proxy in OSGi results in NoClassDefFoundError

OK so this is some kind of theoretical question for you guys.
I am experimenting with cglib's Enhancer, creating a proxy for a class.
My code is running in a Felix OSGi container.
The hierarchy looks something like this:
// Bundle A
// Import-Package: javax.xml.datatype
// Export-Package: a.foo
package a.foo;

public class Parent {
    protected javax.xml.datatype.XMLGregorianCalendar foo;
    // ... -> getter/setter
}
// Bundle B
// Import-Package: a.foo
// DOES NOT IMPORT PACKAGE javax.xml.datatype !!!
package b.bar;

import a.foo.Parent;

public class Child extends Parent {
    protected String bar;
    // ... -> getter/setter
}
// Bundle B
// Code extracted from https://github.com/modelmapper/modelmapper/blob/master/core/src/main/java/org/modelmapper/internal/ProxyFactory.java#L59
public Class<?> enhance() {
    Enhancer enhancer = new Enhancer();
    enhancer.setSuperclass(Child.class);
    enhancer.setUseFactory(true);
    enhancer.setUseCache(true);
    enhancer.setNamingPolicy(NAMING_POLICY);
    enhancer.setCallbackFilter(METHOD_FILTER);
    enhancer.setCallbackTypes(new Class[] { MethodInterceptor.class, NoOp.class });
    try {
        return enhancer.createClass(); // generates the proxy class for Child
    } catch (Throwable t) {
        t.printStackTrace();
        return null;
    }
}
From the OSGi point of view, the two bundles, Bundle A and Bundle B, are fully functional.
The package imports/exports are bnd-generated. Although Bundle B does not explicitly import the javax.xml.datatype package, I can create instances of Child without any problem.
So far so good.
But when I try to call the enhance() method and create a Child proxy, cglib throws a NoClassDefFoundError: javax.xml.datatype.XMLGregorianCalendar.
OK, I get this: Bundle B's classloader indeed cannot load this class, and in fact cglib's Enhancer seems to be using Bundle B's classloader (the classloader of the Child type) in order to create the proxy.
On the other hand, to handle modularity the OSGi container performs so-called classloading delegation: instead of Bundle B's classloader, the OSGi runtime delegates the loading of the parent class Parent to Bundle A's classloader, which knows how to load all of its fields.
This is why Bundle B does not need to explicitly import the javax.xml.datatype package and does not need to know how to load the XMLGregorianCalendar class, yet can still work with Child objects.
I was wondering: isn't such a "delegating" approach suitable in cglib's use case as well?
Please note that I don't know ANYTHING about byte code manipulation and that might sound like a very stupid question to some.
But I really don't understand - why isn't cglib able to delegate loading of the Parent to Parent's own classloader?
Is such mechanism really not available in cglib? Why? Is cglib not used in combination with OSGi? If so then why?
The Child class does not need to import javax.xml.datatype so long as it does not access the javax.xml.datatype.XMLGregorianCalendar field and you are just using the Child class in the normal way. However in order to generate a proxy class, CGLib will need to have visibility of the internals of the full inheritance hierarchy including the javax.xml.datatype.XMLGregorianCalendar in order to generate the bytecode for the new type. Therefore an import of the package will be required.
Unfortunately bnd cannot predict that you will be doing bytecode generation on the Child class, so it does not add the import of javax.xml.datatype; it only adds the imports required for normal usage.
In general it is a bad idea to inherit from a class imported from another bundle. Java inheritance creates a very tight coupling from the subclass to the superclass, which means you are exposed to the internals of the superclass.
To your last question: CGLib is fairly widely used in OSGi for things like mocking objects during testing. It is less used in production because there is nearly always a better solution than bytecode generation, such as proper usage of the service registry.
I tried combining the OSGi Class Loader Bridge idea that is described here:
https://www.infoq.com/articles/code-generation-with-osgi
... that solves a similar problem with code generation frameworks running within OSGi, with another idea that came to me recently.
The idea is to keep track of class loaders of class types that are found in the parent type hierarchy of the user's type. We can later use these class loaders as fallback for loading types that are otherwise unknown to the Bundle's class loader of the user's type.
We can then tell CGLIB's Enhancer to use this new class loader for resolving.
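In outline, the bridge looks something like this (hypothetical class name; the real change lives in the pull request linked below):

import java.util.LinkedHashSet;
import java.util.Set;

// Tries the proxied type's own (bundle) class loader first, then falls back to the
// class loaders of every type found in its superclass hierarchy.
public class HierarchyBridgeClassLoader extends ClassLoader {

    private final Set<ClassLoader> fallbacks = new LinkedHashSet<>();

    public HierarchyBridgeClassLoader(Class<?> proxiedType) {
        super(proxiedType.getClassLoader());
        for (Class<?> c = proxiedType.getSuperclass(); c != null; c = c.getSuperclass()) {
            ClassLoader loader = c.getClassLoader();
            if (loader != null) {
                fallbacks.add(loader);
            }
        }
    }

    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        // Only reached after the parent (the proxied type's bundle class loader) fails;
        // try the loaders collected from the parent hierarchy, e.g. Bundle A's loader,
        // which can see javax.xml.datatype.
        for (ClassLoader loader : fallbacks) {
            try {
                return loader.loadClass(name);
            } catch (ClassNotFoundException ignored) {
                // try the next loader
            }
        }
        throw new ClassNotFoundException(name);
    }
}

// Usage sketch: enhancer.setClassLoader(new HierarchyBridgeClassLoader(Child.class));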
The idea is presented here:
https://github.com/modelmapper/modelmapper/pull/294
I would love to hear the opinion of experienced OSGi specialists about this though.
But so far this seems to work.
Until proven wrong, I accept my own answer.

Organizing a Struts2 project (Which Packages)

Currently my project, developed using plain JSP and servlets, has the following packages:
1- Business package (contains summed-up methods from the service package under a business rule)
2- Service package (contains the different services and their implementations, along with a factory method to call a specific implementation of each service)
3- Controller package (all the servlet controllers)
4- Views (all the JSPs)
5- CustomTags (contains the custom tags)
6- Domain (contains the domain objects)
Now I am planning to implement the same project using Struts2. Could you tell me what packages I should introduce? I know the service and business packages will remain intact, but what about the controller package? Should I place all the actions in the controller package? Any suggestion will be appreciated.
Do not organise all your classes based on their type; they should be organised or grouped together with their immediate collaborators. If you can help it, place XAction and XController together in the same package. It's silly to place XAction in a separate package with 49 other actions that really have no relation to it while its controller is somewhere else.
If you group collaborators together in the same package, it's quite easy to know the working group and to be reasonably confident that changing one probably affects the other. With your original suggestion, who really knows which Action works with which Controller, and so on.
It is possible!
In Struts from 2.0 to 2.3.x (the versions I have used), if you add the struts2-convention-plugin.jar dependency and use its annotations, you can do that:
The default package (generally zx.yz.actions) maps all the Actions in the project and is your root package namespace.
When you create a new package inside the actions package, zx.yz.actions.example for instance, you are creating a new namespace /servletContext/example in your application.
To override this, you only need to put a '/' at the start of the value in the @Action annotation on your method. For example:
import org.apache.struts2.convention.annotation.Action;
import org.apache.struts2.convention.annotation.Result;

public class ExampleAction {

    @Action(value = "/example",
            results = { @Result(name = "ok", type = "httpheader", params = { "status", "200" }) })
    public String execute() {
        return "ok";
    }
}
The '/' in '/example' puts the action in the default namespace.

How does Spring's @Autowired work with interfaces that have no implementation?

I am working with Spring Data's Neo4j graph DB hello-worlds example and I ran across the following code in WorldRepositoriesImpl.java...
@Autowired private WorldRepository worldRepository;
Furthermore, WorldRepository is defined as...
public interface WorldRepository extends MyWorldRepository,
        GraphRepository<World>,
        NamedIndexRepository<World> {
    /* no methods defined here */
}
Now the odd part: no class that I can find actually implements WorldRepository. So, a few questions...
How is this possible? Where is this documented? Is there a way to make this a bit more explicit (less mysterious)?
Running the code with a debugger attached shows that the worldRepository instance wired up by Spring is a proxy object created at runtime.
Looking at the pom.xml and the dependencies included, it looks like the spring-neo4j library bundles in some Aspects that create this implementation class at runtime.
In other words, there is no implementation of this interface declared in the source code - but one is created at runtime with AspectJ and other tools.
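Whatever the exact machinery, the basic trick is that an interface can be implemented at runtime without any source-level class. A minimal, standalone illustration using a plain JDK dynamic proxy (the interface name here is hypothetical, not part of the Spring Data example):

import java.lang.reflect.Proxy;

public class RuntimeProxyDemo {

    // Stand-in for WorldRepository: an interface nobody implements in source code.
    interface GreetingRepository {
        String greet(String name);
    }

    public static void main(String[] args) {
        // Manufacture an implementation at runtime; frameworks like Spring Data do the
        // equivalent (with far more machinery) and inject the resulting proxy object
        // into @Autowired fields.
        GreetingRepository repo = (GreetingRepository) Proxy.newProxyInstance(
                GreetingRepository.class.getClassLoader(),
                new Class<?>[] { GreetingRepository.class },
                (proxy, method, methodArgs) -> {
                    // A real repository proxy would derive a query from the method name;
                    // here we just fabricate a value.
                    return "Hello, " + methodArgs[0];
                });

        System.out.println(repo.greet("world")); // prints: Hello, world
    }
}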

AspectJ Load time weaver doesn't detect all classes

I am using Spring's declarative transactions (the @Transactional annotation) in "aspectj" mode. It works in most cases exactly like it should, but for one class it doesn't. We can call it Lang (because that's what it's actually called).
I have been able to pinpoint the problem to the load time weaver. By turning on debug and verbose logging in aop.xml, it lists all classes being woven. The problematic class Lang is indeed not mentioned in the logs at all.
Then I put a breakpoint at the top of Lang, causing Eclipse to suspend the thread when the Lang class is loaded. This breakpoint is hit while the LTW is weaving other classes! So I am guessing it either tries to weave Lang and fails without reporting it, or some other class has a reference that forces Lang to be loaded before it actually gets a chance to be woven.
I am unsure however how to continue to debug this, since I am not able to reproduce it in smaller scale. Any suggestions on how to go on?
Update: Other clues are also welcome. For example, how does the LTW actually work? There appears to be a lot of magic happening. Are there any options to get even more debug output from the LTW? I currently have:
<weaver options="-XnoInline -Xreweavable -verbose -debug -showWeaveInfo">
I forgot to mention it before: spring-agent is being used to allow LTW, i.e., the InstrumentationLoadTimeWeaver.
Based on the suggestions of Andy Clement I decided to inspect whether the AspectJ transformer is ever even passed the class. I put a breakpoint in ClassPreProcessorAgent.transform(..), and it seems that the Lang class never even reaches that method, despite it being loaded by the same class loader as other classes (an instance of Jetty's WebAppClassLoader).
I then went on to put a breakpoint in InstrumentationLoadTimeWeaver$FilteringClassFileTransformer.transform(..). Not even that one is hit for Lang. And I believe that method should be invoked for all loaded classes, regardless of what class loader they are using. This is starting to look like:
A problem with my debugging. Possibly Lang is not loaded at the time when Eclipse reports it is
Java bug? Far-fetched, but I suppose it does happen.
Next clue: I turned on -verbose:class and it appears as if Lang is being loaded prematurely - probably before the transformer is added to Instrumentation. Oddly, my Eclipse breakpoint does not catch this loading.
This means that Spring is the new suspect. There appears to be some processing in ConfigurationClassPostProcessor that loads classes to inspect them. This could be related to my problem.
These lines in ConfigurationClassBeanDefinitionReader cause the Lang class to be read:
else if (metadata.isAnnotated(Component.class.getName()) ||
        metadata.hasAnnotatedMethods(Bean.class.getName())) {
    beanDef.setAttribute(CONFIGURATION_CLASS_ATTRIBUTE, CONFIGURATION_CLASS_LITE);
    return true;
}
In particular, metadata.hasAnnotatedMethods() calls getDeclaredMethods() on the class, which loads all parameter classes of all methods in that class. I am guessing that this might not be the end of the problem though, because I think the classes are supposed to be unloaded. Could the JVM be caching the class instance for unknowable reasons?
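A quick way to check that guess is a throwaway snippet run with -verbose:class (illustrative only, not code from the project; Other plays the role of Lang):

public class DeclaredMethodsDemo {

    static class Other { }

    static class Holder {
        // Other appears only in a method signature, so loading Holder alone
        // does not load Other.
        void setOther(Other other) { }
    }

    public static void main(String[] args) {
        System.out.println("before getDeclaredMethods()");
        // With -verbose:class, the "[Loaded ...Other ...]" line appears here:
        // building the Method objects resolves parameter and return types,
        // which loads them through the defining class loader.
        Holder.class.getDeclaredMethods();
        System.out.println("after getDeclaredMethods()");
    }
}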
OK, I have solved the problem. Essentially, it is a Spring problem in conjunction with some custom extensions. If anyone comes across something similar, I will try to explain step by step what is happening.
First of all, we have a custom BeanDefinitionParser in our project. This class had the following definition:
private static class ControllerBeanDefinitionParser extends AbstractSingleBeanDefinitionParser {

    @Override
    protected Class<?> getBeanClass(Element element) {
        try {
            return Class.forName(element.getAttribute("class"));
        } catch (ClassNotFoundException e) {
            throw new RuntimeException("Class " + element.getAttribute("class") + " not found.", e);
        }
    }

    // code to parse XML omitted for brevity
}
Now, the problem occurs after all bean definitions have been read and BeanDefinitionRegistryPostProcessor begins to kick in. At this stage, a class called ConfigurationClassPostProcessor starts looking through all bean definitions to search for bean classes annotated with @Configuration or that have methods annotated with @Bean.
In the process of reading annotations for a bean, it uses the AnnotationMetadata interface. For most regular beans, a subclass called AnnotationMetadataVisitor is used. However, when parsing the bean definitions, if you have overridden the getBeanClass() method to return a class instance, as we had, a StandardAnnotationMetadata instance is used instead. When StandardAnnotationMetadata.hasAnnotatedMethods(..) is invoked, it calls Class.getDeclaredMethods(), which in turn causes the class loader to load all classes used as parameters in that class. Classes loaded this way are not correctly unloaded, and thus never woven, since this happens before the AspectJ transformer is registered.
Now, my problem was that I had a class like so:
public class Something {

    private Lang lang;

    public void setLang(Lang lang) {
        this.lang = lang;
    }
}
Then, I had a bean of class Something that was parsed using our custom ControllerBeanDefinitionParser. This triggered the wrong annotation detection procedure, which triggered unexpected class loading, which meant that AspectJ never got a chance to weave Lang.
The solution was to not override getBeanClass(..), but instead override getBeanClassName(..), which according to the documentation is preferable:
private static class ControllerBeanDefinitionParser extends AbstractSingleBeanDefinitionParser {

    @Override
    protected String getBeanClassName(Element element) {
        return element.getAttribute("class");
    }

    // code to parse XML omitted for brevity
}
Lesson of the day: Do not override getBeanClass unless you really mean it. Actually, don't try to write your own BeanDefinitionParser unless you know what you're doing.
Fin.
If your class is not mentioned in the -verbose/-debug output, that suggests to me it is not being loaded by the loader you think it is. Can you be 100% sure that 'Lang' isn't on the classpath of a classloader higher in the hierarchy? Which classloader is loading Lang at the point in time when you trigger your breakpoint?
Also, you don't mention the AspectJ version: 1.6.7 had issues with LTW for anything but a trivial aop.xml. You should be on 1.6.8 or 1.6.9.
How does ltw actually work?
Put simply, an AspectJ weaver is created for each classloader that may want to weave code. AspectJ is asked if it wants to modify the bytes for a class before it is defined to the VM. AspectJ looks at any aop.xml files it can 'see' (as resources) through the classloader in question and uses them to configure itself. Once configured it weaves the aspects as specified, taking into account all include/exclude clauses.
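To make that hook point concrete, here is a generic java.lang.instrument sketch (illustrative only, not AspectJ's actual implementation; it needs a Premain-Class manifest entry in the agent jar and -javaagent on the JVM command line):

import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

public class DemoWeavingAgent {

    public static void premain(String agentArgs, Instrumentation inst) {
        inst.addTransformer(new ClassFileTransformer() {
            @Override
            public byte[] transform(ClassLoader loader, String className,
                                    Class<?> classBeingRedefined,
                                    ProtectionDomain protectionDomain,
                                    byte[] classfileBuffer) {
                // Called once per class, per defining class loader, before the class is
                // defined to the VM. Classes loaded before this transformer was added
                // are never offered for weaving. Returning null leaves the bytes alone.
                if (className != null && className.startsWith("com/example/")) {
                    System.out.println("could weave: " + className + " loaded by " + loader);
                    // A real weaver would rewrite classfileBuffer here.
                }
                return null;
            }
        });
    }
}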
Andy Clement
AspectJ Project Lead
Option 1) AspectJ is open source. Crack it open and see what is going on.
Option 2) Rename your class to Bang, and see if it starts working.
I would not be surprised if there is hard-coding to skip "lang" in there, though I can't say why.
Edit - seeing code like this in the source:
if (superclassnameIndex > 0) {
    // May be zero -> class is java.lang.Object
    superclassname = cpool.getConstantString(superclassnameIndex, Constants.CONSTANT_Class);
    superclassname = Utility.compactClassName(superclassname, false);
} else {
    superclassname = "java.lang.Object";
}
It looks like they are trying to skip weaving of java.lang classes... I don't see anything for just "lang", but it may be there (or a bug).
