I want to add validations to a Java Bean. For example, I want to do the following:
@MaxLength(50)
@RequiredField
public void setEmployeeName(String name) {
    ...
}
I know I can write code that gets the validations for a specific method by calling method.getDeclaredAnnotations() after all the bean values have been set, but I would like to avoid writing this code myself.
Is there anything in Java 6 that gives standard validations via annotations? Do I need AspectJ to invoke these annotations?
Thanks in advance.
You can use the Bean Validation framework. Here is a short overview:
http://relation.to/Bloggers/BeanValidationSneakPeekPartI
Take a look at JSR 303. The RI (Reference Implementation) is here, along with a nice tutorial. And no, you don't need AspectJ.
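For illustration, here is a minimal sketch of what the question's example could look like with the standard JSR 303 annotations (@NotNull and @Size stand in for the hypothetical @RequiredField and @MaxLength); it assumes a Bean Validation provider such as Hibernate Validator is on the classpath.

import java.util.Set;
import javax.validation.ConstraintViolation;
import javax.validation.Validation;
import javax.validation.Validator;
import javax.validation.constraints.NotNull;
import javax.validation.constraints.Size;

public class Employee {

    @NotNull
    @Size(max = 50)
    private String employeeName;

    public void setEmployeeName(String name) { this.employeeName = name; }
    public String getEmployeeName() { return employeeName; }

    public static void main(String[] args) {
        Validator validator = Validation.buildDefaultValidatorFactory().getValidator();
        Employee employee = new Employee();          // employeeName is still null here
        Set<ConstraintViolation<Employee>> violations = validator.validate(employee);
        for (ConstraintViolation<Employee> violation : violations) {
            // Prints e.g. "employeeName must not be null" -- no hand-written reflection needed
            System.out.println(violation.getPropertyPath() + " " + violation.getMessage());
        }
    }
}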
The only way you'll be able to do this is through reflection and a custom validation utility/interceptor/proxy. JSR 303 and JSR 305 were proposed to introduce similar functionality, but nothing like this exists.
One of the problems you'll run into is that these annotations need to be handled at some framework level, or at a minimum intercepted before the invoked action. The two most common-sense, brute-force ways of doing this are either creating a validation utility, or validating pre-invoke in an invocation handler (proxy).
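As a rough illustration of the "utility" variant, here is a hedged sketch using plain reflection; the @MaxLength annotation and the setter-to-getter naming convention are assumptions made up for this example, not an existing API.

import java.lang.annotation.*;
import java.lang.reflect.Method;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface MaxLength {
    int value();
}

class BeanValidator {

    // Called after all bean values have been set; checks each annotated setter
    // against the value returned by the matching getter.
    static void validate(Object bean) throws Exception {
        for (Method setter : bean.getClass().getMethods()) {
            MaxLength maxLength = setter.getAnnotation(MaxLength.class);
            if (maxLength == null || !setter.getName().startsWith("set")) {
                continue;
            }
            // Convention assumed here: setEmployeeName -> getEmployeeName
            Method getter = bean.getClass().getMethod("get" + setter.getName().substring(3));
            Object value = getter.invoke(bean);
            if (value instanceof String && ((String) value).length() > maxLength.value()) {
                throw new IllegalStateException(setter.getName()
                        + ": value exceeds max length of " + maxLength.value());
            }
        }
    }
}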
The reality is that unless this is built into Spring, Struts, Guice, Java itself, etc., you're just creating unnecessary overhead and you're better off checking for validation bounds on demand.
I have a component declared using the @Component annotation, in which there is a set of methods that implement communication with another API. In my product there are operations that are prohibited for a user with an anonymous ID. I want to create an annotation, for example @ProhibitedForAnonym, which, every time the method is called, will compare the anonymous customer's ID with the ID in the method parameter and throw an error if the IDs match. But I don't understand how to do annotation processing in OSGi; maybe some kind of interceptor?
There is no general interception framework in OSGi. However, you could do interception in the following ways:
Don't. Personally, I feel that since we have lambdas, a code-based solution wins hands down over a 'magic' annotation check. It is about the same number of characters, but a lambda-based call allows me to single-step, provides context to the security check, does not suffer from the this problem, is testable, and requires no complex framework with lots of bug opportunities. (A small sketch of this approach follows at the end of this answer.)
Use the bytecode weaving support in OSGi. You need to register a weaver early and then weave any class that has these annotations. You can take a look at https://github.com/aQute-os/biz.aQute.osgi.util/tree/master/biz.aQute.trace for an example of how to use the bytecode weaver. Make sure your weaver is registered first: if you use bndtools you can add it to the -runpath so it runs before anybody else, or use start levels.
Use proxying. You can 'hide' the original service with the Service Hooks and then register a proxy. In the proxy you can then do the annotation check. This also requires that this code runs first and cannot be updated. I think the spec has an example of this.
You might want to read: https://www.aqute.biz/appnotes/interceptors.html
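To make the first option concrete, here is a small sketch of the lambda-based check; AnonymousGuard and its method are made-up names for this example, not an OSGi API.

import java.util.function.Supplier;

// Hypothetical guard: knows which IDs are anonymous and runs the protected action
// only for known customers, otherwise it throws.
interface AnonymousGuard {
    <T> T requireKnownCustomer(long customerId, Supplier<T> action);
}

class CustomerService {

    private final AnonymousGuard guard;

    CustomerService(AnonymousGuard guard) {
        this.guard = guard;
    }

    String loadProfile(long customerId) {
        // Explicit, debuggable check instead of a "magic" annotation; the lambda keeps
        // the protected code right next to the check.
        return guard.requireKnownCustomer(customerId, () -> doLoadProfile(customerId));
    }

    private String doLoadProfile(long customerId) {
        return "profile-" + customerId;
    }
}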
I checked out this SO post which discusses using @RequestMapping in an interface. Although the post describes ways to achieve this, it does not mention the pros and cons of doing so.
Architecture-wise, is it a bad idea to use an interface for a controller?
What benefit will we achieve in terms of polymorphism for the controller?
There is nothing wrong with putting @RequestMapping on the interface. However, make sure you have the right reasons to do it. Polymorphism is probably not a good reason; you will not have a different concrete implementation swapped in at runtime or anything like that.
On the other hand, for example, Swagger Codegen generates interfaces with @RequestMapping and all the annotations on the methods, fields and return types (together with @Api definitions etc.). Your controller then implements this interface. In this case it makes a lot of sense, because it simply enforces that you respect the Swagger / OpenAPI interface definition originally defined in YAML. There is a nice side effect that it makes your controller much cleaner. (Clients can also use the same YAML to generate their own client stubs for their own language frameworks.)
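A minimal sketch of that layout (EmployeeApi and EmployeeController are illustrative names, not Swagger Codegen output): the mappings live on the interface and the controller only implements it.

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RequestMapping("/api/employees")
public interface EmployeeApi {

    // The contract (path, HTTP method, parameter binding) is declared once, here.
    @GetMapping("/{id}")
    ResponseEntity<String> getEmployeeName(@PathVariable("id") long id);
}

@RestController
class EmployeeController implements EmployeeApi {

    // Inherits the mappings from the interface (recent Spring versions; see the issue below).
    @Override
    public ResponseEntity<String> getEmployeeName(long id) {
        return ResponseEntity.ok("employee-" + id);
    }
}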
If you opt to do this, make sure you use the latest version of the Spring Framework, because there were some bugs which were fixed only very recently, where not all annotations were being inherited.
https://github.com/spring-projects/spring-framework/issues/15682
If you are stuck with an older Spring version, you might need to repeat the same annotations in your controller.
So, the real reason this would make sense is to enforce the interface contract, and separate the interface definition (together with any information pertaining to the interface) from the actual concrete implementation.
Some arguments against this are that:
the request mapping is an implementation detail, or
since you only have one active controller implementation, you might as well put it on the implementation
(others will probably be provided in different answers soon).
I was recently faced with the same decision of whether to put JAX-RS annotations on the interface or the implementation. Since everything always "depends" on some context, I want to give you an argument for putting @RequestMapping (or e.g. @Path, etc. if not using Spring) on the interface:
If you are not using HATEOAS or discovering the endpoints via some other means, the endpoint URL, HTTP method, etc. are usually fixed and a static part of your backend API. Therefore, you might as well put them on an interface. This was the case for me because I control both the client and the server side.
The controller usually has only one active implementation, so the reason for doing this is not polymorphism. But your implementation usually has a lot more dependencies than the plain interface. So if you export/provide only your interface to clients (e.g. in a separate JAR/Java project/...), you only provide things that the clients really require. In my specific case, I delivered the annotated interface so that a client implementation could scan it using a REST client library and detect the endpoint paths automatically.
I want to use AOP to measure the execution time of some methods that I mark with an annotation I created. My problem, however, is that I call the annotated method internally, from within the same class. For example:
public void login(params) {
    // some logic ...
    performLogin();
    // some logic ...
}

@Measured
public void performLogin() {
    // some logic ...
}
This is a known issue caused by the fact that Spring AOP uses a proxy-based approach that does not "see" internal calls within the same class. Apparently I can solve this by using AspectJ instead of Spring AOP, and if I understand correctly, it can be configured from within Spring itself. From what I found, it looks like I should add the @EnableAspectJAutoProxy annotation to configure Spring to use AspectJ instead of its own AOP. Unfortunately, it did not help: after adding the annotation, the annotated method was still not intercepted.
There is a lot of information on this topic in Spring reference documentation and I got a bit lost. Is there anything else I am supposed to do so that AspectJ will be used?
P.S. Please note that I cannot refactor the whole class and move the calling method outside.
P.P.S. I also verified my pointcut configuration. I annotated the calling method which is invoked externally and it worked fine.
Proxies can only achieve a subset of the full capabilities of the actual AspectJ system, basically advice that wraps methods. Due to their nature, proxies have the following limitations:
interception of external calls only (calls that cross the proxy boundary)
interception of public members only (private/protected members can't be intercepted)
no awareness of local calls (calls via this or super)
<aop:aspectj-autoproxy /> is not enough; it only wraps methods with proxies. You need something like this: <context:load-time-weaver/>
If you want to be able to advise fields, for example, you would need to enable the use of native AspectJ.
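For reference, a hedged sketch of what the measuring aspect plus the load-time weaving setup might look like; the annotation's package com.example is assumed, and you would also need a META-INF/aop.xml plus the spring-instrument (or AspectJ weaver) -javaagent at startup.

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.EnableLoadTimeWeaving;

@Aspect
public class MeasuredAspect {

    // Matches any method annotated with @Measured, including self-invoked ones,
    // because the advice is woven into the bytecode instead of applied via a proxy.
    @Around("execution(* *(..)) && @annotation(com.example.Measured)")
    public Object time(ProceedingJoinPoint pjp) throws Throwable {
        long start = System.nanoTime();
        try {
            return pjp.proceed();
        } finally {
            System.out.println(pjp.getSignature() + " took "
                    + (System.nanoTime() - start) / 1_000_000 + " ms");
        }
    }
}

@Configuration
@EnableLoadTimeWeaving   // Java-config counterpart of <context:load-time-weaver/>
class WeavingConfig {
}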
I would like to build my own custom DI framework based on Java annotations, and I need a little direction to get started. I know it would be much easier to use one of the many wonderful frameworks out there, such as Guice or Spring, but for the sake of my own curiosity I'd like to build my own.
I'm not very familiar with annotations, so I'm having a bit of trouble finding resources and would really appreciate someone just spelling out a few of the steps I'll need to take to get started.
As mentioned above, I'd like to take a factory approach and somehow label my getters with an @Resource or @Injectable type annotation, and then in my business classes be able to set my variable dependencies with an @Inject annotation and have the resource automatically available.
Does anyone have any sort of resource they can pass along to help me understand the process of tagging methods based on annotations and then retrieving values from a separate class based on an annotation? A little direction is all I need, something to get me started. And of course I'll be happy to post a little code sample here once I get going, for the sake of others' future reading.
EDIT
The resources I am using to put this together:
Java Reflection: Annotations
How to find annotations in a given package? (Stack Overflow)
Scanning Annotations at Runtime
I have not actually finished writing this yet, but the basic task list is going to be as follows (for anyone who might be interested in doing something similar in the future):
1. At runtime, scan the class for all @Inject fields and get each field's object type.
2. Scan all classes (or just a specific package of classes, I haven't decided yet) for methods annotated with @InjectableResource.
3. Loop over all the annotated methods and find the one that returns the object type I am looking for.
4. Run the method and get the dependency.
It will also be helpful to note that when scanning all the classes I will be using a library called Javassist. Basically, it allows me to read the bytecode information of each class without actually loading the class, so I can read the annotation strings without creating serious memory problems.
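Once the classes are found, the core wiring boils down to plain reflection. Here is a rough sketch of steps 1-4; the annotation names are the ones from the question, the factory-registration mechanism is made up for the example, and the Javassist scanning step is left out.

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.util.List;

@Retention(RetentionPolicy.RUNTIME) @Target(ElementType.FIELD)
@interface Inject {}

@Retention(RetentionPolicy.RUNTIME) @Target(ElementType.METHOD)
@interface InjectableResource {}

class MiniInjector {

    private final List<Object> factories;   // objects whose getters may be annotated

    MiniInjector(List<Object> factories) {
        this.factories = factories;
    }

    // Step 1: find all @Inject fields on the target and fill them in.
    void inject(Object target) throws Exception {
        for (Field field : target.getClass().getDeclaredFields()) {
            if (!field.isAnnotationPresent(Inject.class)) {
                continue;
            }
            field.setAccessible(true);
            field.set(target, resolve(field.getType()));
        }
    }

    // Steps 2-4: loop over the annotated factory methods, pick the one whose
    // return type matches, run it and hand back the dependency.
    private Object resolve(Class<?> wanted) throws Exception {
        for (Object factory : factories) {
            for (Method method : factory.getClass().getMethods()) {
                if (method.isAnnotationPresent(InjectableResource.class)
                        && wanted.isAssignableFrom(method.getReturnType())) {
                    return method.invoke(factory);
                }
            }
        }
        throw new IllegalStateException("No injectable resource found for " + wanted);
    }
}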
Interesting that you want to build your own. I love Google Guice - it makes code so elegant and simple.
I've used this guide before which I found pretty useful for learning about annotations and how you can pull them out of classes and methods.
You will have to define your own annotations, which is done using @interface. Then you will have to define some kind of class for doing bindings, e.g. "where you see this interface, bind in this concrete class". Finally, you will need some logic to pull it all together, e.g. go through each class, find each annotation, and then find a suitable binding.
Give consideration to things like lazy instantiation through reflection, and to singletons. Guice, for example, allows you to use a singleton so you're only using one instance of the concrete class, or you can bind a new instance each time.
Good luck!
Have a look at the following methods:
Class.getAnnotation(Class)
Class.getAnnotations()
Class.getDeclaredAnnotations()
Methods of the same name also exist on the java.lang.reflect.Method, java.lang.reflect.Field and java.lang.reflect.Constructor classes.
So in order to use these sorts of methods, you need to know a bit about Java reflection.
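A tiny usage sketch of those reflection calls (the @InjectableResource annotation and SomeFactory class are placeholders for this example; note that the annotation must have runtime retention or getAnnotation will return null):

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

public class AnnotationScanDemo {

    @Retention(RetentionPolicy.RUNTIME)
    @interface InjectableResource {}

    static class SomeFactory {
        @InjectableResource
        public String connectionUrl() { return "jdbc:h2:mem:test"; }
    }

    public static void main(String[] args) {
        for (Method method : SomeFactory.class.getMethods()) {
            // getAnnotation returns null when the annotation is not present
            if (method.getAnnotation(InjectableResource.class) != null) {
                System.out.println(method.getName() + " -> " + method.getReturnType().getSimpleName());
            }
        }
    }
}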
As you may know, when you proxy an object, like when you create a bean with transactional attributes for Spring/EJB, or even when you create a partial mock with some frameworks, the proxied object doesn't know about the proxy, so internal calls are not redirected and therefore not intercepted either...
That's why if you do something like that in Spring:
@Transactional
public void doSomething() {
    doSomethingInNewTransaction();
    doSomethingInNewTransaction();
    doSomethingInNewTransaction();
}

@Transactional(propagation = Propagation.REQUIRES_NEW)
public void doSomethingInNewTransaction() {
    // ...
}
When you call doSomething, you expect to get 3 new transactions in addition to the main one, but due to this problem you actually only get one...
So I wonder how you handle this kind of problem...
I'm actually in a situation where I must handle a complex transactional system, and I don't see any better way than splitting my service into many small services, so that I'm sure to pass through all the proxies...
That bothers me a lot because all the code belongs to the same functional domain and should not be split...
I've found this related question with interesting answers:
Spring - #Transactional - What happens in background?
Rob H says that we can inject the Spring proxy into the service and call proxy.doSomethingInNewTransaction(); instead.
It's quite easy to do and it works, but I don't really like it...
Yunfeng Hou says this:
So I write my own version of CglibSubclassingInstantiationStrategy and proxy creator so that it will use CGLIB to generate a real subclass which delegates calls to its super rather than to another instance, which Spring is doing now. So I can freely annotate any methods (as long as they are not private), and from wherever I call these methods, they will be taken care of. Well, I still have a price to pay: 1. I must list all annotations for which I want to enable the new CGLIB subclass creation. 2. I cannot annotate a final method since I am now generating subclasses, so a final method cannot be intercepted.
What does he mean by "which Spring is doing now"? Does this mean internal transactional calls are now intercepted?
What do you think is better?
Do you split your classes when you need some transactional granularity?
Or do you use some workaround like above? (please share it)
I'll talk about Spring and @Transactional, but the advice applies to many other frameworks as well.
This is an inherent problem with proxy-based aspects. It is discussed in the Spring documentation here:
http://static.springsource.org/spring/docs/3.0.x/spring-framework-reference/html/aop.html#aop-understanding-aop-proxies
There are a number of possible solutions.
Refactor your classes to avoid the self-invocation calls that bypass the proxy.
The Spring documentation describes this as "The best approach (the term best is used loosely here)".
Advantages of this approach are its simplicity and that there are no ties to any framework. However, it may not be appropriate for a very transaction-heavy code base, as you'd end up with many trivially small classes.
Internally in the class get a reference to the proxy.
This can be done by injecting the proxy or with a hard-coded AopContext.currentProxy() call (see the Spring docs above); a short sketch follows below.
This method allows you to avoid splitting the classes but in many ways negates the advantages of using the transactional annotation. My personal opinion is that this is one of those things that is a little ugly but the ugliness is self contained and might be the pragmatic approach if lots of transactions are used.
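A hedged sketch of this second option, assuming the proxy is exposed (e.g. via @EnableAspectJAutoProxy(exposeProxy = true) or <aop:aspectj-autoproxy expose-proxy="true"/>):

import org.springframework.aop.framework.AopContext;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

public class SomeService {

    @Transactional
    public void doSomething() {
        // Call back through the proxy instead of "this" so the REQUIRES_NEW advice applies.
        SomeService self = (SomeService) AopContext.currentProxy();
        self.doSomethingInNewTransaction();
        self.doSomethingInNewTransaction();
        self.doSomethingInNewTransaction();
    }

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void doSomethingInNewTransaction() {
        // ...
    }
}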
Switch to using AspectJ
As AspectJ does not use proxies, self-invocation is not a problem.
This is a very clean method, though it comes at the expense of introducing another framework. I've worked on a large project where AspectJ was introduced for this very reason.
Don't use #Transactional at all
Refactor your code to use manual transaction demarcation - possibly using the decorator pattern.
An option - but one that requires moderate refactoring, introduces additional framework ties and adds complexity - so probably not a preferred option.
My Advice
Usually splitting up the code is the best answer and can also be a good thing for separation of concerns. However, if I had a framework/application that heavily relied on nested transactions, I would consider using AspectJ to allow self-invocation.
As always when modelling and designing complex use cases - focus on understandable and maintainable design and code. If you prefer a certain pattern or design but it clashes with the underlying framework, consider if it's worth a complex workaround to shoehorn your design into the framework, or if you should compromise and conform your design to the framework where necessary. Don't fight the framework unless you absolutely have to.
My advice - if you can accomplish your goal with such an easy compromise as to split out into a few extra service classes - do it. It sounds a lot cheaper in terms of time, testing and agony than the alternative. And it sure sounds a lot easier to maintain and less of a headache for the next guy to take over.
I usually make it simple, so I split the code into two objects.
The alternative is to demarcate the new transaction yourself, if you need to keep everything in the same file, using a TransactionTemplate. A few more lines of code, but not more than defining a new bean. And it sometimes makes the point more obvious.
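For completeness, a small sketch of the TransactionTemplate approach (bean wiring omitted; the service just needs a PlatformTransactionManager injected):

import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionDefinition;
import org.springframework.transaction.support.TransactionTemplate;

public class SomeService {

    private final TransactionTemplate requiresNewTx;

    public SomeService(PlatformTransactionManager transactionManager) {
        this.requiresNewTx = new TransactionTemplate(transactionManager);
        this.requiresNewTx.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRES_NEW);
    }

    public void doSomething() {
        // Each callback runs in its own new transaction, even though these are plain local calls.
        requiresNewTx.execute(status -> { doSomethingInNewTransaction(); return null; });
        requiresNewTx.execute(status -> { doSomethingInNewTransaction(); return null; });
        requiresNewTx.execute(status -> { doSomethingInNewTransaction(); return null; });
    }

    private void doSomethingInNewTransaction() {
        // ...
    }
}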