What is the most common use for AOP in a Spring project? (Java)

After reviewing the AOP pattern, I'm overwhelmed by the number of ways it could be used in my Spring project, and by what to use it for.
I'd like to use it as an audit-log system for all of the financial business logic. It just seems easy to integrate, but I'd like to hear your take on this.
The question is: what other common uses of this pattern should I consider? I would not mind refactoring my current logic to work with AOP as long as there are benefits to it.

The most common usage is where your application has cross-cutting concerns, i.e. a piece of logic or code that would otherwise have to be written in multiple classes/layers.
This could vary based on your needs. Some very common examples are:
Transaction Management
Logging
Exception Handling (especially when you may want to have detailed traces or have some plan of recovering from exceptions)
Security aspects
Instrumentation
Hope that helps.
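For instance, a single aspect can cover both the logging and exception-tracing items above. A minimal sketch, assuming Spring AOP and a placeholder com.example.service package:

    import org.aspectj.lang.JoinPoint;
    import org.aspectj.lang.ProceedingJoinPoint;
    import org.aspectj.lang.annotation.AfterThrowing;
    import org.aspectj.lang.annotation.Around;
    import org.aspectj.lang.annotation.Aspect;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.springframework.stereotype.Component;

    @Aspect
    @Component
    public class ServiceLoggingAspect {

        private static final Logger log = LoggerFactory.getLogger(ServiceLoggingAspect.class);

        // Log entry and exit of every public method in the (placeholder) service package
        @Around("execution(public * com.example.service..*(..))")
        public Object logAround(ProceedingJoinPoint pjp) throws Throwable {
            log.debug("Entering {}", pjp.getSignature().toShortString());
            try {
                return pjp.proceed();
            } finally {
                log.debug("Leaving {}", pjp.getSignature().toShortString());
            }
        }

        // Produce a detailed trace whenever a service method throws
        @AfterThrowing(pointcut = "execution(public * com.example.service..*(..))", throwing = "ex")
        public void logFailure(JoinPoint jp, Throwable ex) {
            log.error("{} failed", jp.getSignature().toShortString(), ex);
        }
    }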

Besides logging/auditing and declarative transaction handling as mentioned by Axel, I would say another usage of AOP is as a request interceptor. For example, let's say you need all requests coming into a server to be intercepted so that you can do something with them (maybe to keep track of which app is sending which request to which other app or database, etc.).
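A rough sketch of such an interceptor, assuming proxy-based Spring AOP over @RestController beans (the log format is just an illustration):

    import java.util.Arrays;

    import org.aspectj.lang.JoinPoint;
    import org.aspectj.lang.annotation.Aspect;
    import org.aspectj.lang.annotation.Before;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.springframework.stereotype.Component;

    @Aspect
    @Component
    public class RequestTrackingAspect {

        private static final Logger log = LoggerFactory.getLogger(RequestTrackingAspect.class);

        // Runs before every handler method of any class annotated with @RestController
        @Before("within(@org.springframework.web.bind.annotation.RestController *)")
        public void trackIncomingRequest(JoinPoint jp) {
            log.info("Incoming request handled by {} with arguments {}",
                    jp.getSignature().toShortString(),
                    Arrays.toString(jp.getArgs()));
        }
    }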

The most common use is probably declarative transaction handling using @Transactional.
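For example, a minimal sketch of a transactional service method (AccountRepository is a hypothetical interface used only for illustration):

    import java.math.BigDecimal;

    import org.springframework.stereotype.Service;
    import org.springframework.transaction.annotation.Transactional;

    @Service
    public class TransferService {

        // Hypothetical repository interface; a real one would talk to the database
        public interface AccountRepository {
            void debit(long accountId, BigDecimal amount);
            void credit(long accountId, BigDecimal amount);
        }

        private final AccountRepository accounts;

        public TransferService(AccountRepository accounts) {
            this.accounts = accounts;
        }

        // Spring's transaction aspect opens a transaction before this method and
        // commits it afterwards (or rolls it back on a runtime exception).
        @Transactional
        public void transfer(long fromId, long toId, BigDecimal amount) {
            accounts.debit(fromId, amount);
            accounts.credit(toId, amount);
        }
    }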

Using AOP for audit logging is a perfectly valid use of AOP. You can turn it off for testing and change it as requirements change in production.
The only downside in this case is if you were planning on doing the audit log via SQL. It may be more performant to implement this kind of auditing as triggers directly in the DB.
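If you do go the AOP route, a hedged sketch of an audit aspect driven by a hypothetical @Audited marker annotation (the com.example.audit package and the logger name are assumptions, and the two types would live in separate files):

    // Both types are assumed to live in a com.example.audit package.
    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;
    import java.util.Arrays;

    import org.aspectj.lang.JoinPoint;
    import org.aspectj.lang.annotation.AfterReturning;
    import org.aspectj.lang.annotation.Aspect;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.springframework.stereotype.Component;

    // Hypothetical marker annotation to put on the financial business methods
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    public @interface Audited {
    }

    // Writes an audit record after every successful call to an @Audited method
    @Aspect
    @Component
    public class AuditAspect {

        private static final Logger audit = LoggerFactory.getLogger("AUDIT");

        @AfterReturning(pointcut = "@annotation(com.example.audit.Audited)", returning = "result")
        public void recordAudit(JoinPoint jp, Object result) {
            audit.info("method={} args={} result={}",
                    jp.getSignature().toShortString(),
                    Arrays.toString(jp.getArgs()),
                    result);
        }
    }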

You can use AOP for security concerns, for example to allow/disallow method access. Another use of AOP is measuring your application's performance.
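As a rough illustration of the method-access idea (not a replacement for Spring Security's own @PreAuthorize support; the com.example.admin package and the ROLE_ADMIN check are assumptions):

    import org.aspectj.lang.JoinPoint;
    import org.aspectj.lang.annotation.Aspect;
    import org.aspectj.lang.annotation.Before;
    import org.springframework.security.access.AccessDeniedException;
    import org.springframework.security.core.Authentication;
    import org.springframework.security.core.context.SecurityContextHolder;
    import org.springframework.stereotype.Component;

    @Aspect
    @Component
    public class AdminOnlyAspect {

        // Deny access to any method in the (placeholder) admin package unless the
        // current user has the ROLE_ADMIN authority.
        @Before("execution(* com.example.admin..*(..))")
        public void checkAdminRole(JoinPoint jp) {
            Authentication auth = SecurityContextHolder.getContext().getAuthentication();
            boolean isAdmin = auth != null && auth.getAuthorities().stream()
                    .anyMatch(a -> "ROLE_ADMIN".equals(a.getAuthority()));
            if (!isAdmin) {
                throw new AccessDeniedException("Not allowed to call " + jp.getSignature());
            }
        }
    }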

It can be used to expose custom metrics (instrumentation of a service) for alerting and monitoring, using client libraries like Dropwizard or Prometheus.
It helped us to:
Keep the instrumentation code (which is not business logic) outside of the actual business logic
Keep these cross-cutting concerns in one single place.
Declaratively apply them wherever required.
For example, to expose:
Total bytes returned by a REST API (can be done in after advice)
Total time taken by a REST API, i.e. server-in to server-out time (can be done using around advice; see the sketch below).
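A minimal sketch of the latency case using the Prometheus simpleclient (the metric name and pointcut are assumptions; the bytes-returned metric would follow the same shape with after advice):

    import io.prometheus.client.Histogram;

    import org.aspectj.lang.ProceedingJoinPoint;
    import org.aspectj.lang.annotation.Around;
    import org.aspectj.lang.annotation.Aspect;
    import org.springframework.stereotype.Component;

    @Aspect
    @Component
    public class ApiMetricsAspect {

        // Histogram of server-side latency in seconds, labelled by handler method name
        private static final Histogram LATENCY = Histogram.build()
                .name("rest_api_latency_seconds")
                .help("Server-side latency of REST API methods")
                .labelNames("method")
                .register();

        @Around("within(@org.springframework.web.bind.annotation.RestController *)")
        public Object measure(ProceedingJoinPoint pjp) throws Throwable {
            Histogram.Timer timer = LATENCY.labels(pjp.getSignature().getName()).startTimer();
            try {
                return pjp.proceed();
            } finally {
                timer.observeDuration();
            }
        }
    }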

As an answer slightly different from what @Axel said: using it to automatically intercept all of your data access calls and apply transactions appropriately is phenomenal. I have mine set up to wrap all calls to my dao package that don't start with "get" in a transaction, while anything performed in a method starting with "get" is treated as read-only. It's fantastic because, aside from the initial setup, I don't have to worry about it; I just follow the naming convention.
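A rough annotation-based equivalent of that kind of setup (the pointcut expression and dao package are placeholders; the classic <tx:advice> XML configuration achieves the same thing):

    import java.util.Properties;

    import org.springframework.aop.Advisor;
    import org.springframework.aop.aspectj.AspectJExpressionPointcut;
    import org.springframework.aop.support.DefaultPointcutAdvisor;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.context.annotation.EnableAspectJAutoProxy;
    import org.springframework.transaction.PlatformTransactionManager;
    import org.springframework.transaction.interceptor.TransactionInterceptor;

    // An auto-proxy creator must be active so the Advisor bean below is applied.
    @Configuration
    @EnableAspectJAutoProxy
    public class DaoTransactionConfig {

        // Methods starting with "get" run read-only; everything else gets a full transaction.
        @Bean
        public TransactionInterceptor txAdvice(PlatformTransactionManager txManager) {
            Properties attributes = new Properties();
            attributes.setProperty("get*", "PROPAGATION_REQUIRED,readOnly");
            attributes.setProperty("*", "PROPAGATION_REQUIRED");
            return new TransactionInterceptor(txManager, attributes);
        }

        // Apply the advice to every method in the (placeholder) dao package.
        @Bean
        public Advisor daoTxAdvisor(TransactionInterceptor txAdvice) {
            AspectJExpressionPointcut pointcut = new AspectJExpressionPointcut();
            pointcut.setExpression("execution(* com.example.dao..*(..))");
            return new DefaultPointcutAdvisor(pointcut, txAdvice);
        }
    }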

Related

Adapt standalone application to use Spring

I've read about how to use Spring in standalone applications, but I'm not sure what the approach should be for refactoring a large code base of 120,000 lines so that the change is as gradual as possible.
As far as I understand Spring won't inject anything in an object unless that object is managed by the application context. If this is true, I think I have two choices:
1- Start refactoring from the main class down, but this means complicated scenarios will appear soon.
2- Share the application context statically so that I can start refactoring the simplest things, escalating in difficulty as I'm ready.
I'm not a fan of static access so I would try to avoid that choice, but I don't know if it's a good idea to start with the huge classes that are loaded at startup. Any ideas on the best approach?
By the way, is it OK to inject Swing components until I can fix the dependencies?
I think that before approaching such a big technology change, it may be a good idea to start by asking yourself whether you are following the architecture that Spring guides you towards when you use it from the beginning.
Therefore, is your application based on the MVC pattern?
If not, maybe your product is not yet ready to be refactored to use Spring. In this case, I would suggest refactoring the product design first, so that it complies with the MVC architectural pattern.
If yes, then I would proceed with a use-case-based approach, starting from the use cases that required a complicated design and implementation.
E.g. I would look for very important entity classes or business classes containing a lot of logic. This way, you can reduce the risk of doing a lot of refactoring before realizing that, for example, Spring is not a good fit for the core of your product.
After identifying the most critical use case, you can start to experiment with how refactoring works on your current product by introducing Spring end to end on a single critical scenario (user input - business logic - entity manipulation - persistence). If you are successful, then you keep refactoring; otherwise you can go back and try to understand where you need to change your current product before introducing Spring.
Of course, this works when you have some experience with Spring and you do not have to cope with newcomer's issues. If you are new to Spring, then I would recommend getting some experience with Spring before starting the adventure of refactoring such a big project.
Start simply and wire new code/classes with Spring. You'll amend your existing main method to initialise the ApplicationContext and load your new feature. Over time, as change requests arrive, you'll refactor and migrate the existing codebase to use Spring dependency injection.
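A minimal sketch of what that amended main method might look like (AppConfig and ReportService are hypothetical stand-ins for the newly migrated code):

    import org.springframework.context.annotation.AnnotationConfigApplicationContext;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    public class LegacyMain {

        // Hypothetical configuration holding only the newly migrated beans
        @Configuration
        static class AppConfig {
            @Bean
            ReportService reportService() {
                return new ReportService();
            }
        }

        // Hypothetical new feature wired through Spring while the rest of the app stays as-is
        static class ReportService {
            void run() {
                System.out.println("report generated");
            }
        }

        public static void main(String[] args) {
            AnnotationConfigApplicationContext ctx =
                    new AnnotationConfigApplicationContext(AppConfig.class);
            ctx.getBean(ReportService.class).run();
            // ...the existing, not-yet-migrated start-up code continues unchanged...
            ctx.close();
        }
    }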

Using AOP with Spring Security ACL

I'm quite new to the Spring Security framework, and especially ACLs. After a few hours of reading I think I grasped most of what I need to do to start securing my application.
However something bothers me: while it's very easy to find usage descriptions on how to read the ACL permissions (via @PreAuthorize for example), it starts getting confusing when you want to create and persist them.
The Spring Security manual tells us they don't want to bother with any standard way, but we are encouraged to use AOP. Many tutorials and answers here rather use the AclService directly inside their business code, destroying the "separation of concerns" principle in the process.
So what should I do? How do the pros do it? Should I try defining pointcuts on custom annotations for creation/deletion of ACL entries? Or should I "pollute" my code with ACL concerns?
Alright, so after one week of work I now understand much better how these things work.
First, one should try to stick with the simple, naive way of using ACLs: using the AclService directly inside each service layer method. Building an abstraction helps a lot (basically a grantAccess(username, object, permission, ...) method in a @Service bean).
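A hedged sketch of that abstraction (the class name, method signature and transactional setup are illustrative, not taken from the Spring Security manual):

    import java.io.Serializable;

    import org.springframework.security.acls.domain.ObjectIdentityImpl;
    import org.springframework.security.acls.domain.PrincipalSid;
    import org.springframework.security.acls.model.MutableAcl;
    import org.springframework.security.acls.model.MutableAclService;
    import org.springframework.security.acls.model.NotFoundException;
    import org.springframework.security.acls.model.ObjectIdentity;
    import org.springframework.security.acls.model.Permission;
    import org.springframework.stereotype.Service;
    import org.springframework.transaction.annotation.Transactional;

    @Service
    public class AclPermissionService {

        private final MutableAclService aclService;

        public AclPermissionService(MutableAclService aclService) {
            this.aclService = aclService;
        }

        // Grant a permission (e.g. BasePermission.READ) on a domain object to a user
        @Transactional
        public void grantAccess(String username, Class<?> type, Serializable id, Permission permission) {
            ObjectIdentity oid = new ObjectIdentityImpl(type, id);
            MutableAcl acl;
            try {
                acl = (MutableAcl) aclService.readAclById(oid);
            } catch (NotFoundException e) {
                acl = aclService.createAcl(oid);
            }
            acl.insertAce(acl.getEntries().size(), permission, new PrincipalSid(username), true);
            aclService.updateAcl(acl);
        }
    }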
Once everything is secured with ACL writes and @PreAuthorize/@PostAuthorize/@Secured EL tests, then you can start thinking about AOP to clean up your code from all the security concerns. You make up a list of service methods using ACL writes and you add advices to them, so that you have one central place where all the security is handled.
Spring Security ACL is extremely easy to set up and understand, even on an existing project with existing users (you'll have to build a migration script).

CDI interceptors and memcache

I was reading about interceptors and AOP, and the way they can unclutter your code and externalize cross-cutting concerns into aspects. I instantly thought of CDI and the use of custom interceptors to hit a cache every time one tries to access the database.
Is there any library that already implements this and supports memcache? I think calls to the EntityManager should be intercepted.
IMHO, if you want to go that way, you need a pretty good reason to justify why Hibernate Cache / JBoss Cache (just guessing about your technology stack, but there are products/solutions for almost all stacks) won't fit your needs.
You certainly don't want to reinvent the wheel by developing your own query or object cache, do you?
In general, using memcached to directly avoid DB requests is very difficult to get right and inefficient. You really want to cache higher level concepts such as DAO -> DTO boundaries.
I've used AOP to inject cache invalidation and observer management code into Java programs pretty successfully. AOP allows me to think about the reusability of different parts of my code in a different way. It doesn't mean I don't have to design these aspects, but it frees me of limitations and keeps me from cutting and pasting, etc.
So my recommendation would be to design this access pattern such that you have to do a bunch of work at each of these boundaries, and then design cross-cuts that inject that work at compile time.
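To make the interceptor idea concrete, here is a minimal CDI sketch; the ConcurrentHashMap stands in for a real memcached client, the @Cached binding annotation is hypothetical, and the two types would normally live in separate files:

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;
    import java.util.Arrays;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    import javax.interceptor.AroundInvoke;
    import javax.interceptor.Interceptor;
    import javax.interceptor.InterceptorBinding;
    import javax.interceptor.InvocationContext;

    // Hypothetical binding annotation: put @Cached on the DAO/repository methods to cache
    @InterceptorBinding
    @Target({ElementType.TYPE, ElementType.METHOD})
    @Retention(RetentionPolicy.RUNTIME)
    public @interface Cached {
    }

    // The interceptor itself; it must be enabled in beans.xml (or via @Priority) to run.
    @Cached
    @Interceptor
    public class CacheInterceptor {

        // A plain in-memory map standing in for a real memcached client
        private static final Map<String, Object> CACHE = new ConcurrentHashMap<>();

        @AroundInvoke
        public Object cache(InvocationContext ctx) throws Exception {
            String key = ctx.getMethod().getName() + Arrays.toString(ctx.getParameters());
            Object cached = CACHE.get(key);
            if (cached != null) {
                return cached;             // cache hit: skip the intercepted call entirely
            }
            Object result = ctx.proceed(); // cache miss: run the intercepted method
            if (result != null) {
                CACHE.put(key, result);
            }
            return result;
        }
    }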

Is there an acceptable way to keep these layers/dependencies separate?

I am currently struggling with whether or not I've achieved a good level of separation, or if I've missed the point somewhere, as I am relatively new to learning the disciplined side of development...
My goal when I started was to create a layer that was agnostic of any persistence mechanism - I called this data-api. I then implemented these interfaces using JDO, and called this project data-jdo. The logic layer ideally is aware only of data-api.
This is the point where I'm not sure what makes sense. The business logic layer has to be invoked somehow, right? So is the expectation that the implementation of the data-api (data-jdo, or something else depending on experimentation) is provided (is it appropriate to say "injected"?) by the invoker?
So the goal would be (largely for experience and not for productivity) to implement a data-jpa package that could be substituted in place of data-jdo. So the topmost layer (a web service, a generic main method as part of a tool, unit tests, whatever) is the one that chooses which implementation to use.
Should I be using some framework like Spring to allow me to choose which implementation of my data-api is used, via XML?
Sorry if that's a little vague... I guess the root question is, at what point does the consumer of an API depend on, supply, or become paired with, the implementation of that API? If the answer is or should be "never" then what is used to make sure everything is available at runtime and how does the consumer get an instance of whatever the "API" is describing with only interfaces?
I come from a .net background - not a Java one, so I'm afraid I can't help you with Java specifics.
The business logic layer has to be invoked somehow, right? So is the expectation that the implementation of the data-api (data-jdo, or something else depending on experimentation) is provided (appropriate to say/do injected?) by the invoker?
Yes. In the .NET world I use a Factory (as in an instance of the Factory pattern) that dynamically returns the data provider implementation (which one of those to use is set by config). The data provider is returned by the factory as an 'object' and it's up to the calling business logic code to cast it to the correct type - as specified by the interface that the business logic is working against.
I've got (another!) article on Dependency Injection for .NET which might help explain some of the issues, but I'm sure there are good Java-based ones around somewhere.
Should I be using some framework like Spring to allow me to choose which implementation of my data-api is used, via XML?
Probably. I'd say spend your time getting to grips with the concepts first, and worry about "best practice" after that. FYI, I learnt AJAX the hard way - by writing all the code myself. These days I'd run straight to a good framework, but I think I only have the confidence to do that after having really grokked the basics by doing some hard graft at the coal-face :)
... If the answer is or should be "never" then what...
Yeah - it's never. Use a Factory.
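In Java terms, the factory idea might look roughly like this (all names are placeholders; which implementation to return would normally come from configuration rather than a system property):

    public final class DaoFactory {

        public interface OrderDao {
            int countOrders();
        }

        private DaoFactory() {
        }

        // The business logic only ever sees the OrderDao interface
        public static OrderDao createOrderDao() {
            String impl = System.getProperty("data.impl", "jdo");
            if ("jpa".equals(impl)) {
                return new JpaOrderDao();
            }
            return new JdoOrderDao();
        }

        private static class JdoOrderDao implements OrderDao {
            public int countOrders() { return 0; } // placeholder implementation
        }

        private static class JpaOrderDao implements OrderDao {
            public int countOrders() { return 0; } // placeholder implementation
        }
    }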
Your data-api is a DAO interface layer, that's all your business (aka service) layer should know about persistence. And the presentation layer or any other layer above the business layer shouldn't have any "knowledge" of the DAO layer underneath.
To achieve that, relying on a framework like Spring is a good idea. The top level layer loads an application context which contains all the information for the framework to load the appropriate implementation.
For example, you could load applicationContext.xml from the front-end to use data-jdo, and load testApplicationContext.xml from the unit tests to use data-jpa.
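For instance, a hedged sketch of the front-end bootstrap (OrderDao and the bean wiring inside the XML files are hypothetical):

    import org.springframework.context.support.ClassPathXmlApplicationContext;

    public class FrontEndBootstrap {

        // Hypothetical DAO interface from the data-api layer
        public interface OrderDao {
            int countOrders();
        }

        public static void main(String[] args) {
            // applicationContext.xml would wire OrderDao to the data-jdo implementation;
            // a test could load testApplicationContext.xml to get the data-jpa one instead.
            ClassPathXmlApplicationContext ctx =
                    new ClassPathXmlApplicationContext("applicationContext.xml");
            OrderDao orders = ctx.getBean(OrderDao.class);
            System.out.println("orders: " + orders.countOrders());
            ctx.close();
        }
    }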

Would you use AOP for database transaction management?

A while back I wrote an application which used Spring AOP for defining which methods were transactional. I am now having second thoughts as to how much of a great idea this was; I have been hit a few times after a minor refactoring (changing method signatures, etc.), which of course doesn't become apparent until something actually goes wrong (and I have a logically inconsistent database).
So I'm interested in a few things:
Have other people decided to revert to explicit transaction management (e.g. via @Transactional annotations)?
Are there useful tools I can use as part of a build process to help identify whether anything has been "broken"?
If people are using AOP to manage transactions, what steps are they taking to avoid the mistakes I've made?
I'm using IntelliJ IDEA, which allows you to browse decorated methods and will refactor Spring XML config together with method name changes, but this is not always sufficient (adding a parameter to a method in the wrong place can affect whether an aspect fires, for example).
I am currently using declarative transaction management in the two Java projects I work on, specifying which methods need transactional scope with the @Transactional annotation. In my opinion, it is a good mix of flexibility and robustness: you are able to see which methods have transactional behavior via a simple text search, you can adjust isolation and propagation attributes by hand if needed, and the additional amount of typing is practically negligible.
On one of those projects, I have security/logging implemented via aspects and have occasionally stumbled on the same obstacles as you when renaming a method or changing signatures. In the worst case, I lost some logging data about users accessing contracts, and in one release, some user roles were not able to access all application features. Nothing major, but as far as database transactions go, I think it's simply not worth it, and it is better to type the @Transactional bit yourself. Spring does the hard part, anyway.
Regarding (1):
I found @Transactional a more practical solution in all the projects I have worked on in the past few years. In some very specific cases, however, I also had to use Spring AOP to allow the use of more than one JDBC connection / TransactionManager, because @Transactional is tied to a single transaction manager.
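For reference, @Transactional can at least select which single transaction manager it binds to via its qualifier value; a minimal sketch (the bean and method names are placeholders):

    import org.springframework.stereotype.Service;
    import org.springframework.transaction.annotation.Transactional;

    @Service
    public class ReportingService {

        // The value of @Transactional selects a specific transaction manager bean by qualifier
        // ("reportingTxManager" is a placeholder bean name).
        @Transactional("reportingTxManager")
        public void refreshReportingTables() {
            // work against the second data source goes here
        }
    }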
Regarding (2):
Having said that, in a mixed scenario, I do a lot of automated testing to find possibly broken code. I use Spring's AbstractTransactionalJUnit4SpringContextTests / AbstractTransactionalTestNGSpringContextTests to create my tests. It's been a very effective solution so far.
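A hedged sketch of such a test (the "account" table and the context location are hypothetical); each test method runs inside a transaction that Spring rolls back at the end:

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;
    import org.springframework.test.context.ContextConfiguration;
    import org.springframework.test.context.junit4.AbstractTransactionalJUnit4SpringContextTests;

    @ContextConfiguration("classpath:testApplicationContext.xml")
    public class AccountDaoTransactionTest extends AbstractTransactionalJUnit4SpringContextTests {

        @Test
        public void insertedRowIsVisibleInsideTheTestTransaction() {
            int before = countRowsInTable("account");
            jdbcTemplate.update("insert into account (id, balance) values (?, ?)", 99L, 0);
            assertEquals(before + 1, countRowsInTable("account"));
            // nothing is committed: the surrounding test transaction is rolled back automatically
        }
    }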
I tend to be more of a purist, but I try to keep any and all transaction management beyond a simple autocommit inside the database itself. Most databases are excellent at handling transaction management; after all, it's one of the key components of what a database is meant to do.
