I am new to Spring and learning Spring AOP. Two advantages of AOP are:
- Eliminating code scattering
- Avoiding code tangling
The first one makes sense to me: the same code would otherwise be duplicated in many classes, and by using an aspect we avoid that duplication and instead define a pointcut that determines where the code will be applied.
However, how do we avoid code tangling in Spring? I am not able to find a simple example that shows how AOP avoids code tangling.
Thanks.
"code tangling" means that one code fragment is responsible for more than one requirement.
AOP helps separating them.
For example you have the two requirements:
- Delete a User
- Every action that is done to an user needs to be logged
Now you can use AOP to separate the logging out into an aspect, and you end up with two code fragments (the delete function and the logging aspect) that are each responsible for only one requirement.
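As a minimal sketch of that separation (the class and aspect names here are hypothetical, and Spring AOP would additionally need @EnableAspectJAutoProxy and fully qualified type names in the pointcut):

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.AfterReturning;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;

@Service
class UserService {
    // Requirement 1 only: delete a user. No logging is tangled into this method.
    public void deleteUser(long userId) {
        // ... remove the user from the data store ...
    }
}

@Aspect
@Component
class UserLoggingAspect {
    // Requirement 2 only: log every action that is done to a user.
    @AfterReturning("execution(* UserService.*(..))")
    public void logUserAction(JoinPoint joinPoint) {
        System.out.println("User action performed: " + joinPoint.getSignature().getName());
    }
}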
Code tangling: a module in a software system may have to address several requirements simultaneously.
For example, developers often have to think about business logic, performance, synchronization, logging, and security at the same time. Such a multitude of requirements results in elements of each concern's implementation being present in the same module at the same time, resulting in code tangling.
I am looking to understand the pros and cons of centralizing all applications logs into separate files with AOP (e.g. with AspectJ).
Logging is known for being a cross-cutting concern. However, I have never met anyone who centralized all logs into a single file or a bunch of files. Therefore, I'm left wondering why.
What would be some of the pros and cons of doing it?
I am looking to understand the pros and cons of centralizing all applications logs into separate files with AOP (e.g. with AspectJ).
I will be using the terms AOP and AspectJ interchangeably; that said, AOP is the paradigm and AspectJ is an implementation of it, just as OOP is a paradigm and Java an implementation.
The benefits of centralizing cross-cutting concerns (CCCs) into their own modules (i.e., aspects) using AOP are similar to the benefits of modularizing concerns with OOP. Those benefits are described in the literature, in papers and books such as:
Aspect-Oriented Programming
Towards a Common Reference Architecture for Aspect-Oriented Modeling
Impact of aspect-oriented programming on software development efficiency and design quality: an empirical study.
AspectJ in Action: Enterprise AOP with Spring
and many others. One can summarize some of those benefits as follows:
reduced code duplication; instead of having a functionality (e.g., logging) duplicated in several different places, one has it in a single aspect. That functionality can then be applied to those several places using the concept of a pointcut;
reduced coupling between domain-related and crosscutting-related concerns (i.e., separation of concerns); for instance, removing the logging from the domain code follows the single-responsibility principle;
enhanced code reusability; because of the aforementioned separation of concerns, one increases the reusability both of the modules encapsulating the base code and of the modules encapsulating the logging;
dealing with code tangling and scattering issues, as illustrated in the image below.
Instead of having logging 1 and 2 tangled directly with the domain code, duplicated and scattered across separate modules (i.e., Class A and Class B), we have that logging-related functionality encapsulated in one aspect.
There are papers in the literature about the benefits of AOP regarding the modularization of cross-cutting concerns such as logging, namely:
S. Canditt and M. Gunter. Aspect oriented logging in a real-world system. In First AOSD Workshop on Aspects, Components, and Patterns for Infrastructure Software (AOSD-2002), March 2002.
The downsides of AOP that one can read about in the literature are those typical of learning and integrating a new technology, namely:
Hiring skilled programmers
Following an adoption path to ensure that you don’t risk the project by overextending yourself
Modifying the build and other development processes
Dealing with the availability of tools
Other downsides reported by papers such as:
An exploratory study of the effect of aspect-oriented programming on maintainability
Using aspect-orientation in industrial projects: appreciated or damned?
Using and reusing aspects in aspectj.
A Case Study Implementing Features Using AspectJ (this one is particularly good at showcasing some of the issues with using AOP)
can be summarized as:
having to learn a new paradigm;
lack of tool support;
the performance of the weaving;
making it harder to reason about the code, since the CCC-related code has been moved elsewhere. The same argument can be applied to subclassing or to the use of the decorator pattern, for instance. However, IDEs mitigate those problems, in the case of AspectJ by showing the join points that are being intercepted. Another solution is, for instance, to use annotations and intercept those annotations:
@Logging
public void some_method() { ... }
When the programmer looks at the annotation, they immediately know that the method is being intercepted by a pointcut. The downside of this approach is that you are, in a way, mixing the CCCs with the domain code again, albeit only in the form of an annotation.
On the other hand one can argue that since the domain-related concerns and CCCs are now encapsulated in different modules it is easier to reason about them in isolation.
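As a rough sketch of that annotation-driven style (the @Logging annotation and LoggingAspect below are hypothetical names, and the unqualified annotation name in the pointcut assumes everything lives in the same package):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Logging {
}

@Aspect
class LoggingAspect {
    // Intercepts the execution of any method carrying the @Logging annotation.
    @Around("@annotation(Logging)")
    public Object logAround(ProceedingJoinPoint pjp) throws Throwable {
        System.out.println("Entering " + pjp.getSignature());
        try {
            return pjp.proceed();
        } finally {
            System.out.println("Leaving " + pjp.getSignature());
        }
    }
}

The domain method keeps only the @Logging marker; everything else about the logging concern lives in the aspect.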
pointcut fragility; this problem is similar to the fragile base class problem. From the source one can read:
changes to the base-code can lead to join points incorrectly falling in or out of the scope of pointcuts.
For instance, adding new methods or changing their signatures might cause pointcuts to start intercepting them or to stop intercepting them. There are techniques to mitigate such problems. However, in my opinion the solution is tooling: when one renames a method in Java, one expects the IDE to safely apply those changes.
the granularity of the join point model. With AspectJ (and even more so with Spring AOP) it might be difficult to get the local context necessary for the logging, for instance local variables, which might force you to refactor your code so that you can expose the desired join points -- known in requirements engineering as scaffolding. On the other hand, refactoring your code might actually improve it. From "21. Why can't AspectJ pick out local variables (or array elements or ...)?" one can read:
Users have sometimes wanted AspectJ to pick out many more join points, including method-local field access, array-element access, loop iteration, and method parameter evaluation. Most of these have turned out not to make sense, for a variety of reasons:
- it is not a commonly-understood unit for Java programmers
- there are very few use-cases for advice on the join point
- a seemingly-insignificant change to the underlying program causes a change in the join point
- pointcuts can't really distinguish the join point in question
- the join point would differ too much for different implementations of AspectJ, or would only be implementable in one way
We prefer to be very conservative in the join point model for the language, so a new join point would have to be useful, sensible, and implementable. The most promising of the new join points proposed are for exception throws clauses and for synchronized blocks.
You need to evaluate whether, in your context, it pays off to add an extra layer (i.e., transversal modularization) to your code base, and also evaluate it against alternative approaches such as code generators, frameworks, or design patterns.
In the case of logging, some of the aforementioned issues are not as problematic, because if things go wrong... well, you just lose or add some logging.
Opinion-based part
Logging is known for being a cross-cutting concern. However, I have never met anyone who centralized all logs into a single file or a bunch of files. Therefore, I'm left wondering why.
Is separation of concerns a bad practice? (Unless you take it to the extreme, no.) I would definitely say it is not a bad practice. Is it worth it? Well, that is more complex and context dependent.
I personally hate to see a beautiful snippet of code mixed with logging functionality. IMO, given that software has to deal with so many variables other than the code per se, such as tight deadlines, keeping logging out of the base code tends not to get a high priority.
I'm currently working on a project in which I need to use AspectJ. In the documentation, for every aspect I write, I need to explain the reasons for using this aspect instead of just writing the code in the main program.
In general, I only think of reasons like code reuse or flexibility (meaning the program could work without the aspect, but the aspect makes the program more effective, for example by checking things that might cause trouble in the future), but I think that is not enough.
While searching for more reasons, I saw that many programmers wrote "cross-cutting" - what does this mean and why is it such an important reason?
EDIT:
This question was asked during my school days, when aspects were not so common in projects. Now, 3 years later and after a lot of backend programming in Java (Spring), I can answer it myself with a simple example: the @Repository aspect. This annotation (@Repository) is used on Java classes that directly access the database. For example, when an exception occurs in such a class, there is a handler for that exception and there is no need to add a try/catch block. It is not restricted to a particular class and does not care about the domain logic; it applies to all classes that want to interact with the database - this is a cross-cutting concern.
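For illustration, a minimal sketch (the class name UserRepository is hypothetical, and the exception handling assumes Spring's persistence exception translation post-processor is active):

import org.springframework.stereotype.Repository;

@Repository
public class UserRepository {

    public void deleteUser(long userId) {
        // ... JDBC/JPA code; provider-specific exceptions thrown here are translated
        // by the surrounding proxy into Spring's DataAccessException hierarchy,
        // so this class needs no try/catch of its own ...
    }
}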
Cross-cutting, in my eyes, is mainly about separation of concerns, i.e. you can separate the code that handles technical things (e.g. logging, authorization, transactions, etc.) from the code that handles the business domain.
But why and when is this useful?
It is especially useful if your development organisation also separates these things, i.e. if some developers are responsible for domain programming and others for the technical layers.
So different persons can write their code in different places, not disturbing each other.
If the same people write the different aspects, it may still be useful to have this separation of concerns, but it is not that urgent in this case, at least not so urgent that you want to mess with AspectJ, which introduces some additional complexity.
I am trying to have an annotation @FeatureDependent be used on methods to signal that the method requires certain things to be enabled in order for it to work. And I was wondering if it is possible to have a method called every time a method annotated with @FeatureDependent is called, which would check whether the criteria for calling that method are met.
It sounds like you are describing Aspect Oriented Programming (AOP). This technique allows you to address "cross-cutting" concerns: tasks like logging, security, and transaction management that tend to affect many methods in the same manner. Your use case sounds like a good fit for AOP.
There are two common approaches to AOP. The first is to create objects in a container (e.g. a Spring container). The container can then scan the class, detect any advice that needs to be applied, and apply the advice via dynamic proxies (Googling "Spring AOP" is a good place to start with this). The downside is that your components will need to be constructed by the container, so it makes more sense for larger components.
The second approach is an extra compilation step (sometimes done at compilation time, sometimes as a separate compilation phase, and sometimes by a weaving class loader) to wire in the additional behaviour. This is typically called "weaving", and AspectJ is a common library to look into for this.
Both approaches will allow you to apply "advice" (code run before and after a method invocation) based on annotations on an object. Explaining either in more detail would be beyond the scope of an SO answer, but I hope this gets you started.
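As a sketch of the first (proxy-based) approach under some assumptions - the @FeatureDependent annotation, the FeatureChecker helper, and the aspect are all hypothetical names, and a config class with @EnableAspectJAutoProxy is assumed - the advice could look like this:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface FeatureDependent {
    String value(); // name of the feature that must be enabled
}

@Aspect
@Component
class FeatureDependentAspect {

    // Runs around every method annotated with @FeatureDependent and binds the annotation.
    @Around("@annotation(featureDependent)")
    public Object checkFeature(ProceedingJoinPoint pjp, FeatureDependent featureDependent) throws Throwable {
        if (!FeatureChecker.isEnabled(featureDependent.value())) {
            throw new IllegalStateException("Feature not enabled: " + featureDependent.value());
        }
        return pjp.proceed();
    }
}

class FeatureChecker {
    // Hypothetical feature registry; replace with your real feature-flag lookup.
    static boolean isEnabled(String feature) {
        return true;
    }
}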
I should warn that AOP has gotten a bit of a reputation for complicating the flow of an application and tending to be difficult to understand and debug. As a result it has declined in popularity lately.
Another approach is to use something like Servlet Filters, basically a single choke point for all requests and workflows where you can apply various logging & security mechanisms. Such an approach tends to be a little easier to understand and involve a bit less "black magic".
Starting a new GWT application and wondering if I can get some advice from someone's experience.
I have a need for a lot of server-side functionality through RPC services...but I am wondering where to draw the line.
I can make a service for every little call or I can make fewer services which handle more operations.
Let's say I have Customer, Vendor and Administration services. I could make 3 services or a service for each function in each category.
I noticed that much of the service implementation does not provide compile-time help and is at times troublesome to get going, but it provides good modularity. When I have a larger service, I don't have the modularity I described, but I don't have the service creation issues and I reduce the entries in my web.xml file.
Is there a resource issue with using a lot of services? What is the best practice to determine what level of granularity to use?
In my opinion, you should make an RPC service for "logical" things.
in your example:
one for customers, another for vendors, and a third one for admin
That way, you keep the services grouped by meaning, and you will have only a few lines to maintain in the web.xml file (and this is good news :-)
More seriously, RPC services are usually wrappers around database calls, so you could even make a single 'MagicBlackBoxRpc' with a single web.xml entry and thousands of operations!
But making a separate RPC for admin operations, like you suggest, seems like a good thing.
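As a rough sketch of what one such "logical" service could look like (the interface name and methods are hypothetical; a matching CustomerServiceAsync interface and a servlet entry in web.xml would also be needed):

import com.google.gwt.user.client.rpc.RemoteService;
import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;

// All customer-related operations grouped into one RPC service.
@RemoteServiceRelativePath("customer")
public interface CustomerService extends RemoteService {
    String getCustomerName(long customerId);
    void updateCustomerEmail(long customerId, String email);
    void deleteCustomer(long customerId);
}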
Read general advice on "how big should a class be?", which is available in any decent software engineering book.
In my opinion:
One class = One Subject (i.e. a group of functions or behaviours that are related)
A class should not deal with more than one subject. For example:
Class PersonDao -> Subject: interface between the DB and Java code.
It WILL NOT:
- cache Person instances
- update fields automatically (for example, update the field 'lastModified')
- find the database
Why?
Because for all these other things, there will be other classes doing it! Respectively:
- a cache around the PersonDao is concerned with the efficient storage of information to avoid hitting the DB more often than necessary
- the Service class which uses the DAO is responsible for modifying anything that needs to be modified automagically.
- Finding the database is the responsibility of the DataSource (usually part of a framework like Spring), and your DAO should NOT be worried about that. It's not part of its subject.
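A rough sketch of that split, using hypothetical classes (PersonDao only talks to the database, while a separate decorator handles the caching):

import java.util.HashMap;
import java.util.Map;

class Person {
    final long id;
    Person(long id) { this.id = id; }
}

class PersonDao {
    // Subject: interface between the DB and Java code, nothing else.
    Person findById(long id) {
        // ... run the query against the DataSource and map the row to a Person ...
        return new Person(id);
    }
}

class CachingPersonDao {
    // Subject: avoid hitting the DB more often than necessary.
    private final PersonDao delegate;
    private final Map<Long, Person> cache = new HashMap<>();

    CachingPersonDao(PersonDao delegate) {
        this.delegate = delegate;
    }

    Person findById(long id) {
        return cache.computeIfAbsent(id, delegate::findById);
    }
}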
TDD is the answer
The need for this kind of separation becomes really clear when you do TDD (Test-Driven Development). Try to do TDD on bad code where a single class does all sorts of things! You can't even get started with one unit test! So this is my final hint: use TDD and that will tell you how big a class should be.
I think the thing to optimize for is that you can accomplish a result in one round trip to the server. I have an ad-hoc collection of methods on my service object, one for each situation the client finds itself in when it has to get something done. You do not want the client to RPC to the server several times in a row while the user is sitting there waiting.
REST makes things orthogonal, but orthogonality has a cost: there is a reason that the frequently used verbs in languages are irregular. In terms of maintaining a clean orthogonal structure in your app, make sure your schema is well designed; that is, each class should have semantics orthogonal to those of the other classes. When the semantics of each RPC call can be stated cleanly in the schema, there will be no confusion as to what they mean, even if they aren't REST-fully ideal.
As you may know, when you proxy an object - for example when you create a bean with transactional attributes for Spring/EJB, or even when you create a partial mock with some frameworks - the proxied object doesn't know about the proxy, so internal calls are not redirected through it and therefore not intercepted either...
That's why if you do something like this in Spring:
@Transactional
public void doSomething() {
    doSomethingInNewTransaction();
    doSomethingInNewTransaction();
    doSomethingInNewTransaction();
}

@Transactional(propagation = Propagation.REQUIRES_NEW)
public void doSomethingInNewTransaction() {
    ...
}
When you call doSomething, you expect to have 3 new transactions in addition to the main one, but actually, due to this problem you only get one...
So I wonder how you handle this kind of problem...
I'm actually in a situation where I must handle a complex transactional system, and I don't see any better way than splitting my service into many small services, so that I'm sure to pass through all the proxies...
That bothers me a lot because all the code belongs to the same functional domain and should not be split...
I've found this related question with interesting answers:
Spring - #Transactional - What happens in background?
Rob H says that we can inject the spring proxy inside the service, and call proxy.doSomethingInNewTransaction(); instead.
It's quite easy to do and it works, but I don't really like it...
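For reference, a sketch of that workaround under some assumptions (the bean name SomeService is hypothetical; the @Lazy self-injection hands the bean a reference to its own Spring proxy, so the internal calls go through the transactional interceptor):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Lazy;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class SomeService {

    @Autowired
    @Lazy
    private SomeService self; // resolved to the proxy, not the raw target instance

    @Transactional
    public void doSomething() {
        // calling through the proxy so REQUIRES_NEW is honoured for each call
        self.doSomethingInNewTransaction();
        self.doSomethingInNewTransaction();
        self.doSomethingInNewTransaction();
    }

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void doSomethingInNewTransaction() {
        // ... work that must run in its own transaction ...
    }
}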
Yunfeng Hou says this:
So I write my own version of CglibSubclassingInstantiationStrategy and proxy creator so that it will use CGLIB to generate a real subclass which delegates calls to its super rather than to another instance, which is what Spring is doing now. So I can freely annotate any method (as long as it is not private), and from wherever I call these methods, they will be taken care of. Well, I still have a price to pay: 1. I must list all annotations for which I want to enable the new CGLIB subclass creation. 2. I cannot annotate a final method, since I am now generating a subclass, so a final method cannot be intercepted.
What does he mean by "which spring is doing now"? Does this mean internal transactional calls are now intercepted?
What do you think is better?
Do you split your classes when you need some transactional granularity?
Or do you use some workaround like above? (please share it)
I'll talk about Spring and @Transactional, but the advice applies to many other frameworks as well.
This is an inherent problem with proxy-based aspects. It is discussed in the Spring documentation here:
http://static.springsource.org/spring/docs/3.0.x/spring-framework-reference/html/aop.html#aop-understanding-aop-proxies
There are a number of possible solutions.
Refactor your classes to avoid the self-invocation calls that bypass the proxy.
The Spring documentation describes this as "The best approach (the term best is used loosely here)".
Advantages of this approach are its simplicity and that there are no ties to any framework. However, it may not be appropriate for a very transaction-heavy code base, as you'd end up with many trivially small classes.
Internally in the class get a reference to the proxy.
This can be done by injecting the proxy or with a hard-coded AopContext.currentProxy() call (see the Spring docs above).
This method allows you to avoid splitting the classes, but in many ways negates the advantages of using the transactional annotation. My personal opinion is that this is one of those things that is a little ugly, but the ugliness is self-contained, and it might be the pragmatic approach if lots of transactions are used.
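A sketch of the hard-coded variant (assuming the proxy is exposed, e.g. via @EnableAspectJAutoProxy(exposeProxy = true), and a class-based proxy so the cast works):

import org.springframework.aop.framework.AopContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class SomeService {

    @Transactional
    public void doSomething() {
        // fetch the current proxy and call through it, so the REQUIRES_NEW
        // attribute on the target method is honoured
        SomeService proxy = (SomeService) AopContext.currentProxy();
        proxy.doSomethingInNewTransaction();
        proxy.doSomethingInNewTransaction();
        proxy.doSomethingInNewTransaction();
    }

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void doSomethingInNewTransaction() {
        // ... work that must run in its own transaction ...
    }
}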
Switch to using AspectJ
As AspectJ does not use proxies, self-invocation is not a problem.
This is a very clean method, though it comes at the expense of introducing another framework. I've worked on a large project where AspectJ was introduced for this very reason.
Don't use @Transactional at all
Refactor your code to use manual transaction demarcation - possibly using the decorator pattern.
An option - but one that requires moderate refactoring, introduces additional framework ties, and increases complexity - so probably not a preferred option.
My Advice
Usually splitting up the code is the best answer, and it can also be good for separation of concerns. However, if I had a framework/application that heavily relied on nested transactions, I would consider using AspectJ to allow self-invocation.
As always when modelling and designing complex use cases - focus on understandable and maintainable design and code. If you prefer a certain pattern or design but it clashes with the underlying framework, consider if it's worth a complex workaround to shoehorn your design into the framework, or if you should compromise and conform your design to the framework where necessary. Don't fight the framework unless you absolutely have to.
My advice - if you can accomplish your goal with such an easy compromise as to split out into a few extra service classes - do it. It sounds a lot cheaper in terms of time, testing and agony than the alternative. And it sure sounds a lot easier to maintain and less of a headache for the next guy to take over.
I usually make it simple, so I split the code into two objects.
The alternative is to demarcate the new transaction yourself, if you need to keep everything in the same file, using a TransactionTemplate. A few more lines of code, but not more than defining a new bean. And it sometimes makes the point more obvious.
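As a sketch of that TransactionTemplate approach (the class and method names follow the earlier example; the template is configured for REQUIRES_NEW so each callback runs in its own transaction):

import org.springframework.stereotype.Service;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionDefinition;
import org.springframework.transaction.annotation.Transactional;
import org.springframework.transaction.support.TransactionTemplate;

@Service
public class SomeService {

    private final TransactionTemplate newTxTemplate;

    public SomeService(PlatformTransactionManager txManager) {
        this.newTxTemplate = new TransactionTemplate(txManager);
        this.newTxTemplate.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRES_NEW);
    }

    @Transactional
    public void doSomething() {
        // each execute(...) call demarcates its own new transaction explicitly,
        // so proxies and self-invocation are no longer an issue
        newTxTemplate.execute(status -> { doSomethingInNewTransaction(); return null; });
        newTxTemplate.execute(status -> { doSomethingInNewTransaction(); return null; });
        newTxTemplate.execute(status -> { doSomethingInNewTransaction(); return null; });
    }

    private void doSomethingInNewTransaction() {
        // ... work that runs in its own transaction via the template callback ...
    }
}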