Saving entities in private methods - java

I have an Ingestion class that exposes a single method, ingest. This method processes each section of a passed-in form (section 1, section 2, etc.).
I have private methods for each section, saving the entity as it processes through. I'm aware that @Transactional has no effect on private methods; however, I do not want to expose these methods, but I would still like the functionality that @Transactional provides.
I'm looking to make sure each section completes in its own transaction. I could do this through AspectJ (as other SO answers have suggested) instead of Spring's out-of-the-box implementation, but I am trying to avoid that due to the system-wide changes it would cause.
Any thoughts on another approach?
The pseudo code provided below gives a general idea of the structure of the class:
public class Ingestion {

    // Autowired repositories
    ...

    @Transactional
    public void ingest(Form form) {
        this.processSection1(form);
        this.processSection2(form);
        this.processSection3(form);
    }

    @Transactional
    private void processSection1(Form form) {
        // do specific section 1 logic
        section1Repo.save(form);
    }

    @Transactional
    private void processSection2(Form form) {
        // do specific section 2 logic
        section2Repo.save(form);
    }

    @Transactional
    private void processSection3(Form form) {
        // do specific section 3 logic
        section3Repo.save(form);
    }
}
=========================================================================
This is not a duplicate question as marked in the comments. I know @Transactional doesn't work on private methods. My question is more along the lines of 'how do we get around this Spring AOP issue without having to use AspectJ?'

The reason this doesn't work is that annotations like @Transactional add functionality that is applied by Spring's proxy object, which wraps the actual object. But when you call a private method on an object with the this keyword, you're going straight to the real object and bypassing the proxy.
One way to solve this is to @Autowire the object into itself and make the transactional calls via that autowired variable. The call then goes to the Spring-managed proxy instead of the bare object (note that the proxy still cannot intercept private methods, so the section methods would have to become non-private, typically public).
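Something along these lines might work as a minimal sketch of that idea (names are illustrative, and the section methods are public here for the reason above):

@Service
public class Ingestion {

    // Autowired repositories
    ...

    // inject the Spring proxy of this same bean; @Lazy avoids a circular-reference problem at startup
    @Lazy
    @Autowired
    private Ingestion self;

    public void ingest(Form form) {
        // these calls go through the proxy, so each section runs in its own transaction
        self.processSection1(form);
        self.processSection2(form);
        self.processSection3(form);
    }

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void processSection1(Form form) {
        // section 1 logic
        section1Repo.save(form);
    }

    // processSection2 and processSection3 follow the same pattern
}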

You may extract these three processing methods into another class, make them public, but set the class constructor's access level to package-local (not private, since Spring can't proxy classes with private constructors), so that classes from other packages cannot access these methods simply because they are not able to instantiate the class. It doesn't hide these methods completely, but it may fit your needs. This trick can be done with an inner class as well (note that it must be declared with package-local access).
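A rough sketch of that approach, with hypothetical names:

@Service
public class SectionProcessor {

    private final Section1Repo section1Repo;

    // package-local constructor: classes outside this package cannot instantiate it,
    // but Spring (and the generated proxy, which lives in the same package) still can
    SectionProcessor(Section1Repo section1Repo) {
        this.section1Repo = section1Repo;
    }

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void processSection1(Form form) {
        // section 1 logic
        section1Repo.save(form);
    }

    // processSection2 and processSection3 in the same style
}

The Ingestion class would then autowire SectionProcessor and call these methods through the proxy, one transaction per section.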
To hide these methods completely, you may make use of programmatic transaction management by injecting the TransactionTemplate bean and using its execute method inside the private methods. This feature comes out of the box. See more here.
Also, take note that for a new transaction to be created when method A calls method B, method B must be declared @Transactional with propagation type REQUIRES_NEW. Otherwise, nested methods are invoked in the same transaction started by the initial calling method.
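A sketch of the TransactionTemplate variant, which keeps the section methods private (PlatformTransactionManager and TransactionTemplate are standard Spring types; the repository names come from the question):

import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionDefinition;
import org.springframework.transaction.support.TransactionTemplate;

public class Ingestion {

    // autowired repositories omitted

    private final TransactionTemplate requiresNewTx;

    public Ingestion(PlatformTransactionManager txManager) {
        // each execute(...) call below runs its callback in a new, separate transaction
        this.requiresNewTx = new TransactionTemplate(txManager);
        this.requiresNewTx.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRES_NEW);
    }

    public void ingest(Form form) {
        processSection1(form);
        // processSection2, processSection3 ...
    }

    private void processSection1(Form form) {
        requiresNewTx.execute(status -> {
            // section 1 logic
            section1Repo.save(form);
            return null;
        });
    }
}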

Related

Spring cache: applying caching on a generic method by dynamically getting the cache name based on the return type

The method getById is located in an abstract class named AbstractCachedResult.
public abstract class AbstractCachedResult<T extends BaseResource> {

    @Cacheable(value = "dynamicName")
    public T getById(String id) {
        //program logic
    }
}
Many other service classes will inherit from this class. For example:
public class UserService extends AbstractCachedResult<User> {
    //program logic
}
Since I am setting @Cacheable(value = "dynamicName") in the abstract class, I wouldn't be able to know the type of the class returned by the method. I need to be able to get the name of the class dynamically so that, for each method invocation from an inheriting class, the correct cache is used.
I have come across another post. There, they pass the entity class name as a parameter, which I cannot do. I need to get the type of the returned data dynamically so that I could use the @Caching annotation, which is the #2 solution. Is there a way to do this?
Arguably, the simplest and most obvious solution, especially for maintainers, would be to override the getById(:String) method in the subclass, like so:
public class UserService extends AbstractCachedResult<User> {

    @Override
    @Cacheable("UserCache")
    public User getById(String id) {
        return super.getById(id);
    }

    // additional logic
}
Alternatively, you might be able to implement a "custom" CacheResolver (see doc and Javadoc) that is able to inspect the class generic signature of the (target) subclass.
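A hedged sketch of what such a resolver could look like (the cache-naming convention here is made up, and it assumes the generic parameter can be resolved from the target's class with Spring's ResolvableType):

import java.util.Collection;
import java.util.Collections;
import org.springframework.cache.Cache;
import org.springframework.cache.CacheManager;
import org.springframework.cache.interceptor.CacheOperationInvocationContext;
import org.springframework.cache.interceptor.CacheResolver;
import org.springframework.core.ResolvableType;

public class GenericTypeCacheResolver implements CacheResolver {

    private final CacheManager cacheManager;

    public GenericTypeCacheResolver(CacheManager cacheManager) {
        this.cacheManager = cacheManager;
    }

    @Override
    public Collection<? extends Cache> resolveCaches(CacheOperationInvocationContext<?> context) {
        // Resolve T of AbstractCachedResult<T> against the concrete subclass (e.g. UserService -> User)
        Class<?> entityType = ResolvableType
                .forClass(AbstractCachedResult.class, context.getTarget().getClass())
                .getGeneric(0)
                .resolve();
        // Use the entity type's simple name as the cache name ("User", "Order", ...);
        // this assumes the CacheManager knows (or creates) a cache with that name
        return Collections.singleton(cacheManager.getCache(entityType.getSimpleName()));
    }
}

You would then register the resolver as a bean and reference it from the abstract class, e.g. @Cacheable(cacheResolver = "genericTypeCacheResolver").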
Finally, under the hood, Spring's Cache Abstraction is implemented with Spring AOP. For real, low-level control, it should be possible to write a custom AOP Interceptor not unlike Spring's default CacheInterceptor (Javadoc). Your own AOP Advice could even be ordered relative to the Cacheable Advice, if needed.
Honestly, I think the first option I presented above is the best approach. Not every service class may need to cache the result of the getById(:String) operation. Caching really depends on the transactional nature of the data and how frequently the data changes.
Use your best judgement.

Spring @Transactional annotation is not working in the provider class which is a subclass of AbstractUserDetailsAuthenticationProvider [duplicate]

I want to know what actually happens when you annotate a method with @Transactional?
Of course, I know that Spring will wrap that method in a Transaction.
But, I have the following doubts:
I heard that Spring creates a proxy class? Can someone explain this in more depth? What actually resides in that proxy class? What happens to the actual class? And how can I see Spring's created proxy class?
I also read in Spring docs that:
Note: Since this mechanism is based on proxies, only 'external' method calls coming in through the proxy will be intercepted. This means that 'self-invocation', i.e. a method within the target object calling some other method of the target object, won't lead to an actual transaction at runtime even if the invoked method is marked with @Transactional!
Source: http://static.springsource.org/spring/docs/2.0.x/reference/transaction.html
Why will only external method calls be under a transaction and not self-invoked methods?
This is a big topic. The Spring reference doc devotes multiple chapters to it. I recommend reading the ones on Aspect-Oriented Programming and Transactions, as Spring's declarative transaction support uses AOP at its foundation.
But at a very high level, Spring creates proxies for classes that declare @Transactional on the class itself or on members. The proxy is mostly invisible at runtime. It provides a way for Spring to inject behaviors before, after, or around method calls into the object being proxied. Transaction management is just one example of the behaviors that can be hooked in. Security checks are another. And you can provide your own, too, for things like logging. So when you annotate a method with @Transactional, Spring dynamically creates a proxy that implements the same interface(s) as the class you're annotating. And when clients make calls into your object, the calls are intercepted and the behaviors injected via the proxy mechanism.
Transactions in EJB work similarly, by the way.
As you observed, though, the proxy mechanism only works when calls come in from some external object. When you make an internal call within the object, you're really making a call through the this reference, which bypasses the proxy. There are ways of working around that problem, however. I explain one approach in this forum post, in which I use a BeanFactoryPostProcessor to inject an instance of the proxy into "self-referencing" classes at runtime. I save this reference to a member variable called me. Then, if I need to make internal calls that require a change in the transaction status of the thread, I direct the call through the proxy (e.g. me.someMethod()). The forum post explains in more detail.
Note that the BeanFactoryPostProcessor code would be a little different now, as it was written back in the Spring 1.x timeframe. But hopefully it gives you an idea. I have an updated version that I could probably make available.
When Spring loads your bean definitions, and has been configured to look for @Transactional annotations, it will create these proxy objects around your actual bean. These proxy objects are instances of classes that are auto-generated at runtime. The default behaviour of these proxy objects when a method is invoked is just to invoke the same method on the "target" bean (i.e. your bean).
However, the proxies can also be supplied with interceptors, and when present these interceptors will be invoked by the proxy before it invokes your target bean's method. For target beans annotated with @Transactional, Spring will create a TransactionInterceptor, and pass it to the generated proxy object. So when you call the method from client code, you're calling the method on the proxy object, which first invokes the TransactionInterceptor (which begins a transaction), which in turn invokes the method on your target bean. When the invocation finishes, the TransactionInterceptor commits/rolls back the transaction. It's transparent to the client code.
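Conceptually, the generated proxy behaves roughly like the hand-written illustration below. This is not Spring's actual generated code, and MyService is a made-up class name; it just shows the shape of what the proxy plus TransactionInterceptor do:

import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionStatus;
import org.springframework.transaction.support.DefaultTransactionDefinition;

// hand-written illustration of what the auto-generated proxy does; not real Spring code
public class MyServiceProxy extends MyService {

    private final MyService target;
    private final PlatformTransactionManager txManager;

    MyServiceProxy(MyService target, PlatformTransactionManager txManager) {
        this.target = target;
        this.txManager = txManager;
    }

    @Override
    public void someTransactionalMethod() {
        TransactionStatus tx = txManager.getTransaction(new DefaultTransactionDefinition());
        try {
            target.someTransactionalMethod();   // delegate to your actual bean
            txManager.commit(tx);
        } catch (RuntimeException e) {
            txManager.rollback(tx);
            throw e;
        }
    }
}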
As for the "external method" thing, if your bean invokes one of its own methods, then it will not be doing so via the proxy. Remember, Spring wraps your bean in the proxy, your bean has no knowledge of it. Only calls from "outside" your bean go through the proxy.
Does that help?
As a visual person, I like to weigh in with a sequence diagram of the proxy pattern. If you don't know how to read the arrows, I read the first one like this: Client executes Proxy.method().
The client calls a method on the target from his perspective, and is silently intercepted by the proxy
If a before aspect is defined, the proxy will execute it
Then, the actual method (target) is executed
After-returning and after-throwing are optional aspects that are executed after the method returns and/or if the method throws an exception
After that, the proxy executes the after aspect (if defined)
Finally the proxy returns to the calling client
(I was allowed to post the photo on condition that I mentioned its origins. Author: Noel Vaes, website: https://www.noelvaes.eu)
The simplest answer is:
On whichever method you declare @Transactional, the transaction boundary starts when the method begins and ends when the method completes.
If you are making JPA calls, then all commits fall within this transaction boundary.
Let's say you are saving entity1, entity2 and entity3. If an exception occurs while saving entity3, then since entity1 and entity2 are in the same transaction, entity1 and entity2 will be rolled back along with entity3.
Transaction :
entity1.save
entity2.save
entity3.save
Any exception will result in a rollback of all the JPA operations against the DB. Internally, JPA transactions are used by Spring.
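For illustration (the entity and repository names are made up):

@Transactional
public void saveAll() {
    repo.save(entity1);   // joins the surrounding transaction
    repo.save(entity2);   // joins the surrounding transaction
    repo.save(entity3);   // a RuntimeException here rolls back entity1 and entity2 as well
}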
All existing answers are correct, but I feel they cannot do this complex topic justice.
For a comprehensive, practical explanation, you might want to have a look at this Spring @Transactional In-Depth guide, which tries its best to cover transaction management in ~4000 simple words, with a lot of code examples.
It may be late, but I came across something that explains your concern about the proxy (only 'external' method calls coming in through the proxy will be intercepted) nicely.
For example, you have a class that looks like this
#Component("mySubordinate")
public class CoreBusinessSubordinate {
public void doSomethingBig() {
System.out.println("I did something small");
}
public void doSomethingSmall(int x){
System.out.println("I also do something small but with an int");
}
}
and you have an aspect that looks like this:
@Component
@Aspect
public class CrossCuttingConcern {

    @Before("execution(* com.intertech.CoreBusinessSubordinate.*(..))")
    public void doCrossCutStuff() {
        System.out.println("Doing the cross cutting concern now");
    }
}
When you execute it like this:
@Service
public class CoreBusinessKickOff {

    @Autowired
    CoreBusinessSubordinate subordinate;

    // getters/setters

    public void kickOff() {
        System.out.println("I do something big");
        subordinate.doSomethingBig();
        subordinate.doSomethingSmall(4);
    }
}
Results of calling kickOff() with the code above:
I do something big
Doing the cross cutting concern now
I did something small
Doing the cross cutting concern now
I also do something small but with an int
but when you change your code to
#Component("mySubordinate")
public class CoreBusinessSubordinate {
public void doSomethingBig() {
System.out.println("I did something small");
doSomethingSmall(4);
}
public void doSomethingSmall(int x){
System.out.println("I also do something small but with an int");
}
}
public void kickOff() {
System.out.println("I do something big");
subordinate.doSomethingBig();
//subordinate.doSomethingSmall(4);
}
You see, the method internally calls another method, so that inner call won't be intercepted, and the output would look like this:
I do something big
Doing the cross cutting concern now
I did something small
I also do something small but with an int
You can bypass this by doing the following:
public void doSomethingBig() {
    System.out.println("I did something small");
    //doSomethingSmall(4);
    ((CoreBusinessSubordinate) AopContext.currentProxy()).doSomethingSmall(4);
}
Code snippets taken from:
https://www.intertech.com/Blog/secrets-of-the-spring-aop-proxy/
The page doesn't exist anymore.
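A side note on the AopContext.currentProxy() trick above: as far as I know, it only works if proxy exposure is enabled, for example:

@Configuration
@EnableAspectJAutoProxy(exposeProxy = true)
public class AopConfig {
}

(or expose-proxy="true" on <aop:aspectj-autoproxy/> when using XML configuration).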

Configuring state of mocked object for different scenarios

I have a DAO and a service class. In the service class CapService, I've @Autowired a reference to the DAO class CapDAO. The CapDAO class has a private instance field of type int that has its value injected from a properties file using the @Value annotation.
class CapDAO {

    @Value("${someProperty}")
    private int expiryTime;
}

class CapService {

    @Autowired
    private CapDAO capDAO;
}
There is a method, retrieveCap(), in the CapDAO class, which retrieves the caps from the database based on the expiryTime. That method is invoked from another method in the CapService class.
The CapService class uses the list returned from the DAO method to create another object wrapping that list, and finally returns that data structure.
Now, I'm testing a scenario using the Mockito framework. I have two scenarios. In both of them, I want to invoke the method of the CapService class, which will get me the object. The list retrieved from the database will depend upon the value of expiryTime in the CapDAO class, and so will the content of the object returned by the CapService method.
In the test, I'm invoking the method in the service class and checking the value returned. Since the DAO reads expiryTime from the properties file, both test scenarios cannot pass with the same configured value. I have to have two differently configured DAO instances injected into the service class.
So, my question is: is there any way I can configure the expiryTime in the CapDAO class, to create two different instances, or maybe in a single instance only, and inject those into CapService based on the scenario? No, I don't have any setter for expiryTime. Yes, I know I can use reflection, but I would like to keep that as my last resort.
Short answer
Reflection is the easiest possibility; you can simply use ReflectionTestUtils. Note: if you have an interface which CapDAO implements, you also need AopUtils.
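A sketch of what that could look like in a test (field names follow the question; the values are arbitrary):

import org.springframework.test.util.ReflectionTestUtils;

CapDAO capDAO = new CapDAO();
ReflectionTestUtils.setField(capDAO, "expiryTime", 30);      // scenario 1; use a different value for scenario 2

CapService capService = new CapService();
ReflectionTestUtils.setField(capService, "capDAO", capDAO);

// now call the CapService method under test and assert on the returned object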
Long answer
If you don't want to use reflection, you need to separate your contexts and tests to get this to work:
// context1.xml
<context:property-placeholder location="classpath:test1.properties"/>
// context2.xml
<context:property-placeholder location="classpath:test2.properties"/>
Then you can define someProperty with some other value in the properties.
Personally, I would recommend reflection.

Why use constructor over setter injection in CDI?

I couldn't find any reasonable answer here on SO so I hope it's not a duplicate. So why should I prefer setter or constructor injection over simple
@Inject
MyBean bean;
I get the usage of constructor injection if you need to do something with the injected bean during your class initialization, like
@Inject
public MyBean(OtherBean bean) {
    doSomeInit(bean);
    //I don't need to use @PostConstruct now
}
but still, it's almost the same as a @PostConstruct method, and I don't get setter injection at all. Isn't it just a relic from Spring and other DI frameworks?
Constructor and property injection give you the option to initialize the object easily even in a non-CDI environment, e.g. a unit test.
In a non-CDI environment you can still simply use the object by just passing the constructor arg.
OtherBean b = ....;
new MyBean(b);
If you just use field injection you usually must use reflection to access the field, because fields are usually private.
If you use property injection, you can also write code in the setter, e.g. validation code, or clearing internal caches that hold values derived from the property the setter modifies. What you want to do depends on your implementation needs.
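For instance, a setter might look like this (a made-up example combining validation with clearing a derived cache):

public void setDataSource(DataSource dataSource) {
    if (dataSource == null) {
        throw new IllegalArgumentException("dataSource must not be null");
    }
    this.dataSource = dataSource;
    this.preparedStatementCache.clear();   // drop values derived from the previous dataSource
}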
Setter vs constructor injection
In object-oriented programming an object must be in a valid state after construction and every method invocation changes the state to another valid state.
For setter injection this means that you might require more complex state handling, because the object should be in a valid state after construction even if the setter has not been invoked yet. Thus the object must be in a valid state even if the property is not set, e.g. by using a default value or a null object.
If there is a dependency between the object's existence and the property, the property should be a constructor argument. This also makes the code cleaner, because a constructor parameter documents that the dependency is necessary.
So instead of writing a class like this
public class CustomerDaoImpl implements CustomerDao {

    private DataSource dataSource;

    public Customer findById(String id) {
        checkDataSource();
        Connection con = dataSource.getConnection();
        ...
        return customer;
    }

    private void checkDataSource() {
        if (this.dataSource == null) {
            throw new IllegalStateException("dataSource is not set");
        }
    }

    public void setDataSource(DataSource dataSource) {
        this.dataSource = dataSource;
    }
}
you should use constructor injection:
public class CustomerDaoImpl implements CustomerDao {

    private DataSource dataSource;

    public CustomerDaoImpl(DataSource dataSource) {
        if (dataSource == null) {
            throw new IllegalArgumentException("Parameter dataSource must not be null");
        }
        this.dataSource = dataSource;
    }

    public Customer findById(String id) {
        Customer customer = null;
        // We can be sure that the dataSource is not null
        Connection con = dataSource.getConnection();
        ...
        return customer;
    }
}
My conclusion
Use properties for every optional dependency.
Use constructor args for every mandatory dependency.
PS: My blog The difference between pojos and java beans explains my conclusion in more detail.
EDIT
Spring also suggests using constructor injection, as I found in the Spring documentation, section Setter-based Dependency Injection.
The Spring team generally advocates constructor injection, as it lets you implement application components as immutable objects and ensures that required dependencies are not null. Furthermore, constructor-injected components are always returned to the client (calling) code in a fully initialized state. As a side note, a large number of constructor arguments is a bad code smell, implying that the class likely has too many responsibilities and should be refactored to better address proper separation of concerns.
Setter injection should primarily only be used for optional dependencies that can be assigned reasonable default values within the class. Otherwise, not-null checks must be performed everywhere the code uses the dependency. One benefit of setter injection is that setter methods make objects of that class amenable to reconfiguration or re-injection later. Management through JMX MBeans is therefore a compelling use case for setter injection.
Constructor injection is also a better way when you think about unit tests, because it is easier to call the constructor instead of setting private (@Autowired) fields.
When using CDI, there is no reason whatsoever to use constructor or setter injection. As noted in the question, you add a @PostConstruct method for what would otherwise be done in a constructor.
Others may say that you need to use Reflection to inject fields in unit tests, but that is not the case; mocking libraries and other testing tools do that for you.
Finally, constructor injection allows fields to be final, but this isn't really a disadvantage of @Inject-annotated fields (which can't be final). The presence of the annotation, combined with the absence of any code explicitly setting the field, should make it clear it is to be set by the container (or testing tool) only. In practice, no one will be re-assigning an injected field.
Constructor and setter injection made sense in the past, when developers usually had to manually instantiate and inject dependencies into a tested object. Nowadays, technology has evolved and field injection is a much better option.
The accepted answer is great; however, it doesn't give credit to the main advantage of constructor injection: class immutability, which helps to achieve thread safety, state safety, and better readability of the classes.
Consider a class whose dependencies are all provided as constructor arguments; then you know that the object will never exist in a state where its dependencies are invalid. There is no need for setters for those dependencies (as long as the fields are private), so the object is either instantiated in a complete state or not instantiated at all.
An immutable object is much more likely to behave well in a multithreaded application. Although the class still needs to be made internally thread-safe, you don't have to worry about external clients coordinating access to the object.
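A small sketch of what this buys you (hypothetical names):

public class ReportService {

    // final: the dependency can never be re-assigned after construction,
    // so every caller observes the same, fully initialized reference
    private final ReportRepository repository;

    public ReportService(ReportRepository repository) {
        this.repository = java.util.Objects.requireNonNull(repository);
    }
}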
Of course this can be useful only in certain scenarios. Setter injection is great for partial dependencies, where for example we have 3 properties in a class, a 3-arg constructor, and setter methods. In such a case, if you want to pass information for only one property, it is possible only via a setter method. Very useful for testing purposes.

How to control object's lifecycle with Guice

I have Guice-injected objects which have two lifecycle methods, bind() and unbind(). The method bind() is called automatically after the object is instantiated by Guice, using the following annotated method:
@Inject
final void autoBind() {
    bind();
}
What I want to do is to call the method unbind() on the old (current) object before a new instance of the object is created by Guice. How do I do that?
Thanks in advance!
First of all, I would not advise that you just annotate arbitrary methods with @Inject. Keep it to constructors, and occasionally for optional injection and/or field injection.
What you are trying to do does sound a bit weird, and I'm not sure it's exactly what you want. Can you please provide more background on what you're trying to do because maybe a different approach is better. There are definitely some concerns here with thread safety and how you manage references.
Based on what you described, an approach like what @meverett mentioned would probably work. If the objects you have are Foos, it would look something like this.
// Later on, be sure to bind(Foo.class).toProvider(FooProvider.class);
final class FooProvider implements Provider<Foo> {

    private final Provider<Foo> unboundFooProvider;
    private Foo currentInstance;

    @Inject
    FooProvider(@Unbound Provider<Foo> unboundFooProvider) {
        this.unboundFooProvider = unboundFooProvider;
    }

    @Override
    public Foo get() {
        if (currentInstance != null) {
            currentInstance.unbind();
        }
        currentInstance = unboundFooProvider.get();
        currentInstance.bind();
        return currentInstance;
    }
}
NOTE that your @Unbound Foo provider would generate Foos without invoking any special methods. The regular FooProvider keeps track of state and decides when to bind() and unbind() the instances. Please be careful with how you manage multiple instances and use them with multiple threads.
Also, just to be clear: I'm using @Unbound since the methods you want to invoke are called bind() and unbind(). I'm not using "bound" in the Guice sense.
Also note... off the top of my head I'm pretty sure Providers are treated as singletons, so maintaining state like this will work. If it didn't, you could obviously just create a level of indirection with some kind of singleton factory (but that shouldn't be necessary).
Previous responses address other concerns nicely, so to just answer the question:
Netflix introduced Governator on GitHub in 2012 to "enhance Google Guice to provide ... lifecycle management". It provides annotations (@PreConfiguration, @PostConstruct, @PreDestroy, and others), classpath scanning and auto-binding, and other features. Bootstrapping is straightforward.
I suppose you could have a provider that keeps a reference to the current object. When you call get on the provider it would unbind the last object, construct the new one and save the reference to it.
Though I'm not really sure why you would want to do something like this, since other objects can, in theory, still be referencing it.
