The Spring integration with Feign supports using Spring MVC annotations for mapping a Feign interface:
@FeignClient("multiplier")
public interface MultiplierApi {

    @GetMapping("/multiply")
    Long multiply(@RequestParam("one") long one, @RequestParam("two") long two);
}
I could place the MultiplierApi interface in an API package, use it with @EnableFeignClients in client programs, and implement it in my controller:
@RestController
public class MultiplierController implements MultiplierApi {

    @Override
    public Long multiply(long one, long two) {
        return one * two;
    }
}
This seems to allow me to remove duplication that might otherwise occur between the controller and the client interface, reducing the likelihood that the mappings will get out of sync. Is there any disadvantage to sharing the API definition in this way?
I have recently noticed that Spring successfully intercepts intra-class method calls in a @Configuration class but not in a regular bean.
A call like this
@Repository
public class CustomerDAO {

    @Transactional(value = TxType.REQUIRED)
    public void saveCustomer() {
        // some DB stuff here...
        saveCustomer2();
    }

    @Transactional(value = TxType.REQUIRES_NEW)
    public void saveCustomer2() {
        // more DB stuff here
    }
}
fails to start a new transaction, because while the code of saveCustomer() executes in the CustomerDAO proxy, the code of saveCustomer2() executes on the unwrapped CustomerDAO instance (as I can see by looking at 'this' in the debugger), and so Spring has no chance to intercept the call to saveCustomer2().
However, in the following example, when transactionManager() calls createDataSource() it is correctly intercepted and calls createDataSource() of the proxy, not of the unwrapped class, as evidenced by looking at 'this' in the debugger.
@Configuration
public class PersistenceJPAConfig {

    @Bean
    public DriverManagerDataSource createDataSource() {
        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        // dataSource.set ... DB stuff here
        return dataSource;
    }

    @Bean
    public PlatformTransactionManager transactionManager() {
        DataSourceTransactionManager transactionManager = new DataSourceTransactionManager(createDataSource());
        return transactionManager;
    }
}
So my question is: why can Spring correctly intercept the intra-class method calls in the second example but not in the first? Is it using different types of dynamic proxies?
Edit:
From the answers here and other sources I now understand the following:
@Transactional is implemented using Spring AOP, where the proxy pattern is carried out by wrapping/composing the user class. The AOP proxy is generic enough that many aspects can be chained together, and it may be a CGLIB proxy or a JDK dynamic proxy.
In the @Configuration class, Spring also uses CGLIB to create an enhanced class which inherits from the user's @Configuration class and overrides the user's @Bean methods with ones that do some extra work before calling the user's/super method, such as checking whether this is the first invocation of the method. Is this class a proxy? It depends on the definition. You may say that it is a proxy which uses inheritance from the real object instead of wrapping it using composition.
To sum up, from the answers given here I understand these are two entirely different mechanisms. Why these design choices were made is another, open question.
Is it using different types of dynamic proxies?
Almost exactly
Let's figure out the difference between @Configuration classes and AOP proxies by answering the following questions:
Why does a self-invoked @Transactional method have no transactional semantics even though Spring is capable of intercepting self-invoked methods?
How are @Configuration and AOP related?
Why does a self-invoked @Transactional method have no transactional semantics?
Short answer:
This is how Spring AOP is built.
Long answer:
Declarative transaction management relies on AOP (for the majority of Spring applications, on Spring AOP)
The Spring Framework’s declarative transaction management is made possible with Spring aspect-oriented programming (AOP)
It is proxy-based (§5.8.1. Understanding AOP Proxies)
Spring AOP is proxy-based.
From the same paragraph SimplePojo.java:
public class SimplePojo implements Pojo {

    public void foo() {
        // this next method invocation is a direct call on the 'this' reference
        this.bar();
    }

    public void bar() {
        // some logic...
    }
}
And a snippet proxying it:
public class Main {

    public static void main(String[] args) {
        ProxyFactory factory = new ProxyFactory(new SimplePojo());
        factory.addInterface(Pojo.class);
        factory.addAdvice(new RetryAdvice());

        Pojo pojo = (Pojo) factory.getProxy();
        // this is a method call on the proxy!
        pojo.foo();
    }
}
The key thing to understand here is that the client code inside the main(..) method of the Main class has a reference to the proxy.
This means that method calls on that object reference are calls on the proxy.
As a result, the proxy can delegate to all of the interceptors (advice) that are relevant to that particular method call.
However, once the call has finally reached the target object (the SimplePojo reference in this case), any method calls that it may make on itself, such as this.bar() or this.foo(), are going to be invoked against the this reference, and not the proxy.
This has important implications. It means that self-invocation is not going to result in the advice associated with a method invocation getting a chance to execute.
(Key parts are emphasized.)
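As an aside, a common way around this limitation (a minimal sketch, not from the quoted documentation; it assumes the proxy is exposed) is to route the self-call through AopContext.currentProxy() so it passes through the advice chain again:

import org.springframework.aop.framework.AopContext;

public class SimplePojo implements Pojo {

    public void foo() {
        // Ask Spring for the proxy currently handling this invocation and
        // call bar() on it, so the interceptors run again; this requires the
        // proxy to be exposed, e.g. @EnableAspectJAutoProxy(exposeProxy = true).
        ((Pojo) AopContext.currentProxy()).bar();
    }

    public void bar() {
        // some logic...
    }
}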
You may think that AOP works as follows:
Imagine we have a Foo class which we want to proxy:
Foo.java:
public class Foo {

    public int getInt() {
        return 42;
    }
}
There is nothing special here: just a getInt method returning 42.
An interceptor:
Interceptor.java:
public interface Interceptor {

    Object invoke(InterceptingFoo interceptingFoo);
}
LogInterceptor.java (for demonstration):
public class LogInterceptor implements Interceptor {

    @Override
    public Object invoke(InterceptingFoo interceptingFoo) {
        System.out.println("log. before");
        try {
            return interceptingFoo.getInt();
        } finally {
            System.out.println("log. after");
        }
    }
}
InvokeTargetInterceptor.java:
public class InvokeTargetInterceptor implements Interceptor {

    @Override
    public Object invoke(InterceptingFoo interceptingFoo) {
        try {
            System.out.println("Invoking target");
            Object targetRetVal = interceptingFoo.method.invoke(interceptingFoo.target);
            System.out.println("Target returned " + targetRetVal);
            return targetRetVal;
        } catch (Throwable t) {
            throw new RuntimeException(t);
        } finally {
            System.out.println("Invoked target");
        }
    }
}
Finally InterceptingFoo.java:
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public class InterceptingFoo extends Foo {

    public Foo target;
    public List<Interceptor> interceptors = new ArrayList<>();
    public int index = 0;
    public Method method;

    @Override
    public int getInt() {
        try {
            Interceptor interceptor = interceptors.get(index++);
            return (Integer) interceptor.invoke(this);
        } finally {
            index--;
        }
    }
}
Wiring everything together:
public static void main(String[] args) throws Throwable {
    Foo target = new Foo();

    InterceptingFoo interceptingFoo = new InterceptingFoo();
    interceptingFoo.method = Foo.class.getDeclaredMethod("getInt");
    interceptingFoo.target = target;
    interceptingFoo.interceptors.add(new LogInterceptor());
    interceptingFoo.interceptors.add(new InvokeTargetInterceptor());

    interceptingFoo.getInt();
    interceptingFoo.getInt();
}
Will print:
log. before
Invoking target
Target returned 42
Invoked target
log. after
log. before
Invoking target
Target returned 42
Invoked target
log. after
Now let's take a look at ReflectiveMethodInvocation.
Here is a part of its proceed method:
Object interceptorOrInterceptionAdvice = this.interceptorsAndDynamicMethodMatchers.get(++this.currentInterceptorIndex);
++this.currentInterceptorIndex should look familiar now
Inside it you will find the same ingredients: the target, the interceptors, the method, and the index.
You may try introducing several aspects into your application and watching the stack grow at the proceed method when an advised method is invoked.
Finally everything ends up at MethodProxy.
From its invoke method javadoc:
Invoke the original method, on a different object of the same type.
And as the previously mentioned documentation says:
once the call has finally reached the target object any method calls that it may make on itself are going to be invoked against the this reference, and not the proxy
I hope it is now more or less clear why.
How are @Configuration and AOP related?
The answer is they are not related.
So Spring is free to do whatever it wants here; it is not tied to the AOP proxy semantics.
It enhances such classes using ConfigurationClassEnhancer.
Take a look at:
CALLBACKS
BeanMethodInterceptor
BeanFactoryAwareMethodInterceptor
Returning to the question
If Spring can successfully intercept intra-class method calls in a @Configuration class, why does it not support this for a regular bean?
I hope that, from a technical point of view, it is clear why.
Now my thoughts from the non-technical side:
I think it is not done because Spring AOP has been around long enough...
Since Spring Framework 5 the Spring WebFlux framework has been introduced.
Currently the Spring team is working hard on enhancing the reactive programming model.
See some notable recent blog posts:
Reactive Transactions with Spring
Spring Data R2DBC 1.0 M2 and Spring Boot starter released
Going Reactive with Spring, Coroutines and Kotlin Flow
More and more features based on a less-proxying approach to building Spring applications are being introduced (see this commit for example).
So I think that even though it might be possible to do what you've described, it is far from the Spring team's #1 priority for now.
Because AOP proxies and @Configuration classes serve different purposes and are implemented in significantly different ways (even though both involve using proxies).
Basically, AOP uses composition while @Configuration uses inheritance.
AOP proxies
The way these work is basically that they create proxies that do the relevant advice logic before/after delegating the call to the original (proxied) object. The container registers this proxy instead of the proxied object itself, so all dependencies are set to this proxy and all calls from one bean to another go through this proxy. However, the proxied object itself has no pointer to the proxy (it doesn't know it's proxied, only the proxy has a pointer to the target object). So any calls within that object to other methods don't go through the proxy.
(I'm only adding this here for contrast with @Configuration, since you seem to have a correct understanding of this part.)
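To make the contrast concrete, here is a minimal hand-rolled composition proxy (all names are hypothetical, not Spring classes) showing why a self-call on the target never reaches the advice:

// A tiny composition-based proxy. The proxy holds a pointer to the target,
// but the target knows nothing about the proxy.
interface Greeter {
    void outer();
    void inner();
}

class GreeterImpl implements Greeter {
    public void outer() {
        inner(); // direct call on 'this' -- bypasses any proxy
    }
    public void inner() {
        System.out.println("inner work");
    }
}

class LoggingGreeterProxy implements Greeter {
    private final Greeter target;

    LoggingGreeterProxy(Greeter target) {
        this.target = target;
    }

    public void outer() {
        System.out.println("advice before outer");
        target.outer(); // the nested inner() call happens on the raw target
    }
    public void inner() {
        System.out.println("advice before inner");
        target.inner();
    }
}

Calling new LoggingGreeterProxy(new GreeterImpl()).outer() prints the advice once, for outer() only: the nested inner() call never crosses back into the proxy.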
@Configuration
Now while the objects that you usually apply the AOP proxy to are a standard part of your application, the @Configuration class is different: for one, you probably never intend to create any instances of that class directly yourself. This class truly is just a way to write configuration of the bean container; it has no meaning outside Spring, and you know that it will be used by Spring in a special way and that it has special semantics beyond plain Java code, e.g. that @Bean-annotated methods actually define Spring beans.
Because of this, Spring can do much more radical things to this class without worrying that it will break something in your code (remember, you know that you only provide this class for Spring, and you aren't ever going to create or use an instance of it directly).
What it actually does is create a proxy that's a subclass of the @Configuration class. This way, it can intercept the invocation of every (non-final, non-private) method of the @Configuration class, even within the same object (because the methods are effectively all overridden by the proxy, and in Java all methods are virtual). The proxy does exactly this to redirect any method calls that it recognizes as (semantically) references to Spring beans to the actual bean instances instead of invoking the superclass method.
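Conceptually, the enhanced class behaves like the following rough, hand-written sketch (this is not the real CGLIB-generated code; the real subclass delegates to the BeanFactory rather than keeping a local map):

import java.util.HashMap;
import java.util.Map;

// Sketch of what the CGLIB-enhanced @Configuration subclass effectively does.
public class PersistenceJPAConfigEnhanced extends PersistenceJPAConfig {

    private final Map<String, Object> beanCache = new HashMap<>();

    @Override
    public DriverManagerDataSource createDataSource() {
        // Intercept the call: reuse the existing singleton if present,
        // otherwise invoke the user's @Bean method exactly once.
        return (DriverManagerDataSource) beanCache.computeIfAbsent(
                "createDataSource", name -> super.createDataSource());
    }
}

This is why transactionManager() in the question gets the singleton DataSource: its call to createDataSource() dispatches to the overriding method on the enhanced subclass, not straight to the user's code.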
Having read a bit of the Spring source code, I'll try to answer.
The point is how Spring deals with @Configuration and @Bean.
In ConfigurationClassPostProcessor, which is a BeanFactoryPostProcessor, Spring enhances all configuration classes, creating an enhanced subclass for each.
This enhancer registers two callbacks (BeanMethodInterceptor and BeanFactoryAwareMethodInterceptor).
A call to a PersistenceJPAConfig method therefore goes through these callbacks; in BeanMethodInterceptor, the bean is fetched from the Spring container.
If this is not clear, you can look at the source of BeanMethodInterceptor in ConfigurationClassEnhancer.java and of enhanceConfigurationClasses in ConfigurationClassPostProcessor.java.
You can't call a @Transactional method from within the same class.
It's a limitation of Spring AOP (dynamic proxies and CGLIB).
If you configure Spring to use AspectJ to handle the transactions, your code will work.
The simple and probably best alternative is to refactor your code: for example, one class that handles users and one that processes each user, as sketched below. Then default transaction handling with Spring AOP will work.
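A minimal sketch of that refactoring (the class names and the @Service/@Repository split are hypothetical): once the two transactional methods live in different beans, each call crosses a proxy boundary and the REQUIRES_NEW semantics apply:

import javax.transaction.Transactional;
import javax.transaction.Transactional.TxType;

import org.springframework.stereotype.Repository;
import org.springframework.stereotype.Service;

// CustomerService.java
@Service
public class CustomerService {

    private final CustomerPersister customerPersister;

    public CustomerService(CustomerPersister customerPersister) {
        this.customerPersister = customerPersister;
    }

    @Transactional(value = TxType.REQUIRED)
    public void saveCustomer() {
        // some DB stuff here...
        customerPersister.saveCustomer2(); // goes through the proxy -> new tx
    }
}

// CustomerPersister.java
@Repository
public class CustomerPersister {

    @Transactional(value = TxType.REQUIRES_NEW)
    public void saveCustomer2() {
        // more DB stuff here
    }
}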
Also, @Transactional should be on the service layer and not on a @Repository.
Transactions belong on the service layer. It's the one that knows about units of work and use cases. It's the right answer if you have several DAOs injected into a service that need to work together in a single transaction.
So you need to rethink your transaction approach, so that your methods can be reused in a flow spanning several other DAO operations that can be rolled back together.
Spring uses proxying for method invocation, and when you use this... it bypasses that proxy. For @Bean annotations Spring uses reflection to find them.
I have been using dependency injection with @Autowired in Spring Boot. From all the articles that I have read about dependency injection, they mention that dependency injection is very useful when we decide to change the implementing class in the future.
For example, let us deal with a Car class and a Wheel interface. The Car class requires an implementation of the Wheel interface for it to work. So, we go ahead and use dependency injection in this scenario
// Wheel interface
public interface Wheel {
    int wheelCount();
    void wheelName();
    // ...
}

// Wheel interface implementation
public class MRF implements Wheel {
    @Override
    public int wheelCount() {
        // ...
    }
    // ...
}

// Car class
public class Car {
    @Autowired
    Wheel wheel;
}
Now in the above scenario, the ApplicationContext will figure out that there is an implementation of the Wheel interface and bind it to the Car class. In the future, if we change the implementation to, say, an XYZWheel implementing class and remove the MRF implementation, then the same will work.
However, if we decide to keep both implementations of the Wheel interface in our application, then we will need to specifically mention the dependency we are interested in while autowiring. So the changes would be as follows:
// Wheel interface
public interface Wheel {
    int wheelCount();
    void wheelName();
    // ...
}

// Wheel interface implementation
@Qualifier("MRF")
public class MRF implements Wheel {
    @Override
    public int wheelCount() {
        // ...
    }
    // ...
}

// Wheel interface implementation
@Qualifier("XYZWheel")
public class XYZWheel implements Wheel {
    @Override
    public int wheelCount() {
        // ...
    }
    // ...
}

// Car class
public class Car {
    @Autowired
    @Qualifier("XYZWheel")
    Wheel wheel;
}
So now I have to manually define the specific implementation that I want to autowire. How does dependency injection help here? I could very well use the new operator to instantiate the implementing class I need instead of relying on Spring to autowire it for me.
So my question is: what is the benefit of autowiring/dependency injection when I have multiple implementing classes and thus need to manually specify the type I am interested in?
You don't necessarily have to hard-wire an implementation if you set up your beans selectively with @Primary and @Conditional.
A real-world example for this applies to the implementation of authentication. For our application, we have a real auth service that integrates with another system, and a mocked one for when we want to do local testing without depending on that system.
This is the base user details service for auth. We do not specify any qualifiers for it, even though there are potentially two @Service targets for it, Mock and Real.
@Autowired
BaseUserDetailsService userDetailsService;
This base service is abstract and has all the method implementations that are shared between mock and real auth, plus two methods related specifically to mocking that throw exceptions by default, so our real auth service can't accidentally be used to mock.
public abstract class BaseUserDetailsService implements UserDetailsService {

    public void mockUser(AuthorizedUserPrincipal authorizedUserPrincipal) {
        throw new AuthException("Default service cannot mock users!");
    }

    public UserDetails getMockedUser() {
        throw new AuthException("Default service cannot fetch mock users!");
    }

    // ... other methods related to user details
}
From there, we have the real auth service extending this base class, and being @Primary.
@Service
@Primary
@ConditionalOnProperty(
        value = "app.mockAuthenticationEnabled",
        havingValue = "false",
        matchIfMissing = true)
public class RealUserDetailsService extends BaseUserDetailsService {
}
This class may seem sparse, because it is. The base service it extends was originally the only authentication service, and we extended it to support mock auth, with an extended class becoming the "real" auth. Real auth is the primary and is always enabled unless mock auth is enabled.
We also have the mocked auth service, which has a few overrides to actually mock, and a warning:
@Slf4j
@Service
@ConditionalOnProperty(value = "app.mockAuthenticationEnabled")
public class MockUserDetailsService extends BaseUserDetailsService {

    private User mockedUser;

    @PostConstruct
    public void sendMessage() {
        log.warn("!!! Mock user authentication is enabled !!!");
    }

    @Override
    public void mockUser(AuthorizedUserPrincipal authorizedUserPrincipal) {
        log.warn("Mocked user is being created: " + authorizedUserPrincipal.toString());
        mockedUser = authorizedUserPrincipal;
    }

    @Override
    public UserDetails getMockedUser() {
        log.warn("Mocked user is being fetched from the system!");
        return mockedUser;
    }
}
We use these methods in an endpoint dedicated to mocking, which is also conditional:
@RestController
@RequestMapping("/api/mockUser")
@ConditionalOnProperty(value = "app.mockAuthenticationEnabled")
public class MockAuthController {
    // ...
}
In our application settings, we can toggle mock auth with a simple property.
app:
  mockAuthenticationEnabled: true
With the conditional properties, we should never have more than one auth service ready, but even if we do, we don't have any conflicts:
Something went horribly wrong: no Real, no Mock - Application fails to start, no bean.
mockAuthEnabled = true: no Real, Mock - Application uses Mock.
mockAuthEnabled = false: Real, no Mock - Application uses Real.
Something went horribly wrong: Real AND Mock both - Application uses Real bean.
The best way (I think) to understand dependency injection (DI) is like this:
DI is a mechanism that allows you to dynamically replace your @Autowired interface with an implementation at run time. It is the role of your DI framework (Spring, Guice, etc.) to perform this action.
In your Car example, you declare your Wheel as an interface, but during execution Spring creates an instance of your implementation, such as MRF or XYZWheel.
To answer your question:
I think it depends on the logic you want to implement. It is not the role of your DI framework to choose which kind of Wheel you want for your Car. Somehow you will have to define which implementations you want to inject as dependencies.
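One concrete benefit that survives multiple implementations is testability: with constructor injection, any Wheel (including a hand-written stub) can be supplied without touching Car's code or starting a container. A minimal sketch (hypothetical names):

// Car receives its Wheel instead of constructing one with 'new'.
public class Car {

    private final Wheel wheel;

    public Car(Wheel wheel) {
        this.wheel = wheel;
    }

    public int wheels() {
        return wheel.wheelCount();
    }
}

// In a plain unit test, no Spring needed:
public class CarDemo {
    public static void main(String[] args) {
        Car testCar = new Car(new Wheel() {
            public int wheelCount() { return 4; }
            public void wheelName() { }
        });
        System.out.println(testCar.wheels()); // prints 4
    }
}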
Any other answers will be useful, because DI is sometimes a source of confusion. Thanks in advance.
I'm trying to figure out the options that I have for the architecture of my API project.
I would like to create an API using JAX-RS version 1.0. This API consumes Remote EJBs (EJB 3.0) from a bigger, old and complex application. I'm using Java 6.
So far, I can do this and it works. But I'm not satisfied with the solution. See my package disposition below; my concerns are described after the code:
/api/
/com.organization.api.v1.rs -> REST services with the JAX-RS annotations
/com.organization.api.v1.services -> Service classes used by the REST services. Basically, they only have the logic to transform the DTO objects from the remote EJBs into JSON. They are separated by API version, because the JSON can be different in each version.
/com.organization.api.v1.vo -> View objects returned by the REST services. They will be transformed into JSON using Gson.
/com.organization.api.services -> Service classes used by the versioned services. Here we have the lookup for remote EJBs and some API logic, like validations. These services can be used by any version of each service.
Example of the com.organization.api.v1.rs.UserV1RS:
@Path("/v1/user/")
public class UserV1RS {

    @GET
    public UserV1VO getUsername() {
        UserV1VO userVO = ServiceLocator.get(UserV1Service.class).getUsername();
        return userVO;
    }
}
Example of the com.organization.api.v1.services.UserV1Service:
public class UserV1Service extends UserService {

    public UserV1VO getUsername() {
        UserDTO userDTO = getUserName(); // method from UserService
        return new UserV1VO(userDTO.getName());
    }
}
Example of the com.organization.api.services.UserService:
public class UserService {

    public UserDTO getUserName() {
        UserDTO userDTO = RemoteEJBLocator.lookup(UserRemote.JNDI_REMOTE_NAME).getUser();
        return userDTO;
    }
}
Some requirements of my project:
The API has versions: v1, v2, etc.
Different API versions of the same versioned service can share code: UserV1Service and UserV2Service using UserService.
Different API versions of different versioned services can share code: UserV1Service and OrderV2Service using AnotherService.
Each version has its own view object (UserV1VO, not UserVO).
What bothers me about the code above:
This ServiceLocator class is not a good approach for me. It uses legacy code from an old library, and I have a lot of questions about how it works. The way the ServiceLocator is used is very strange to me too, and this strategy is not good for mocking the services in my unit tests. I would like to create a new ServiceLocator or use some dependency injection strategy (or another, better approach).
The UserService class is not intended to be used by another "external" service, like OrderService. It's only for the UserVxServices. But in the future, maybe OrderService would like to use some code from UserService...
Even if I ignore the last problem, using the ServiceLocator I will need to do a lot of lookups throughout my code. The chance of creating a cyclic dependency (serviceOne looks up serviceTwo, which looks up serviceThree, which looks up serviceOne) is very high.
In this approach, the VOs, like UserV1VO, could be used in my unversioned services (com.organization.api.services), but this must not happen. A good architecture doesn't allow what shouldn't be allowed. I have the idea of creating a new project, like api-services, and putting com.organization.api.services there to avoid this. Is this a good solution?
So... ideas?
A couple of things that I see:
The UserService should ideally be based on an interface. The implementations seem to share a contract, but the only difference is their sources (RemoteEJB, LocalServiceLocator). These should be returning DTOs.
UserV1Service extends UserService should not use inheritance but should instead favour composition. Think about what you'd need to do for v2 of the same service. Based on your example, you'd get UserV2Service extends UserService. This is not ideal, especially if you end up with abstract methods in your base class that are specific to one version; then all of a sudden other versioned services need to cater for them.
For the ServiceLocator:
You're better off using a dependency injection framework like Spring, or perhaps CDI in your case. This would only apply to your own code if your project is new.
For the parts that are hard to unit test, you'd wrap the RemoteEJB calls in their own interface, which makes them easier to mock out. The tests for the remote EJBs would then be integration tests for this project.
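A minimal sketch of that wrapping (UserGateway and its method names are hypothetical): hide the JNDI lookup behind a small interface so unit tests can substitute a fake:

// UserGateway.java -- isolates the remote EJB lookup behind an interface.
public interface UserGateway {
    UserDTO fetchUser();
}

// RemoteEjbUserGateway.java -- production implementation doing the real JNDI lookup.
public class RemoteEjbUserGateway implements UserGateway {
    @Override
    public UserDTO fetchUser() {
        return RemoteEJBLocator.lookup(UserRemote.JNDI_REMOTE_NAME).getUser();
    }
}

// UserService.java -- depends on the interface, so tests can pass a stub.
public class UserService {

    private final UserGateway userGateway;

    public UserService(UserGateway userGateway) {
        this.userGateway = userGateway;
    }

    public UserDTO getUserName() {
        return userGateway.fetchUser();
    }
}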
The UserService class is not intended to be used by another "external" service, like OrderService. It's only for the UserVxService. But in the future, maybe OrderService would like to use some code from UserService
There is nothing wrong with services on the same layer talking to each other.
In this approach, the VOs, like UserV1VO, could be used in my unversioned services (com.organization.api.services), but this cannot happen. A good architecture doesn't allow something that is not allowed. I have the idea to create a new project, like api-services, and put com.organization.api.services there to avoid this. Is this a good solution?
Just because you "could" do something doesn't mean that you should. While it might seem like a good idea to separate the layer into its own project, in reality nothing stops a developer from either recreating the same class in that project or including the jar on the classpath and using the same class. I'm not saying that splitting it is wrong, just that it should be split for the right reasons instead of "what if" scenarios.
I ended up with this solution (thanks @Shiraaz.M):
I removed all extends from my services and deleted the dangerous ServiceLocator class. Inheritance without a good purpose and service locators are both bad ideas. I tried to use Guice to inject the dependencies into my REST resources, but that's not so easy in my JAX-RS version. My services are very simple and easy to create, though, so my solution was simple:
@Path("/v1/user/")
public class UserV1RS {

    private UserV1Service userV1Service;

    public UserV1RS() {
        this.userV1Service = new UserV1Service();
    }

    // only for tests!
    public UserV1RS(UserV1Service userV1Service) {
        this.userV1Service = userV1Service;
    }

    @GET
    public UserV1VO getUsername() {
        UserV1VO userVO = this.userV1Service.getUsername();
        return userVO;
    }
}
And my UserV1Service:
public class UserV1Service {

    private UserService userService;

    public UserV1Service() {
        this.userService = new UserService();
    }

    // for tests
    public UserV1Service(UserService userService) {
        this.userService = userService;
    }

    public UserV1VO getUsername() {
        UserDTO userDTO = userService.getUserName();
        return new UserV1VO(userDTO.getName());
    }
}
With this strategy, it is easy to use other services with composition.
If necessary, in the future I will introduce Guice to inject the dependencies into the REST resources and services (at least into the services), remove the default constructor from the services that have dependencies, and use the same constructor in tests and in production.
About item 4, I talked with the team and explained the organization. The team understands it well, and no one breaks this architecture.
I have developed a generic business service using the Spring framework. This service must be accessed from different channels, i.e. web and mobile. Each channel has its own business rules that must be added dynamically to the generic service's functionality. For example, the web channel does some additional validation and then calls the generic service; the mobile channel calls service A, then service B, then the generic service.
My question is: what's the best design pattern/way to implement such service mediation without using an ESB?
I think you are looking for the decorator pattern, where you can attach additional responsibility at run time. What you can do is:
public class GenericValidationService extends ValidationService {

    public void validate() {
        // do stuff here
    }
}

public class WebChannelService extends ValidationService {

    private final ValidationService validationService;

    public WebChannelService(ValidationService validationService) {
        this.validationService = validationService;
    }

    public void validate() {
        validationService.validate();
        // extra validation
    }
}
Similarly:
public class ServiceB extends ValidationService {

    private final ValidationService validationService;

    public ServiceB(ValidationService validationService) {
        this.validationService = validationService;
    }

    public void validate() {
        validationService.validate();
        // extra validation
    }
}
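Wiring it up then happens at construction time; a small usage sketch (assuming ValidationService declares a validate() method, as the decorators above imply):

public class ValidationDemo {
    public static void main(String[] args) {
        // Web channel: generic validation plus web-specific checks.
        ValidationService webValidation =
                new WebChannelService(new GenericValidationService());
        webValidation.validate();

        // Mobile channel: stack further decorators on top as needed.
        ValidationService mobileValidation =
                new ServiceB(new WebChannelService(new GenericValidationService()));
        mobileValidation.validate();
    }
}

If ValidationService is currently a concrete class, consider making it an interface so the decorators only inherit the contract, not behaviour.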
See:
Decorator Pattern for IO
Decorate Your Java Code
Let's say I have a HigherLevelBean that depends on LittleService. The LittleService is an interface with two implementations.
There is no static or semi-static way to tell which of the two implementations should be used; it's all dynamic (session-scoped, in fact). When a request from this user comes in, use LegacyLittleService, and for requests from that other user use NewShinyLittleService.
The services are not going to be that small. They will have their own dependencies that will need to be injected and they will probably come from two different application contexts. Think about using one app over two different schemas/data models.
How can I achieve this kind of runtime dynamism? Ideally with annotation-driven configuration. What are my options, and their pros and cons?
You could simply have a factory, where both services are injected:
@Component
public class LittleServiceFactory {

    @Autowired
    private LegacyLittleService legacy;

    @Autowired
    private NewShinyLittleService newShiny;

    @Autowired
    private TheSessionScopedBean theSessionScopedBean;

    public LittleService get() {
        if (theSessionScopedBean.shouldUseLegacy()) {
            return legacy;
        } else {
            return newShiny;
        }
    }
}
Now inject this factory anywhere you want, and call get() to access the appropriate LittleService instance.
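Usage would then look something like this (HigherLevelBean's body and doLittleThing() are hypothetical placeholders):

@Component
public class HigherLevelBean {

    @Autowired
    private LittleServiceFactory littleServiceFactory;

    public void doWork() {
        // Resolve the session-appropriate implementation at call time,
        // not at injection time.
        LittleService littleService = littleServiceFactory.get();
        littleService.doLittleThing(); // hypothetical method on LittleService
    }
}

The trade-off: callers depend on the factory rather than on LittleService directly, but the choice between the legacy and the new implementation stays in exactly one place.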