Is it a bad idea to use @RequestMapping in an interface?

I checked out this SO post, which discusses using @RequestMapping in an interface. Although the post shows ways to achieve this, it does not mention the pros and cons of doing so.
Architecture-wise, is it a bad idea to define a controller as an interface?
What benefit would we gain in terms of polymorphism for the controller?

There is nothing wrong with putting @RequestMapping on the interface. However, make sure you have the right reasons to do it. Polymorphism is probably not a good reason; you will not have a different concrete implementation swapped in at runtime or anything like that.
On the other hand, for example, Swagger codegen generates interfaces with @RequestMapping and all the annotations on the methods, fields and return types (together with @Api definitions etc.). Your controller then implements this interface. In this case it makes a lot of sense, because it simply forces you to respect the Swagger / OpenAPI interface definition originally defined in YAML. There is a nice side effect: it makes your controller much cleaner. (Clients can also use the same YAML to generate their own client stubs for their own language frameworks.)
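For illustration, a minimal sketch of that layout (the names UserApi and UserController are hypothetical, not taken from any generated code):

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

// The contract, in the style Swagger codegen produces: all mapping
// metadata lives on the interface.
@RequestMapping("/api/users")
interface UserApi {

    @GetMapping("/{id}")
    ResponseEntity<String> getUser(@PathVariable("id") long id);
}

// The controller only supplies behaviour; the mappings are inherited.
@RestController
class UserController implements UserApi {

    @Override
    public ResponseEntity<String> getUser(long id) {
        return ResponseEntity.ok("user-" + id);
    }
}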
If you opt to do this, make sure you use the latest version of the Spring Framework, because there were some bugs which were fixed only very recently, where not all annotations were being inherited.
https://github.com/spring-projects/spring-framework/issues/15682
If you are stuck with an older Spring version, you might need to repeat the same annotations in your controller.
So, the real reason this would make sense is to enforce the interface contract, and separate the interface definition (together with any information pertaining to the interface) from the actual concrete implementation.

While some arguments against this are that
the request mapping is an implementation detail, or
since you only have one active controller implementation, you might as well put it on the implementation
(others will probably be provided in different answers soon), I was recently faced with the same decision of putting JAX-RS annotations on the interface or the implementation. So, since everything always "depends" on some context, I want to give you an argument for putting @RequestMapping (or e.g. @Path etc. if not using Spring) on the interface:
If you are not using HATEOAS and are not discovering the endpoints via some other means, the endpoint URL, HTTP method, etc. are usually fixed, static parts of your backend API. Therefore, you might as well put them on an interface. This was the case for me, because I control both the client and the server side.
The controller usually has only one active implementation, so the reason for doing this is not polymorphism. But your implementation usually has many more dependencies than the plain interface. So if you export/provide only your interface to clients (e.g. in a separate jar/Java project/...), you only provide the things that clients really require. In my specific case, I delivered the annotated interface so that a client implementation could scan it using a REST client library and detect the endpoint paths automatically.
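As an illustration of that split (names hypothetical), the exported artifact contains only the annotated contract, while the implementation with all its dependencies stays on the server side:

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;

// Shipped to clients in a separate "api" jar: the contract and its
// annotations only, with no service or DAO dependencies.
@RequestMapping("/orders")
public interface OrderApi {

    // A client-side REST library can read these annotations and derive
    // the endpoint URL and HTTP method automatically.
    @GetMapping("/{id}")
    String findOrder(@PathVariable("id") long id);
}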

Related

Use of Interfaces on a service layer

In our project architecture we are using a classic MVC pattern, including a classic service layer (opening the transaction and calling the DAO layer).
For each service we have an implementation and its interface. But to be honest, I'm pretty sure that for any one service and its interface, we will never have more than one implementation. OK, maybe it's clearer to have the public methods declared in an interface that documents what the service does, but an interface exists to allow multiple implementations; if we know we won't have more than one implementation, should we keep them?
From the documentation:
Implementing an interface allows a class to become more formal about
the behavior it promises to provide. Interfaces form a contract
between the class and the outside world, and this contract is enforced
at build time by the compiler.
If you know that you will only ever have one implementation, the implementation itself defines the contract, so you can remove the interfaces.
But writing an interface can help you define the contract more clearly, and you may at some point need to write a mock for the service; in that case you would benefit from the use of interfaces.
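A sketch of that testing benefit (all names hypothetical): a throwaway in-memory implementation can stand in for the real, transactional one.

import java.util.HashMap;
import java.util.Map;

interface AccountService {
    String findOwner(long accountId);
}

// Test double: satisfies the same contract without opening a
// transaction or touching the DAO layer.
class InMemoryAccountService implements AccountService {
    private final Map<Long, String> owners = new HashMap<>();

    InMemoryAccountService() {
        owners.put(1L, "alice");
    }

    @Override
    public String findOwner(long accountId) {
        return owners.get(accountId);
    }
}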
I think keeping the interfaces is a good approach.
Reasons:
1. Say you want to write JUnit tests for the same service with a different implementation, e.g. instead of getting data from the database you want to get data from a separate data source; a different implementation of the interface will suffice.

Is a Java class with a generic parameter still a POJO?

Example:
class MyClass<S> {
}
Is the above class a POJO?
EDIT: The question has been put on hold, so let me explain further. Firstly, the question is very clear and precise. Secondly, I think it is important, since numerous docs say things like (to quote the Google docs at https://developers.google.com/eclipse/docs/endpoints-addentities):
In the Endpoint methods, the return value type cannot be simple type such as String or int. The return value needs to be a POJO, an array or a Collection.
In such a case I would want to know exactly what classes I can use without having to go through a tedious trial-and-error process.
The term POJO (Plain Old Java Object) became popular around the time of the early versions of J2EE (now called Java EE) and Enterprise JavaBeans (EJB).
EJB sought to extend the JavaBeans philosophy of reusable, component-driven architectures by providing enterprise service abstractions: things like database access, security and messaging.
Unfortunately, these early attempts required extending base classes that could only be used within the context of an application server. This had a lot of problems; for example, it made testing a very cumbersome and slow process.
As a counterpoint to this, POJOs emerged, aiming to provide enterprise services without having to extend base classes. Spring used Dependency Injection and Aspect-Oriented Programming for this, and quickly became popular because classes could now easily be unit and integration tested outside the heavyweight app server.
The idea behind a POJO is that your class should derive from the business domain rather than from an infrastructure domain. Therefore yes, there is no reason why a POJO can't use generics, as long as it honors this philosophy.
A POJO (Plain Old Java Object) is any Java class that doesn't extend pre-specified classes, doesn't implement pre-specified interfaces, and doesn't carry pre-specified annotations.
This means your example is a POJO.
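As a sketch, a generic class with ordinary fields and accessors still meets that definition; nothing about the type parameter forces a pre-specified superclass, interface or annotation:

// A plain generic container: no required base class, no required
// interface, no mandated annotations, so it is still a POJO.
public class Pair<S, T> {
    private S first;
    private T second;

    public S getFirst() { return first; }
    public void setFirst(S first) { this.first = first; }

    public T getSecond() { return second; }
    public void setSecond(T second) { this.second = second; }
}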

Service and DAO always implement interfaces

In all the MVC projects I've seen, "service" and "DAO" classes always implemented their own interfaces. But in almost every case, I haven't seen a situation in which having this interface was actually useful.
Is there any reason to use interfaces in these cases? What may be the consequence of not using interfaces in "service" and "DAO" classes? I can't imagine any consequences.
Spring is an Inversion of Control container. This, in one sense, means that the choice of implementation classes doesn't fall on the application code but on its configuration. If you have a class that needs a UserRepository to store User instances, it'd be something like:
class UserService {
    @Autowired
    private UserRepository userRepository;
}

interface UserRepository {
    List<User> getUsers();
    User findUserBySIN(String sin);
    List<User> getUsersInCountry(Long countryId);
}
And you would have a bean declared for it
<bean class="com.myapp.UserRepositoryHibernateImpl">
...
</bean>
Notice this bean is UserRepositoryHibernateImpl which would implement UserRepository.
At some point in the future, the Hibernate project stops being supported and you really need a feature that is only available in MyBatis, so you need to change implementations. Because your UserService class uses a UserRepository declared with the interface type, only the methods visible on the interface are visible to the class. So changing the actual polymorphic type of userRepository doesn't affect the rest of the client code. All you need to change (apart from creating the new class) is
<bean class="com.myapp.future.UserRepositoryMyBatisImpl">
...
</bean>
and your application still works.
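As a sketch (the class body here is hypothetical), the replacement only has to honour the same interface; UserService never notices the swap:

import java.util.Collections;
import java.util.List;

public class UserRepositoryMyBatisImpl implements UserRepository {

    @Override
    public List<User> getUsers() {
        return Collections.emptyList(); // would delegate to a MyBatis mapper
    }

    @Override
    public User findUserBySIN(String sin) {
        return null; // would delegate to a MyBatis mapper
    }

    @Override
    public List<User> getUsersInCountry(Long countryId) {
        return Collections.emptyList(); // would delegate to a MyBatis mapper
    }
}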
There are lots of arguments in favour of interfaces, see Google.
I can add to the points other people mentioned:
Imagine you change your DAO implementation from Hibernate to iBATIS. Depending on the interface rather than the implementation would be a great help in the service layer.
If you use AOP or proxies based on JDK dynamic proxies, then your classes must implement interfaces. This is not the case for CGLIB.
If you want to expose your service-layer methods for other clients to call, giving them an "interface as a contract" makes more sense than handing out implementations.
If you ever want to separate services.jar from daos.jar, having interfaces on your DAOs would save services.jar from recompilation whenever daos.jar changes.
In short, it is just good to have interfaces!
Interface-based implementations also help with mocking in the test suite. In our project, while testing the service layer, we mock the DAOs and provide hard-coded data instead of really connecting to the DB. The same argument applies to the service layer as well.
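A sketch of that setup with Mockito (the constructor-injected service is a hypothetical variant of the UserService above):

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.Collections;
import org.junit.Test;

public class UserServiceTest {

    @Test
    public void serviceUsesStubbedDaoInsteadOfTheDatabase() {
        // Mock the DAO interface: no real DB connection is opened.
        UserRepository dao = mock(UserRepository.class);
        when(dao.getUsers()).thenReturn(Collections.emptyList());

        UserService service = new UserService(dao); // hypothetical constructor
        // ... assertions against the service's behaviour go here
    }
}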
Using interfaces early on makes your application ready to scale; the consequence of not using them is sacrificing your application's scalability.
I've been asking myself the exact same question recently, feeling that creating an interface even if I know there's ever going to be a single implementing class is silly and adds to the bloat (every Java programmer who tried a more pragmatic language will know the feeling). That's yet another compilation module, often only created to satisfy one's inner dogmatist.
Spring seems to have evolved into a module/component-oriented framework where the programmer only creates the blocks and the framework assembles them together. This is why having more than one bean matching the criteria is a problem and complicates things (you end up using qualifiers, which somewhat defeats the purpose of DI). Programmers will naturally try to avoid type ambiguities to minimise the amount of required configuration, ideally making any given block fit into only one "slot".
In my opinion, DI's biggest advantage is not that it makes it easy to change implementations (by simply changing the type of the declared class in the config XML), but that it allows easier separation of dependencies, thus making it easier to test each component in isolation. You don't need one-child interfaces for that.
Since reverse-engineering a class to extract its interface would be a purely mechanical task, I wouldn't worry about "what if I need to add another implementation?" argument.
Disclaimer: opinion of a small-to-mid applications developer; I'm sure the situation changes with large projects.

How to handle internal calls on Spring/EJB/Mockito... proxies?

As many of you know, when you proxy an object (for example when you create a bean with transactional attributes for Spring/EJB, or when you create a partial mock with some frameworks), the proxied object doesn't know about its proxy, so internal calls are not redirected through it, and therefore not intercepted either...
That's why if you do something like this in Spring:
@Transactional
public void doSomething() {
    doSomethingInNewTransaction();
    doSomethingInNewTransaction();
    doSomethingInNewTransaction();
}

@Transactional(propagation = Propagation.REQUIRES_NEW)
public void doSomethingInNewTransaction() {
    ...
}
When you call doSomething, you expect to get 3 new transactions in addition to the main one, but due to this problem you actually only get one...
So I wonder how you handle this kind of problem...
I'm currently in a situation where I must handle a complex transactional system, and I don't see any better way than splitting my service into many small services, so that I'm sure to pass through all the proxies...
That bothers me a lot because all the code belongs to the same functional domain and should not be split...
I've found this related question with interesting answers:
Spring - #Transactional - What happens in background?
Rob H says that we can inject the Spring proxy inside the service and call proxy.doSomethingInNewTransaction(); instead.
It's quite easy to do and it works, but I don't really like it...
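A minimal sketch of that workaround, assuming the bean looks itself up from the ApplicationContext (which hands back the proxy, not the raw instance):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class SomeService {

    @Autowired
    private ApplicationContext context;

    @Transactional
    public void doSomething() {
        // Route the internal calls through the proxy so the
        // REQUIRES_NEW advice is actually applied.
        SomeService proxy = context.getBean(SomeService.class);
        proxy.doSomethingInNewTransaction();
        proxy.doSomethingInNewTransaction();
        proxy.doSomethingInNewTransaction();
    }

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void doSomethingInNewTransaction() {
        // transactional work here
    }
}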
Yunfeng Hou says this:
So I write my own version of CglibSubclassingInstantiationStrategy and
proxy creator so that it will use CGLIB to generate a real subclass
which delegates call to its super rather than another instance, which
Spring is doing now. So I can freely annotate on any methods(as long
as it is not private), and from wherever I call these methods, they
will be taken care of. Well, I still have price to pay: 1. I must list
all annotations that I want to enable the new CGLIB sub class
creation. 2. I can not annotate on a final method since I am now
generating subclass, so a final method can not be intercepted.
What does he mean by "which Spring is doing now"? Does this mean internal transactional calls are now intercepted?
What do you think is better?
Do you split your classes when you need some transactional granularity?
Or do you use some workaround like above? (please share it)
I'll talk about Spring and @Transactional, but the advice applies to many other frameworks as well.
This is an inherent problem with proxy-based aspects. It is discussed in the Spring documentation here:
http://static.springsource.org/spring/docs/3.0.x/spring-framework-reference/html/aop.html#aop-understanding-aop-proxies
There are a number of possible solutions.
Refactor your classes to avoid the self-invocation calls that bypass the proxy.
The Spring documentation describes this as "The best approach (the term best is used loosely here)".
The advantages of this approach are its simplicity and the absence of ties to any framework. However, it may not be appropriate for a very transaction-heavy code base, as you'd end up with many trivially small classes.
Internally in the class, get a reference to the proxy.
This can be done by injecting the proxy or with a hard-coded AopContext.currentProxy() call (see the Spring docs above).
This method allows you to avoid splitting the classes, but in many ways negates the advantages of using the transactional annotation. My personal opinion is that this is one of those things that is a little ugly, but the ugliness is self-contained, and it might be the pragmatic approach if lots of transactions are used.
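A sketch of the hard-coded variant; note that AopContext.currentProxy() only works when the AOP configuration exposes the proxy (expose-proxy="true"), otherwise it throws an exception:

import org.springframework.aop.framework.AopContext;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

public class SomeService {

    @Transactional
    public void doSomething() {
        // Fetch the current proxy and call through it so the
        // REQUIRES_NEW advice is applied.
        SomeService proxy = (SomeService) AopContext.currentProxy();
        proxy.doSomethingInNewTransaction();
    }

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void doSomethingInNewTransaction() {
        // transactional work here
    }
}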
Switch to using AspectJ
As AspectJ does not use proxies, self-invocation is not a problem.
This is a very clean method, though it comes at the expense of introducing another framework. I've worked on a large project where AspectJ was introduced for this very reason.
Don't use @Transactional at all
Refactor your code to use manual transaction demarcation, possibly using the decorator pattern.
An option, but one that requires moderate refactoring, introduces additional framework ties and increases complexity, so it's probably not a preferred option.
My Advice
Usually splitting up the code is the best answer, and it can also be a good thing for separation of concerns. However, if I had a framework/application that relied heavily on nested transactions, I would consider using AspectJ to allow self-invocation.
As always when modelling and designing complex use cases - focus on understandable and maintainable design and code. If you prefer a certain pattern or design but it clashes with the underlying framework, consider if it's worth a complex workaround to shoehorn your design into the framework, or if you should compromise and conform your design to the framework where necessary. Don't fight the framework unless you absolutely have to.
My advice - if you can accomplish your goal with such an easy compromise as to split out into a few extra service classes - do it. It sounds a lot cheaper in terms of time, testing and agony than the alternative. And it sure sounds a lot easier to maintain and less of a headache for the next guy to take over.
I usually keep it simple, so I split the code into two objects.
The alternative is to demarcate the new transaction yourself, if you need to keep everything in the same file, using a TransactionTemplate. A few more lines of code, but not more than defining a new bean. And it sometimes makes the point more obvious.
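For example, a sketch with TransactionTemplate (bean wiring omitted; the transaction manager is assumed to be injected):

import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionDefinition;
import org.springframework.transaction.support.TransactionTemplate;

public class SomeService {

    private final PlatformTransactionManager txManager;

    public SomeService(PlatformTransactionManager txManager) {
        this.txManager = txManager;
    }

    public void doSomething() {
        TransactionTemplate tx = new TransactionTemplate(txManager);
        tx.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRES_NEW);
        for (int i = 0; i < 3; i++) {
            // Each execute() call runs in its own new transaction;
            // no proxy or self-invocation is involved.
            tx.execute(status -> {
                // transactional work here
                return null;
            });
        }
    }
}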

How to expose an EJB as a web service that will later let me keep client compatibility when the EJB changes?

Lots of frameworks let me expose an EJB as a web service.
But then, 2 months after publishing the initial service, I need to change the EJB or some part of its interface. I still have clients that need to access the old interface, so I obviously need to have 2 web services with different signatures.
Does anyone have suggestions on how I can do this, preferably letting the framework do the grunt work of creating wrappers and copying logic (unless there's an even smarter way)?
I can choose the web service framework on the basis of this, so suggestions are welcome.
Edit: I know my change is going to break compatibility, and I am fully aware that I will need two services with different namespaces at the same time. But how can I do it in a simple manner?
I don't think you need any additional frameworks to do this. Java EE lets you expose the EJB directly as a web service (since EJB 2.1; see the example for J2EE 1.4), but with Java EE 5 it's even simpler:
@WebService
@SOAPBinding(style = Style.RPC)
public interface ILegacyService extends IOtherLegacyService {
    // the interface methods
    ...
}

@Stateless
@Local(ILegacyService.class)
@WebService(endpointInterface = "...ILegacyService", ...)
public class LegacyServiceImpl implements ILegacyService {
    // implementation of ILegacyService
}
Depending on your application server, you should be able to provide ILegacyService at any location that fits. As jezell said, you should try to put changes that do not change the contract directly into this interface. If you have additional changes, you may just provide another implementation with a different interface. Common logic can be pulled up into a superclass of LegacyServiceImpl.
I'm not an EJB guy, but I can tell you how this is generally handled in the web service world. If you have a non-breaking change to the contract (for instance, adding an optional property), then you can simply update the contract and consumers should be fine.
If you have a breaking change to a contract, then the way to handle it is to create a new service with a new namespace for its types. For instance, if your first service had a namespace of:
http://myservice.com/2006
Your new one might have:
http://myservice.com/2009
Expose this contract to new consumers.
How you handle the old contract is up to you. You might direct all the requests to an old server and let clients choose when to upgrade to the new servers. If you can use some amount of logic to upgrade the requests to the format that the new service expects, then you can rip out the old service's logic and replace it with calls to the new one. Or you might just deprecate it altogether and fail all calls to the old service.
PS: This is much easier to handle if you create message class objects rather than reusing domain entities.
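For instance, a hypothetical versioned message class; the wire contract can stay frozen even when the internal domain entity evolves:

// Dedicated message type for the 2006 contract. Changes to the internal
// Customer entity never leak into this class, so old clients keep working.
public class GetCustomerResponse2006 {

    private String customerId;
    private String displayName;

    public String getCustomerId() { return customerId; }
    public void setCustomerId(String customerId) { this.customerId = customerId; }

    public String getDisplayName() { return displayName; }
    public void setDisplayName(String displayName) { this.displayName = displayName; }
}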
OK, here goes:
It seems like dozer.sourceforge.net is an acceptable starting point for doing the grunt work of copying data between two parallel structures. I suppose a lot of web frameworks can generate client proxies that can be reused in a server context to maintain compatibility.
