I have read a lot on this topic on SO and the web, but much of it is in older posts that seem problematic or outdated...
I want to expose my EJB business logic via a REST API, i.e. inject an EJB into a Jersey resource.
Using @EJB works fine, but there are people out there suggesting not to use @EJB for local beans.
There are different methods to inject beans into services with @Inject. The easiest (to me) seems to be the following:
@RequestScoped // This line is important!
@Path("service")
public class Rest {

    @Inject
    Bean beany;

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String get() {
        return beany.saySomething();
    }
}
Annotating the resource as a CDI bean does the job.
This discussion brought me to the solution, but it also mentions problems (the behaviour is not specified). I would like to know whether the situation is clearer by now.
I'm using the libraries shipped with GlassFish 4.
Is there a Java EE 7 best-practice way to achieve this? It's really hard to dig through outdated discussions.
Thanks in advance!
Really good question (+1). Java EE 7 is leaner and easier these days, but SO is not as up to date. There is a pattern that could be useful for you: the Boundary Pattern. Yes, it is just a POJO annotated with @Stateless (preferred in SOA environments) or @Stateful. You may start to think... why?
First, the boundary is the starting point of your application and exposes your services. In the REST philosophy you perform CRUD-like operations (remember: GET, POST, PUT, DELETE), and an EJB is exactly the kind of boundary (a Session Facade) that you want for transactional operations (or other powerful services such as asynchronous methods, message-driven beans, etc.).
So the EJB is your service, and you can inject it wherever you want: REST, SOAP, RMI, other CDI POJOs. Thanks to the spec, you are now able to inject an EJB with just @Inject, and the container will figure out that it is really a super-powerful EJB! Leaner? Impossible. Your example is just the right way to go; try to use @Inject as much as possible, even for beans exposed to JSF pages.
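To make that concrete, here is a minimal sketch of the boundary side, matching the Bean injected in the question (the method body is illustrative):

import javax.ejb.Stateless;

// The boundary: a plain annotated POJO. The container makes it
// transactional and injectable into the REST resource via @Inject.
@Stateless
public class Bean {

    public String saySomething() {
        // business logic goes here; it runs in a container-managed transaction
        return "something";
    }
}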
I checked out this SO post, which discusses using @RequestMapping on an interface. Although the post shows ways to achieve this, it does not mention the pros and cons of doing so.
Architecture-wise, is it a bad idea to use an interface for the controller?
What benefit would we gain in terms of polymorphism for the controller?
There is nothing wrong with putting @RequestMapping on the interface. However, make sure you have the right reasons to do it. Polymorphism is probably not a good reason; you will not have a different concrete implementation swapped in at runtime or anything like that.
On the other hand, for example, Swagger codegen generates interfaces with @RequestMapping and all the annotations on the methods, fields and return types (together with @Api definitions etc.). Your controller then implements this interface. In this case it makes a lot of sense, because it simply forces you to respect the Swagger / OpenAPI interface definition originally defined in Yaml. There is a nice side effect: it makes your controller much cleaner. (Clients can also use the same Yaml to generate their own client stubs for their own language frameworks.)
If you opt to do this, make sure you use the latest version of the Spring Framework, because there were some bugs which were fixed only very recently, where not all annotations were being inherited.
https://github.com/spring-projects/spring-framework/issues/15682
If you are stuck with an older Spring version, you might need to repeat the same annotations in your controller.
So, the real reason this would make sense is to enforce the interface contract, and separate the interface definition (together with any information pertaining to the interface) from the actual concrete implementation.
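As a rough sketch of that setup (the class and method names are illustrative, not from the post):

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

// UserApi.java - the contract: all mapping metadata lives on the interface,
// e.g. generated by Swagger codegen from the Yaml definition.
public interface UserApi {

    @GetMapping(value = "/users/{id}", produces = "application/json")
    UserDto getUser(@PathVariable("id") long id);
}

// UserController.java - stays clean; a recent Spring version inherits the
// mapping and parameter annotations from the interface.
@RestController
public class UserController implements UserApi {

    @Override
    public UserDto getUser(long id) {
        return new UserDto(id); // look the user up here
    }
}

// UserDto.java - illustrative payload type
public class UserDto {
    public final long id;
    public UserDto(long id) { this.id = id; }
}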
While some arguments against this are that
- the request mapping is an implementation detail, or
- since you only have one active controller implementation, you might as well put it on the implementation
(others will probably be provided in different answers soon), I was recently faced with the same decision whether to put JAX-RS annotations on the interface or the implementation. So, since everything always "depends" on some context, I want to give you an argument for putting the @RequestMapping (or e.g. @Path, etc. if not using Spring) on the interface:
If you are not using HATEOAS or discovering the endpoints via some other means, the endpoint URL, HTTP method, etc. are usually fixed and a static part of your backend API. Therefore, you might as well put them on an interface. This was the case for me, because I control both the client and the server side.
The controller usually has only one active implementation, so the reason for doing so is not polymorphism. But your implementation usually has a lot more dependencies than the plain interface. So if you export/provide only your interface to clients (e.g. in a separate jar/Java project/...), you provide only the things that the clients really require. In my specific case, I delivered the annotated interface so that a client implementation could scan it using a REST client library and detect the endpoint paths automatically.
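To sketch that client side, assuming RESTEasy as the REST client library (the proxy feature is RESTEasy-specific, and the interface here is illustrative):

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.client.ClientBuilder;
import org.jboss.resteasy.client.jaxrs.ResteasyWebTarget;

// GreetingApi.java - the shared contract, exported to clients in its own jar.
@Path("greetings")
public interface GreetingApi {

    @GET
    @Produces("text/plain")
    String greet();
}

// GreetingClient.java - RESTEasy derives the endpoint from the annotations,
// so the client code never spells out the path itself.
public class GreetingClient {
    public static void main(String[] args) {
        ResteasyWebTarget target =
                (ResteasyWebTarget) ClientBuilder.newClient().target("http://localhost:8080/api");
        GreetingApi api = target.proxy(GreetingApi.class);
        System.out.println(api.greet()); // issues GET /api/greetings
    }
}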
I am implementing a REST API using javax.ws.rs. An implementation goal is to be as secure as possible, so every input should be validated.
For input validation, I am implementing a public class ValidatingHttpRequest that implements HttpServletRequest.
I could identify 11 methods that are actually called; all the others now throw UnsupportedOperationException. However, some of those methods handle things apparently used by the REST framework. For example, my code does not care about headers, but getHeaders gets called. With a lot of reverse engineering I would be able to figure out which headers are used and should be validated, and of course I could do the validation, possibly introducing non-optimal behaviour and maybe some bugs. And there are some similar aspects of the HTTP request.
But has no one done this before, possibly someone who actually knows how the REST framework works? Or is it unnecessary, as the framework itself cannot be fooled?
So I am looking for a fully validating HttpServletRequest implementation, or a reasoning why it is unnecessary in this case. Of course I will validate the request body and parameters using the implementation.
I am implementing a REST API using javax.ws.rs. [...] For input validation, I am implementing a public class ValidatingHttpRequest that implements HttpServletRequest.
You are missing the whole point of JAX-RS. In JAX-RS, you deal with annotated resource classes and methods, so you don't need to write "low level" Servlets.
I am looking for a fully validating HttpServletRequest implementation, or a reasoning why it is unnecessary in this case.
You definitely don't want (and don't need) to write a Servlet for validation purposes.
JAX-RS implementations such as Jersey, RESTEasy and Apache CXF support Bean Validation, an annotation-based API to validate Java Beans. You can validate pretty much everything you need, including request headers, parameters and entities.
Check chapter 7 of the JAX-RS specification; it describes how validation works. However, to integrate Bean Validation with JAX-RS implementations, see the vendor-specific documentation:
Bean Validation with Jersey
Bean Validation with RESTEasy
Bean Validation with Apache CXF
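As a quick illustration, a hedged sketch of constraints on a JAX-RS resource (the parameter names and rules are made up for the example):

import javax.validation.constraints.NotNull;
import javax.validation.constraints.Pattern;
import javax.validation.constraints.Size;
import javax.ws.rs.GET;
import javax.ws.rs.HeaderParam;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;

@Path("users")
public class UserResource {

    @GET
    @Produces("text/plain")
    public String find(
            // query parameter: required, lowercase letters only
            @NotNull @Pattern(regexp = "[a-z]+") @QueryParam("name") String name,
            // request header: if present, must be exactly 32 characters long
            @Size(min = 32, max = 32) @HeaderParam("X-Api-Key") String apiKey) {
        // an invalid request is rejected (typically with 400) before this method runs
        return "Hello " + name;
    }
}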
The Injector has to be created at some point. Currently I create it in a static initializer block of my servlet, but somehow that looks fishy. Furthermore, I am calling getInstance directly from doGet to create all sorts of classes. That may not be too bad, but there are some limitations that I have to work around. So is there a better way?
Check out the Guice Servlet extension. You can @Inject your Guice-managed classes directly into your servlets, filter your requests, and also gain access to useful scopes like @RequestScoped and @SessionScoped.
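A minimal sketch of what that looks like, assuming guice-servlet is on the classpath (the servlet and service names are illustrative):

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.google.inject.Guice;
import com.google.inject.Inject;
import com.google.inject.Injector;
import com.google.inject.Singleton;
import com.google.inject.servlet.GuiceServletContextListener;
import com.google.inject.servlet.ServletModule;

// Creates the injector exactly once at context startup,
// replacing the static initializer in the servlet.
public class MyGuiceConfig extends GuiceServletContextListener {
    @Override
    protected Injector getInjector() {
        return Guice.createInjector(new ServletModule() {
            @Override
            protected void configureServlets() {
                bind(GreetingService.class).to(DefaultGreeting.class);
                serve("/app/*").with(MyServlet.class);
            }
        });
    }
}

interface GreetingService { String hello(); }

class DefaultGreeting implements GreetingService {
    public String hello() { return "hello from Guice"; }
}

@Singleton // Guice-managed servlets must be singletons
class MyServlet extends HttpServlet {
    @Inject private GreetingService greetings; // injected; no getInstance() in doGet

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.getWriter().write(greetings.hello());
    }
}

You also need to register com.google.inject.servlet.GuiceFilter in web.xml so that requests are routed through Guice.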
Here is what I am trying to do:
Create an interface (as an example):
@Path( "/" )
public interface Bubbles {

    @GET
    @Path( "blowBubble" )
    @Produces( "text/plain" )
    Bubble blowBubble();
}
Said interface should be deployed as a RESTful web service. I don't particularly care too much about the server side at this point; I'm mainly concerned with the client.
What I am looking for is a library where I can:
1) Implement the interface, without the interface knowing the full URL (knowing the server and port is obviously necessary; the path is in the interface, after all).
2) Automatically map 'Bubble' to JSON across the wire, with no JAXB annotations to add, no type converters to build, etc.
My problem is that the two libraries I have used do one or the other, but not both :(
The Restlet library does 2 but not 1; CXF does 1 but not 2.
Are there any libraries that do both?
I have submitted bugs for both, and the CXF devs seem adamant that 2 should not be a feature - I don't understand why.
Thanks in advance.
EDIT #1:
To clarify my intent: I would like to use REST as the backing transport mechanism for SOA in Java. This transport should, IMO, be transparent; if you have an annotated service interface to adhere to, then the client and server should not need to know anything about each other. They should operate on the contract. Furthermore, this API should be non-intrusive. Example: I find that annotating business objects/entities with JAXB IS intrusive (what if I can't modify the source?).
I think the best answer I can give you is: pick the best, most active stack and add your own changes to build the support you need. I do not believe there is a major player that meets your needs.
Restlet can implement the interface only if you use its own annotations (see ClientResource#create). I wrote my own code to handle JAX-RS annotations...
For the second point, I don't know about CXF. We were using Restlet with Jackson, whose JacksonJsonProvider implements the common JAX-RS interfaces MessageBodyWriter and MessageBodyReader. Perhaps you can register this class with CXF. This may work, since Jackson can work without annotations.
Actually, CXF does both. When you use JAX-RS, just annotate your method with
@Produces("application/json")
and you will get JSON output out of the box.
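For instance, a sketch using the Bubbles interface from the question (this assumes a JSON provider such as Jackson's JacksonJsonProvider is registered with CXF):

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;

@Path("/")
public class BubblesImpl implements Bubbles {

    @Override
    @GET
    @Path("blowBubble")
    @Produces("application/json") // the provider serializes Bubble to JSON; no JAXB annotations on Bubble
    public Bubble blowBubble() {
        return new Bubble();
    }
}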
Lots of frameworks let me expose an EJB as a web service.
But then, 2 months after publishing the initial service, I need to change the EJB or part of its interface. I still have clients that need to access the old interface, so I obviously need two web services with different signatures.
Does anyone have suggestions on how I can do this, preferably letting the framework do the grunt work of creating wrappers and copying logic (unless there's an even smarter way)?
I can choose the web service framework on the basis of this, so suggestions are welcome.
Edit: I know my change is going to break compatibility, and I am fully aware that I will need two services with different namespaces at the same time. But how can I do it in a simple manner?
I don't think you need any additional frameworks for this. Java EE lets you expose the EJB directly as a web service (since EJB 2.1; see the example for J2EE 1.4), but with Java EE 5 it's even simpler:
@WebService
@SOAPBinding(style = Style.RPC)
public interface ILegacyService extends IOtherLegacyService {
    // the interface methods
    ...
}

@Stateless
@Local(ILegacyService.class)
@WebService(endpointInterface = "...ILegacyService", ...)
public class LegacyServiceImpl implements ILegacyService {
    // implementation of ILegacyService
}
Depending on your application server, you should be able to provide ILegacyService at any location that fits. As jezell said, you should try to put changes that do not change the contract directly into this interface. If you have additional changes, you may just provide another implementation with a different interface. Common logic can be pulled up into a superclass of LegacyServiceImpl.
I'm not an EJB guy, but I can tell you how this is generally handled in the web service world. If you have a non-breaking change to the contract (for instance, adding an optional property), then you can simply update the contract and consumers should be fine.
If you have a breaking change to a contract, then the way to handle it is to create a new service with a new namespace for its types. For instance, if your first service had a namespace of:
http://myservice.com/2006
Your new one might have:
http://myservice.com/2009
Expose this contract to new consumers.
How you handle the old contract is up to you. You might direct all requests to an old server and let clients choose when to upgrade to the new servers. If you can use some amount of logic to upgrade requests to the format the new service expects, then you can rip out the old service's logic and replace it with calls to the new one. Or you might just deprecate it altogether and fail all calls to the old service.
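In JAX-WS terms, running the two contracts side by side could look roughly like this (the names are illustrative; targetNamespace is the standard @WebService element):

import javax.jws.WebService;

// Old contract, kept alive unchanged for existing consumers.
@WebService(name = "MyService", targetNamespace = "http://myservice.com/2006")
public interface MyService2006 {
    String lookup(String id);
}

// New, incompatible contract under the 2009 namespace.
// LookupRequest/LookupResult are illustrative message classes (see the PS below).
@WebService(name = "MyService", targetNamespace = "http://myservice.com/2009")
public interface MyService2009 {
    LookupResult lookup(LookupRequest request); // breaking change: new message types
}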
PS: This is much easier to handle if you create message class objects rather than reusing domain entities.
OK, here goes:
it seems like dozer.sourceforge.net is an acceptable starting point for doing the grunt work of copying data between two parallel structures. I suppose a lot of web service frameworks can generate client proxies that can be reused in a server context to maintain compatibility.
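For example, a hedged sketch of bridging an old contract to a new one with Dozer (all class names are illustrative):

import org.dozer.DozerBeanMapper;

// Adapter that keeps the old endpoint alive by delegating to the new service.
public class LegacyAdapter {

    private final DozerBeanMapper mapper = new DozerBeanMapper();
    private final NewService newService;

    public LegacyAdapter(NewService newService) {
        this.newService = newService;
    }

    public OldResponse lookup(OldRequest oldRequest) {
        // Dozer copies matching fields by name between the parallel structures.
        NewRequest request = mapper.map(oldRequest, NewRequest.class);
        NewResponse result = newService.lookup(request);
        return mapper.map(result, OldResponse.class);
    }
}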