JAX-RS 2.0 (Resteasy client) for server to server communication - java

This is a rather generic question, but would you use JAX-RS to communicate between two server services running on potentially two different hosts (leveraging the Resteasy client)?
Or would you stick to the more traditional EJB remote invocation?
I'm a bit worried about the following potential issues:
- maintaining a pool of HTTP connections - it will be per client rather than global to the application server
- no optimisation if both services are on the same host (EJB invocations would be local in this case)
- authorisation (credentials): managed by the application itself when configuring the RestClient vs. container managed for EJB
- what else?
Any feedback?
Thanks for your help.

Most JAX-RS implementations have a client API, so the setup should be easy if you share annotated interfaces between the two projects. Communication may be slower than with other solutions because you have to serialize/deserialize all parameters and responses, usually as XML or JSON. I wouldn't worry too much about optimizing inter-process communication: communicating with localhost is still far faster than with a remote machine. If you expect parts of this API to be public, REST would be the best option regardless of performance.
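A minimal sketch of the shared-interface setup described above, assuming JAX-RS 2.0 with the Resteasy client proxy (the Order type, paths, and host below are made-up examples, not part of the question):

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;
import org.jboss.resteasy.client.jaxrs.ResteasyWebTarget;

// A plain DTO serialized as JSON on the wire (hypothetical example type).
class Order {
    public long id;
    public String item;
}

// Shared between both services, e.g. in a common Maven artifact.
@Path("/orders")
public interface OrderService {
    @GET
    @Path("/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    Order getOrder(@PathParam("id") long id);
}

// On the server, a resource class implements OrderService; on the client,
// Resteasy builds a typed proxy from the very same interface.
class OrderClient {
    public static void main(String[] args) {
        // The cast works when Resteasy is the JAX-RS implementation on the classpath.
        ResteasyWebTarget target = (ResteasyWebTarget) ClientBuilder.newClient()
                .target("http://other-host:8080/api"); // hypothetical base URI
        OrderService orders = target.proxy(OrderService.class);
        Order order = orders.getOrder(42L); // plain HTTP GET + JSON underneath
    }
}
```

Both sides compile against the one interface, so a signature change breaks the build rather than failing at runtime.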
If the communication will only be internal and you really care about performance, you could use a more specialized framework like Protocol Buffers. JAX-RS is a Java EE standard and REST is well established, though, which might matter more than performance. For larger, more complex Java EE based systems, a common solution is to use messaging and integration frameworks like Apache ActiveMQ and Apache Camel, which also support JAX-WS/JAX-RS frameworks like Apache CXF and should have optimizations for inter-process communication. For small applications this seems like overkill, though.
I have never used EJB, so I can't really compare it to the other solutions. From what I've heard, the whole EJB approach is quite complex and hasn't been adopted very well in the industry. I would also worry a bit about cross-platform compatibility.
I would choose a solution that isn't too complex and is easy to set up. One last thing: in my experience, when you expect two applications to run on the same machine so often that you want to optimize for it, they probably should have been combined into a single server application in the first place, or maybe one of the servers should have been an optional plugin for the other one.


HTTP vs Thrift in microservices architecture

I have just started learning about microservices and I have a question that I cannot answer myself. (I'm also a Java developer.)
I have a situation like this:
I have a service A (an API service) that calls a Thrift service (named T1) to get data.
Then I have a service B that uses the data returned by A, parses it, generates some new data and, finally, returns it to the client.
The question is: which should I use?
Should B call A's API and parse the response (JSON data, for example) with HttpClient/AsyncHttpClient and a connection pool, or should B call T1 directly and repeat what A does?
IMHO, I think Thrift (with connection pooling too) is faster than an HTTP call. Am I right?
I see a lot of services that use HTTP internally, like Elasticsearch, Neo4j, Netflix Eureka, etc.
So, which one should I use? And why is HTTP so popular for internal use instead of RPC frameworks like Thrift, ProtoBuf, ...?
Sorry for my bad English.
Thank you in advance.
HTTP with JSON or XML is generally used because they're platform- and language-independent. An HTTP API allows for a RESTful architecture, which has proven to be a scalable model for developing distributed systems.
Historically, RPC-based approaches to distributed systems have shown a number of weak points:
- They're often language-dependent. Thrift and Protobuf are more interoperable, but they still depend on fairly specific third-party libraries. In comparison, there are many implementations of HTTP clients and of XML or JSON data bindings/processors.
- By tying the client and server together, upgrades can become difficult: the client often must be upgraded at the same time as the server. In a truly distributed network this can be impossible.
- RPC is often not a great metaphor for a distributed system. By abstracting the network away as an implementation concern, RPC frameworks tend to encourage low-level 'chatty' interfaces that either generate too much network traffic or are not resilient to unreliable networks.
- Binary transfer formats are more difficult to analyse/debug when something goes wrong.
For these kinds of reasons, people tend to choose REST-over-HTTP APIs over proprietary RPC APIs.
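To make the debuggability point concrete, here is a JDK-only sketch: a toy JSON-over-HTTP endpoint and a client call in one file. The `/status` path and payload are invented for illustration; the point is that the payload is plain text, trivially readable in a log or with curl, unlike a binary RPC frame.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PlainHttpDemo {
    // Starts an in-process HTTP server on a free loopback port, performs one
    // GET against it, and returns the JSON body ("" on any failure).
    public static String fetchStatus() {
        try {
            HttpServer server = HttpServer.create(new InetSocketAddress("127.0.0.1", 0), 0);
            server.createContext("/status", exchange -> {
                byte[] body = "{\"status\":\"ok\"}".getBytes();
                exchange.getResponseHeaders().add("Content-Type", "application/json");
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(body);
                }
            });
            server.start();
            try {
                HttpRequest request = HttpRequest.newBuilder(URI.create(
                        "http://127.0.0.1:" + server.getAddress().getPort() + "/status")).build();
                return HttpClient.newHttpClient()
                        .send(request, HttpResponse.BodyHandlers.ofString())
                        .body(); // human-readable JSON, no special tooling needed
            } finally {
                server.stop(0);
            }
        } catch (Exception e) {
            return "";
        }
    }

    public static void main(String[] args) {
        System.out.println(fetchStatus());
    }
}
```

Any language with an HTTP client and a JSON parser could consume this endpoint unchanged, which is the interoperability argument in the first bullet above.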

Moving from Spring HTTP invoker to load balanced solution

Our application currently uses Spring's HttpInvokerProxyFactoryBean to expose a Java service interface, with POJO requests and responses handled by our single Tomcat server. This solution allows us to have a pure Java client and server, sharing the same Java interface. Due to increased load, we are now looking into the possibility of load balancing across multiple Tomcat instances.
It would be nice if we could make this transition while retaining the same Java interface, as this would minimise the additional development required. Googling suggests that the most common solution for Tomcat load balancing is to use the Apache HTTP Server together with mod_jk, but I presume this would mean using some communication mechanism other than Spring's HTTP invoker? Is there a better solution that would allow us to retain more of our current code? If not, what would be involved in transitioning from what we have now to Apache/mod_jk?
Any help would be greatly appreciated as I don't have any experience in this matter.

Which common communication protocol to use for communicating with a Java layer in a web architecture

I am planning to design the architecture shown in the picture above for my website. I am building a core platform in Java that handles communication with the DB and other heavy processing tasks, and modules can hook into the core by means of defined interfaces.
Modules could be anything - a front-end website, an email box, admin consoles, etc. - and could be built on any technology, like PHP, Java, Ruby on Rails, etc.
Now, which communication protocol should I use between the modules and the core? It must be something that the majority of languages understand and that can easily handle two-way communication.
And if somebody finds any flaws in this architecture, kindly suggest a better one that provides great extensibility and flexibility.
I would use HTTP, exposing a REST API on the Core, as Thilo suggested.
The complexity lies in the trade-offs between the RPC (procedural) model of traditional web services and the resource model, which fits HTTP better (the verbs GET, POST, PUT and DELETE applied to URIs, complemented with some headers and a body).
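As a sketch of that resource model, here is how JAX-RS annotations map the verbs onto a single URI-addressed resource (the `/modules/{name}` path and the in-memory map are placeholders, not a proposal for the actual core):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.ws.rs.Consumes;
import javax.ws.rs.DELETE;
import javax.ws.rs.GET;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// One resource: the URI names the thing, the HTTP verb names the action.
@Path("/modules/{name}")
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public class ModuleResource {
    private static final Map<String, String> STORE = new ConcurrentHashMap<>();

    @GET
    public String read(@PathParam("name") String name) {
        return STORE.get(name);        // GET    /modules/mail -> fetch
    }

    @PUT
    public void replace(@PathParam("name") String name, String body) {
        STORE.put(name, body);         // PUT    /modules/mail -> create/replace
    }

    @DELETE
    public void remove(@PathParam("name") String name) {
        STORE.remove(name);            // DELETE /modules/mail -> delete
    }
}
```

POST would typically live on the collection resource (`/modules`) to create entries whose names the server assigns. Because the contract is just verbs and JSON, a PHP or Rails module needs no Java-specific library to participate.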
Yet this makes for a flexible, easy-to-maintain and portable distribution. Every single client module may be built on a completely different technology, which allows you to use "the best tool for the job".
Not to mention HTTP's advantages for caching, rewriting, load balancing, SSL, etc.
So basically this is a SOA-like architecture. Java EE with EJB (3+) or the Spring Framework comes to mind immediately.
The components (your "modules") are usually coupled via SOAP services, with an optional Enterprise Service Bus (ESB) between the frontend, the backend and the composite services.
Whether this is a good match for your case or simply oversized, no one but you can say.

How to avoid network call when REST client and server are on the same server

I have a web application in which two of the major components are the website (implemented in Groovy and Grails) and a backend RESTful web service (implemented using JAX-RS (Jersey) and Spring). Both of these will be running in Glassfish. The website will make calls to the RESTful web service. In many cases, these components will reside on separate servers, so the website will make calls over the network to the RESTful web service. If, however, I run both applications in the same Glassfish server, are there any optimizations that can be made to avoid the network call? In other words, I'm looking for some equivalent of EJB's remote/local interfaces for REST. Thanks!
Don't sweat the network call. Your traffic will generally never leave the local interface, so you won't be consuming any bandwidth. You lose a bit of performance from serialization/deserialization, but you'll need to ask yourself whether reducing that impact is worth developing a complicated proxy architecture. I think in most cases you'll find the answer to be no.
Not sure you will find any trivial solution: you could of course add your own additional proxy layer, but I really wouldn't worry about it. Local network I/O (localhost or 127.0.0.1) is so heavily optimized that you really won't notice.
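A JDK-only sketch of that claim: a one-byte round trip over the loopback interface, which never reaches a physical network device (the port is picked by the OS; the timing printout is informal, not a benchmark):

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class LoopbackEcho {
    // Sends one byte to an in-process echo thread over 127.0.0.1 and returns
    // whatever came back (-1 on any failure).
    public static int roundTrip() {
        try (ServerSocket server = new ServerSocket(0, 1, InetAddress.getLoopbackAddress())) {
            Thread echo = new Thread(() -> {
                try (Socket s = server.accept()) {
                    // Echo the single byte straight back to the sender.
                    s.getOutputStream().write(s.getInputStream().read());
                    s.getOutputStream().flush();
                } catch (IOException ignored) {
                }
            });
            echo.start();
            long start = System.nanoTime();
            int answer;
            try (Socket client = new Socket(InetAddress.getLoopbackAddress(),
                    server.getLocalPort())) {
                client.getOutputStream().write(42);
                client.getOutputStream().flush();
                answer = client.getInputStream().read();
            }
            long micros = (System.nanoTime() - start) / 1_000;
            echo.join();
            System.out.println("round trip took ~" + micros + " us");
            return answer;
        } catch (Exception e) {
            return -1;
        }
    }

    public static void main(String[] args) {
        System.out.println("echoed: " + roundTrip());
    }
}
```

On a typical machine the round trip is tens of microseconds, so for a website-to-REST-service call the serialization work, not the loopback hop, dominates.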
Depending on your implementation, Spring does support a number of remoting technologies (an old list is at http://static.springsource.org/spring/docs/2.0.x/reference/remoting.html), but you will find that the key to all of these is the network transfer: they wrap it up in a variety of ways, but ultimately almost all turnkey remoting technologies drop down to the network at some point. You may gain SOME efficiency by not using HTTP, but you will probably lose some of the loose coupling you gained by using Jersey.
If you are not too afraid of tight coupling, you could put the actual objects you expose via Jersey into a Glassfish-wide Spring context and invoke their methods directly - much tighter coupling, though, so I'd say stick with the HTTP calls.
Yes, you can avoid a network call if your server and client both reside in the same JVM. You should be able to use the Jersey Client API to create your own implementation of Connector to override the default HTTP calls and handle the request/response in process. Here is a blog post that can get you started - http://www.theotherian.com/2013/08/jersey-2.0-server-side-client-in-memory-connector.html
IMHO, unnecessary network overhead should be avoided at all costs. Even though this overhead may be only a few milliseconds, as you build features for your web application you will add more and more such service calls, and all those milliseconds will add up to a good amount of latency in your application.

Best Java supported server/client protocol?

I'm in the process of writing a client/server application which should be message-based. I would like to re-use as much as possible instead of writing another implementation, and I'm curious what others are using.
Features the library should offer:
- client- and server-side functionality
- should be message-based
- support for multi-threading
- should work behind load balancers / firewalls
I did several tests with HttpCore, but the bottom line is that one has to implement both the client and the server; only the transport layer would be covered. RMI is not an option either, due to the network-related requirements.
Any ideas are highly appreciated.
Details
My idea is to implement a client/server wrapper which handles the client communication (including user/password validation) and writes incoming requests to a JMS queue:
#1 User --> Wrapper (Check for user/password) --> JMS --> "Server"
#2 User polls Wrapper which polls JMS
Separate processes will handle the requests and can reply to the clients via the wrapper. I'd like to use JMS because:
it handles persistence quite well
load balancing - it's easy to handle peaks by adding additional servers as consumer
JMSTimeToLive comes in handy too
Unfortunately I don't see a way to use JMS on its own, because clients should only have access to their own messages, and setting up different users on the JMS side doesn't sound feasible either.
Well, HTTP is probably the best supported in terms of client and server code implementing it - but it may well be completely inappropriate based on your requirements. We'll need to see some actual requirements (or at least a vague idea of what the application is like) before we can really advise you properly.
RMI works nicely for us. There are limitations, such as not being able to call back to the client unless you can connect directly to that computer (it does not work if the client is behind a firewall). You can also easily wrap your communication in SSL or tunnel it over HTTP, which can itself be wrapped in SSL.
If you do end up using this, remember to always set the serial version of any class that is distributed to the client. You can set it to 1L when you create the class, or, if the client already has the class, use the serialver tool to discover the existing class's serial version. Otherwise, as soon as you change or add a public method or variable, compatibility with existing clients will break.
private static final long serialVersionUID = 1L;
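The effect of pinning the serial version can be seen with the JDK alone (the message classes below are invented for illustration):

```java
import java.io.ObjectStreamClass;
import java.io.Serializable;

// Pinned: the stream UID is fixed at 1L, so adding methods later keeps
// serialized data from old clients compatible.
class PinnedMessage implements Serializable {
    private static final long serialVersionUID = 1L;
    String payload;
}

// Unpinned: the JVM derives the UID from the class's shape (fields, methods,
// interfaces); adding a public method changes it and breaks deserialization
// of streams written against the old shape.
class UnpinnedMessage implements Serializable {
    String payload;
}

public class SerialVersionDemo {
    public static long pinnedUid() {
        return ObjectStreamClass.lookup(PinnedMessage.class).getSerialVersionUID();
    }

    public static void main(String[] args) {
        System.out.println("pinned   uid = " + pinnedUid()); // always 1
        System.out.println("computed uid = "
                + ObjectStreamClass.lookup(UnpinnedMessage.class).getSerialVersionUID());
    }
}
```

Running `serialver UnpinnedMessage` from the JDK prints the same computed value, which is what you would paste into the class to freeze compatibility after the fact.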
EDIT: Each RMI request that comes into the server gets its own thread. You don't have to handle this yourself.
EDIT: I think some details were added to the question later. You can tunnel RMI over HTTP, and then you could use a load balancer with it.
I've recently started playing with Hessian and it shows a lot of promise. It natively uses HTTP, which makes it simpler than RMI over HTTP, and it's a binary protocol, which means it's faster than all the XML-based protocols. It's very easy to get Hessian going. I recently did this by embedding Jetty in our app, configuring the Hessian servlet and making it implement our API interface. The great thing about Hessian is its simplicity... nothing like JMS or RMI over HTTP. There are also libraries for Hessian in other languages.
I'd say the best-supported, if not best-implemented, client/server communications package for Java is Sun's RMI (Remote Method Invocation). It's included with the standard Java class library, and gets the job done, even if it's not the fastest option out there. And, of course, it's supported by Sun. I implemented a turn-based gaming framework with it several years ago, and it was quite stable.
It is difficult to make a suggestion based on the information given, but perhaps the use of TemporaryQueues, i.e. dynamically created PTP destinations on a per-client basis, might fit the problem?
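A hedged sketch of that pattern, assuming the JMS 2.0 API (the connection factory and request queue would come from the environment; none of the names here are from the question). Only the connection that created a TemporaryQueue may consume from it, which gives exactly the per-client message isolation the question asks about:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TemporaryQueue;
import javax.jms.TextMessage;

// Each client owns a TemporaryQueue and advertises it via JMSReplyTo, so the
// shared request queue can fan work out to many servers while replies stay
// private to the requesting client.
class RequestReplyClient {
    static String call(ConnectionFactory factory, Queue requests, String body)
            throws JMSException {
        try (Connection connection = factory.createConnection()) {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            // The temporary queue lives and dies with this connection.
            TemporaryQueue replyQueue = session.createTemporaryQueue();

            TextMessage request = session.createTextMessage(body);
            request.setJMSReplyTo(replyQueue);
            session.createProducer(requests).send(request);

            // Blocks up to 5 s for the server's answer on our private queue.
            Message reply = session.createConsumer(replyQueue).receive(5_000);
            return reply == null ? null : ((TextMessage) reply).getText();
        }
    }
}
```

The server side would read `getJMSReplyTo()` from each request and send its response there, so no per-user destinations or JMS user accounts need to be provisioned.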
Here is a reasonable overview.
Have you tried RMI or CORBA? With both of them you can distribute your logic and create sessions.
Use Spring... then pick and choose the protocol.
We're standardizing on Adobe's AMF as we're using Adobe Flex/AIR in the client-tier and Java6/Tomcat6/BlazeDS/Spring-Framework2.5/iBATIS2.3.4/ActiveMQ-JMS5.2 in our middle-tier stack (Oracle 10g back-end).
Because we're standardizing on Flex client-side development, AMF and BlazeDS (now better coupled to Spring thanks to Adobe and SpringSource cooperating on the integration), are the most efficient and convenient means we can employ to interact with the server-side.
We also heavily build on JMS messaging in the data center - BlazeDS enables us to bridge our Flex clients as JMS topic subscribers. That is extremely powerful and effective.
Our Flex .swf and Java .class code is bundled into the same .jar file for deployment. That way the correct version of the client code will be deployed to interact with the corresponding middle-tier java code that will process client service calls (or messaging operations). That has always been a bane of client-server computing - making sure the correct versions of the respective tiers are hooked up to each other. We've effectively solved that age-old problem with our particular approach to packaging and deployment.
All of our client-server interactions work over HTTP/HTTPS ports 80 and 443. Even the server-side messaging push we do with BlazeDS bridged to our ActiveMQ JMS message broker.
