Moving from Spring HTTP invoker to load balanced solution - java

Our application currently uses Spring's HttpInvokerProxyFactoryBean to expose a Java service interface, with POJO requests and responses handled by our single Tomcat server. This solution allows us to have a pure Java client and server, sharing the same Java interface. Due to increased load, we are now looking into the possibility of load balancing across multiple Tomcat instances.
It would be nice if we could make this transition while retaining the same Java interface, as this would minimise the additional development required. Googling suggests that the most common solution for Tomcat load balancing is to use Apache HTTP Server together with mod_jk, but I presume this would mean using some communication mechanism other than Spring's HTTP invoker? Is there a better solution which would allow us to retain more of our current code? If not, what would be involved in transitioning from what we have now to Apache/mod_jk?
Any help would be greatly appreciated as I don't have any experience in this matter.
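For context, here is a minimal sketch (assuming a hypothetical OrderService interface and URL, neither of which is from the question) of the client-side HTTP invoker wiring described above. Because the proxy only needs a service URL, pointing it at a load balancer's address instead of a single Tomcat instance is a configuration change rather than a code change:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.remoting.httpinvoker.HttpInvokerProxyFactoryBean;

// Hypothetical shared service interface (identical on client and server).
interface OrderService {
    String findOrder(String id);
}

@Configuration
class HttpInvokerClientConfig {

    @Bean
    public HttpInvokerProxyFactoryBean orderService() {
        HttpInvokerProxyFactoryBean proxy = new HttpInvokerProxyFactoryBean();
        // Point at the balancer (or VIP) rather than an individual Tomcat instance.
        proxy.setServiceUrl("http://loadbalancer.example.com/remoting/OrderService");
        proxy.setServiceInterface(OrderService.class);
        return proxy;
    }
}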

Related

JAX-RS 2.0 (Resteasy client) for server to server communication

This is a rather generic question, but would you use JAX-RS to communicate between two services running on potentially two different hosts (leveraging the Resteasy client)?
Or would you stick to the more traditional EJB remote invocation?
I'm a bit worried about the following potential issues:
- maintaining a pool of HTTP connections - it will be per client rather than global to the application server
- no optimisation if both services are on the same host (EJB invocations would be local in this case)
- authorisation (credentials): managed by the application itself when configuring the RestClient vs. container managed for EJB
- what else?
Any feedback?
Thanks for your help.
Most implementations of JAX-RS have a client API, so the setup should be easy if you share annotated interfaces between the two projects. Communication may be slower than with other solutions because you have to serialize/deserialize all parameters and responses, usually in formats like XML or JSON. I wouldn't worry too much about optimizing inter-process communication, as communicating with localhost is still far faster than with a remote machine. If you expect to make parts of this API public, REST would be the best option, regardless of performance.
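As a hedged sketch of that shared-interface approach, using the Resteasy 3.x proxy client (the PingService interface and URL are illustrative, not from the question):

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import org.jboss.resteasy.client.jaxrs.ResteasyClient;
import org.jboss.resteasy.client.jaxrs.ResteasyClientBuilder;
import org.jboss.resteasy.client.jaxrs.ResteasyWebTarget;

public class PingClient {

    // Annotated interface shared between the two projects;
    // the providing service implements it as a normal JAX-RS resource.
    @Path("/ping")
    public interface PingService {
        @GET
        @Produces(MediaType.TEXT_PLAIN)
        String ping();
    }

    public static void main(String[] args) {
        ResteasyClient client = new ResteasyClientBuilder().build();
        ResteasyWebTarget target = client.target("http://other-service:8080/api");
        PingService ping = target.proxy(PingService.class); // typed proxy over HTTP
        System.out.println(ping.ping());
        client.close();
    }
}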
If the communication will only be internal and you really care about performance, you could use a more specialized framework like Protocol Buffers. JAX-RS is a Java EE standard and REST is well established, though, which might be more important than performance. For larger, more complex, Java EE-based systems, a common solution would be to use messaging and integration frameworks like Apache ActiveMQ and Apache Camel, which also support JAX-WS/JAX-RS frameworks like Apache CXF and should offer optimizations for inter-process communication. For small applications this seems like overkill, though.
I never used EJB, so I can't really compare it to other solutions. From what I've heard, the whole EJB approach is way too complex and hasn't been adopted very well in the industry. I would also worry a bit about cross-platform compatibility.
I would choose a solution that isn't too complex and is easy to set up. One last thing: in my experience, when you expect two applications to be running on the same machine so often that you want to optimize for it, they probably should have been combined into a single server application in the first place, or maybe one of the servers should have been an optional plugin for the other.

RMI and web services

Currently I have a web app built with Struts2 and Spring (IoC, transactions), and I want to split it into two apps: one client that will contain only the web part, and one core service that will be accessed via web services and/or RMI.
I have a dilemma about which technology to use as "glue". I like the fact that web services can be accessed by any client (PHP, .NET, ..., mobile), but as I understand it Java RMI is faster than web services.
I was thinking of exposing the functionality via web services and RMI at the same time... but I do not know how to do that.
Also, in my current app I have an Ajax action that is executed every second from the client to the server, and in this new configuration I think there will be some performance penalties because of this.
How should I "attack" this situation?
Thanks,
"...but as I understand it Java RMI is faster than web services."
Why do you think this? Do you have a citation to bolster this claim?
Both RMI and web services use TCP/IP, so both incur similar network latency. The former uses Java (or CORBA) serialization to send messages over the wire; the latter uses plain HTTP (for REST) or XML over HTTP (for SOAP or XML-RPC).
The relative speed is far more dependent on what those services are doing and how you code them.
I would prefer a web service, because simple and open wins. If you use RMI you are restricted to RMI/CORBA clients.
Nice. You are running Spring, so you already have all you need. Just throw in a few jars (Spring Web Services and related jars) and you should be good to go.
Please refer to:
http://static.springsource.org/spring/docs/2.5.4/reference/ejb.html
http://static.springsource.org/spring/docs/2.5.4/reference/remoting.html
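To give a concrete, hedged sketch of exposing the same Spring bean over more than one remoting protocol at once, here is one possible configuration using Spring's remoting exporters. CoreService, the port and the URL mapping are illustrative assumptions; note that HTTP invoker is Java-to-Java only, so a genuinely cross-platform web service (for PHP/.NET clients) would instead use something like Spring Web Services or JAX-WS, as described in the reference documentation linked above:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.remoting.httpinvoker.HttpInvokerServiceExporter;
import org.springframework.remoting.rmi.RmiServiceExporter;

@Configuration
class RemotingConfig {

    // Hypothetical core service interface and implementation.
    public interface CoreService {
        String process(String input);
    }

    @Bean
    public CoreService coreService() {
        return input -> "processed: " + input;
    }

    // RMI endpoint (fast, but restricted to Java/RMI clients).
    @Bean
    public RmiServiceExporter rmiExporter(CoreService coreService) {
        RmiServiceExporter exporter = new RmiServiceExporter();
        exporter.setServiceName("CoreService");
        exporter.setServiceInterface(CoreService.class);
        exporter.setService(coreService);
        exporter.setRegistryPort(1099);
        return exporter;
    }

    // HTTP-based endpoint for the same bean; map the bean name "/CoreService"
    // through a DispatcherServlet so it is reachable over plain HTTP as well.
    @Bean(name = "/CoreService")
    public HttpInvokerServiceExporter httpExporter(CoreService coreService) {
        HttpInvokerServiceExporter exporter = new HttpInvokerServiceExporter();
        exporter.setServiceInterface(CoreService.class);
        exporter.setService(coreService);
        return exporter;
    }
}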

Configuring JBoss to create one process per HTTP session?

In a web application I am developing, I am using a third party Java library (JPL) that uses JNI to connect to an external application: a Prolog engine.
For the nature of my problem, I need to have one Prolog engine per HTTP session. But as far as I know, the library I am using only lets me work with one Prolog engine per Java VM.
In order to solve this issue I came up with the idea of trying to configure JBoss to launch a new process (instead of just a new thread) for each HTTP session, a bit like CGI, where normally one process is started per HTTP request.
In this way, certain servlets could use the required JNI-based library without having to worry about synchronization issues on its side, since as I expect (and hope I'm not wrong about that), each of them would have an independent Prolog engine with different state (e.g., different asserted Prolog facts).
Is it possible to configure JBoss (or another servlet container) in this way? Any feedback or pointers will be highly appreciated!
To my knowledge this is not possible. However, looking at the documentation http://www.swi-prolog.org/packages/jpl/java_api/high-level_interface.html#Multi-Threaded%20Queries the only problem seems to be that you can have only one open query per VM.

How to avoid network call when REST client and server are on the same server

I have a web application in which two of the major components are the website (implemented in Groovy and Grails) and a backend RESTful web service (implemented using JAX-RS (Jersey) and Spring). Both of these will be running in Glassfish. The website will make calls to the RESTful web service. In many cases, these components will reside on separate servers, so the website will make calls over the network to the RESTful web service. If, however, I run both applications in the same Glassfish server, are there any optimizations that can be made to avoid the network call? In other words, I'm looking for some equivalent of EJB's remote/local interfaces for REST. Thanks!
Don't sweat the network call. Your traffic will generally never leave the local interface, so you won't be consuming any bandwidth. You lose a bit of performance from serialization/deserialization, but you'll need to ask yourself whether reducing that impact is worth developing a complicated proxy architecture. I think in most cases you'll find the answer to be no.
Not sure you will find any trivial solutions: you could of course add your own additional proxy layer, but I really wouldn't worry about it. Local network I/O (localhost or 127.0.0.1) is so heavily optimized anyway that you really won't notice.
Depending on your implementation Spring does support a number of remoting technologies (an old list is at http://static.springsource.org/spring/docs/2.0.x/reference/remoting.html), but you will find that key to all of these is the network transfer: they wrap it up in a variety of different ways but ultimately almost all turnkey remoting technologies drop into the network at some point in time. You may gain SOME efficiency by not having to use HTTP, but you will probably lose some of the loose coupling you gained by using Jersey.
If you are not too afraid of tight coupling, maybe you can put the actual objects you are exposing via Jersey into a Glassfish-wide Spring context and invoke the methods directly: much tighter coupling though, so I'd say stick with the HTTP calls.
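As a rough sketch of that tighter-coupled option (class names like CatalogService and CatalogResource are purely illustrative): the Jersey resource stays a thin wrapper around a plain service bean, and a co-located caller simply looks up and invokes the same bean, bypassing HTTP entirely.

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;

// Plain service interface registered in the shared (e.g. Glassfish-wide) Spring context.
interface CatalogService {
    String findItem(String id);
}

// Remote callers reach the service through Jersey over HTTP...
@Path("/items")
public class CatalogResource {
    private final CatalogService service;

    public CatalogResource(CatalogService service) {
        this.service = service;
    }

    @GET
    @Path("{id}")
    public String get(@PathParam("id") String id) {
        return service.findItem(id);
    }
}

// ...while a caller in the same JVM can skip Jersey and call the bean directly:
//   CatalogService service = sharedContext.getBean(CatalogService.class);
//   String item = service.findItem("42");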
Yes, you can avoid a network call if your server and client both reside in the same JVM. You should be able to use the Jersey Client API to create your own implementation of Connector to override the default HTTP calls and handle request/response in-process. Here is a blog post that can get you started - http://www.theotherian.com/2013/08/jersey-2.0-server-side-client-in-memory-connector.html
IMHO, unnecessary network overhead should be avoided at all costs. Even though this overhead may be only a few milliseconds, as you build features for your web application the number of such service calls will grow, and all those milliseconds will add up to a good amount of latency in your application.

Best Java supported server/client protocol?

I'm in the process of writing a client/server application which should work message-based. I would like to re-use as much as possible instead of writing another implementation, and I'm curious what others are using.
Features the library should offer:
client- and server-side functionality
should be message-based
support for multi-threading
should work behind load balancers / firewalls
I did several tests with HttpCore, but the bottom line is that one has to implement both client and server; only the transport layer would be covered. RMI is not an option either, due to the network-related requirements.
Any ideas are highly appreciated.
Details
My idea is to implement a client/server wrapper which handles the client communication (including user/password validation) and writes incoming requests to a JMS queue:
#1 User --> Wrapper (Check for user/password) --> JMS --> "Server"
#2 User polls Wrapper which polls JMS
Separate processes will handle the requests and can reply via wrapper to the clients. I'd like to use JMS because:
it handles persistence quite well
load balancing - it's easy to handle peaks by adding additional servers as consumer
JMSTimeToLive comes in handy too
Unfortunately I don't see a way to use JMS on its own, because clients should only have access to their own messages, and setting up different users on the JMS side doesn't sound feasible either.
Well, HTTP is probably the best supported in terms of client and server code implementing it - but it may well be completely inappropriate based on your requirements. We'll need to actually see some requirements (or at least a vague idea of what the application is like) before we can really advise you properly.
RMI works nicely for us. There are limitations, such as not being able to call back to the client unless you can connect directly to that computer (does not work if client is behind a firewall). You can also easily wrap your communication in SSL or tunnel it over HTTP which can be wrapped in SSL.
If you do end up using this, remember to always set the serial version of any class that is distributed to the client. You can set it to 1L when you create the class, or, if the client already has the class, use the serialver tool to discover the existing class's serial. Otherwise, as soon as you change or add a public method or variable, compatibility with existing clients will break.
static final long serialVersionUID = 1L;
EDIT: Each RMI request that comes into the server gets its own thread. You don't have to handle this yourself.
EDIT: I think some details were added later in the question. You can tunnel RMI over HTTP, then you could use a load balancer with it.
I've recently started playing with Hessian and it shows a lot of promise. It natively uses HTTP, which makes it simpler than RMI over HTTP, and it's a binary protocol, which means it's faster than all the XML-based protocols. It's very easy to get Hessian going. I recently did this by embedding Jetty in our app, configuring the Hessian servlet and making it implement our API interface. The great thing about Hessian is its simplicity... nothing like JMS or RMI over HTTP. There are also libraries for Hessian in other languages.
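As a hedged illustration of that setup (the EchoService interface, URL and servlet mapping are assumptions, not details from the answer), the Hessian pattern is essentially: extend HessianServlet on the server and build a dynamic proxy with HessianProxyFactory on the client.

import com.caucho.hessian.client.HessianProxyFactory;
import com.caucho.hessian.server.HessianServlet;

public interface EchoService {
    String echo(String message);
}

// Server side: the servlet implements the shared API interface
// (map it to e.g. /echo in the embedded Jetty or web.xml).
class EchoServiceImpl extends HessianServlet implements EchoService {
    @Override
    public String echo(String message) {
        return "echo: " + message;
    }
}

// Client side: obtain a typed proxy bound to the servlet URL.
class EchoClient {
    public static void main(String[] args) throws Exception {
        HessianProxyFactory factory = new HessianProxyFactory();
        EchoService echo = (EchoService) factory.create(
                EchoService.class, "http://localhost:8080/echo");
        System.out.println(echo.echo("hello"));
    }
}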
I'd say the best-supported, if not best-implemented, client/server communications package for Java is Sun's RMI (Remote Method Invocation). It's included with the standard Java class library, and gets the job done, even if it's not the fastest option out there. And, of course, it's supported by Sun. I implemented a turn-based gaming framework with it several years ago, and it was quite stable.
It is difficult to make a suggestion based on the information given, but possibly the use of TemporaryQueues, i.e. dynamically created PTP destinations on a per-client basis, might fit the problem?
Here is a reasonable overview.
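A hedged sketch of that request/reply pattern with temporary queues (connection setup and the queue name are illustrative; the calls are standard JMS 1.1 API), showing how each client only ever sees its own replies:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TemporaryQueue;
import javax.jms.TextMessage;

public class RequestReplyClient {

    public static String call(ConnectionFactory factory, String payload) throws JMSException {
        Connection connection = factory.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            Queue requestQueue = session.createQueue("service.requests");
            // The temporary queue is private to this connection, so other clients
            // cannot read the replies addressed to it.
            TemporaryQueue replyQueue = session.createTemporaryQueue();

            TextMessage request = session.createTextMessage(payload);
            request.setJMSReplyTo(replyQueue); // the server-side consumer replies here

            session.createProducer(requestQueue).send(request);

            Message reply = session.createConsumer(replyQueue).receive(5000);
            return (reply instanceof TextMessage) ? ((TextMessage) reply).getText() : null;
        } finally {
            connection.close();
        }
    }
}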
Have you tried RMI or CORBA? With both of them you can distribute your logic and create sessions.
Use Spring... then pick and choose the protocol.
We're standardizing on Adobe's AMF as we're using Adobe Flex/AIR in the client-tier and Java6/Tomcat6/BlazeDS/Spring-Framework2.5/iBATIS2.3.4/ActiveMQ-JMS5.2 in our middle-tier stack (Oracle 10g back-end).
Because we're standardizing on Flex client-side development, AMF and BlazeDS (now better coupled to Spring thanks to Adobe and SpringSource cooperating on the integration), are the most efficient and convenient means we can employ to interact with the server-side.
We also heavily build on JMS messaging in the data center - BlazeDS enables us to bridge our Flex clients as JMS topic subscribers. That is extremely powerful and effective.
Our Flex .swf and Java .class code is bundled into the same .jar file for deployment. That way the correct version of the client code will be deployed to interact with the corresponding middle-tier java code that will process client service calls (or messaging operations). That has always been a bane of client-server computing - making sure the correct versions of the respective tiers are hooked up to each other. We've effectively solved that age-old problem with our particular approach to packaging and deployment.
All of our client-server interactions work over HTTP/HTTPS ports 80 and 443 - even the server-side messaging push we do with BlazeDS bridged to our ActiveMQ JMS message broker.
