Java Non-Blocking HTTP Server

I have written an application using embedded Jetty that makes network calls to other services.
I presume that the serving threads are idle whilst waiting for the network calls to complete.
Is there any way to have a worker thread switch between requests, performing whatever work can be done at the current time, and then also handle the network calls when they return? A response would be sent once all work for a request has completed.
I know this is a common paradigm, and I have used it for non-blocking TCP networking, but I'm unsure how to achieve it on a Java HTTP server whilst also waiting on external results.
Any links or explanations are appreciated.
Thanks
Update:
I'm using Membase and ElasticSearch (the only network calls). Membase returns "Future" objects and ElasticSearch returns "ListenableActionFuture". I'd like to be able to continue processing on a thread when these futures complete.
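To show the shape I'm after, here's a sketch (it assumes Servlet 3.0 async support and ElasticSearch's ActionListener callback; the helper class itself is hypothetical):

    import javax.servlet.AsyncContext;
    import org.elasticsearch.action.ActionListener;
    import org.elasticsearch.action.ListenableActionFuture;
    import org.elasticsearch.action.search.SearchResponse;

    // Hypothetical helper: free the serving thread and finish the request
    // only when the ElasticSearch call completes.
    public class SearchCompletion {
        public static void completeWhenDone(ListenableActionFuture<SearchResponse> future,
                                            final AsyncContext ctx) {
            future.addListener(new ActionListener<SearchResponse>() {
                public void onResponse(SearchResponse response) {
                    // write the response body here, then release the request
                    ctx.complete();
                }
                public void onFailure(Throwable t) {
                    ctx.complete(); // or report the error
                }
            });
        }
    }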

You may take a look at Deft, which is a single-threaded, asynchronous, event-driven web server.

Netty is a Java library that allows you to do asynchronous networking.
http://www.jboss.org/netty
Netty supports HTTP, but it is a fairly low-level library.
A higher-level library is Finagle by Twitter:
http://twitter.github.com/finagle/
Finagle is built on top of Netty and adds connection pooling, load balancing, and many other features. Finagle supports HTTP.
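For a feel of the level Netty operates at, here is a minimal HTTP server sketch against the Netty 3.x API linked above (a sketch only, not production code):

    import java.net.InetSocketAddress;
    import java.util.concurrent.Executors;
    import org.jboss.netty.bootstrap.ServerBootstrap;
    import org.jboss.netty.buffer.ChannelBuffers;
    import org.jboss.netty.channel.*;
    import org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory;
    import org.jboss.netty.handler.codec.http.*;
    import org.jboss.netty.util.CharsetUtil;

    public class MinimalHttpServer {
        public static void main(String[] args) {
            ServerBootstrap bootstrap = new ServerBootstrap(
                    new NioServerSocketChannelFactory(
                            Executors.newCachedThreadPool(),   // boss threads
                            Executors.newCachedThreadPool())); // worker threads
            bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
                public ChannelPipeline getPipeline() {
                    return Channels.pipeline(
                            new HttpRequestDecoder(),
                            new HttpResponseEncoder(),
                            new SimpleChannelUpstreamHandler() {
                                @Override
                                public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
                                    // every request gets the same canned answer
                                    HttpResponse res = new DefaultHttpResponse(
                                            HttpVersion.HTTP_1_1, HttpResponseStatus.OK);
                                    res.setContent(ChannelBuffers.copiedBuffer("hello\n", CharsetUtil.UTF_8));
                                    e.getChannel().write(res).addListener(ChannelFutureListener.CLOSE);
                                }
                            });
                }
            });
            bootstrap.bind(new InetSocketAddress(8080));
        }
    }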

If you want to do work at the same time as IO, I suggest you add a thread pool to perform the work. It is possible to re-use the existing threads, but it's a lot of extra work for possibly too little benefit.
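As a sketch of what I mean, assuming you already hold a Future from one of your network calls (all names here are placeholders):

    import java.util.concurrent.*;

    public class OverlapWorkWithIo {
        private static final ExecutorService pool = Executors.newFixedThreadPool(8);

        static String handle(Future<String> ioResult) throws Exception {
            // kick off the CPU-bound part on the pool while the IO is in flight
            Future<String> localWork = pool.submit(new Callable<String>() {
                public String call() {
                    return expensiveLocalComputation(); // placeholder
                }
            });
            // block only once, at the end, when both results are needed
            return combine(ioResult.get(), localWork.get());
        }

        static String expensiveLocalComputation() { return "local"; }
        static String combine(String a, String b) { return a + "/" + b; }
    }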

Related

Can we create an asynchronous REST service with the Restlet framework?

I am trying to understand the asynchronous component of the Restlet framework [2.3].
From the official site:
Fully multi-threaded design with per-request Resource instances to reduce thread-safety issues when developing applications.
Supports asynchronous request processing, decoupled from IO operations. Unlike the Servlet API, the Restlet applications don't have a direct control on the outputstream, they only provide output representation to be written by the server connector.
Supports non-blocking NIO modes to decouple the number of connections from the number of threads.
So I don't understand the difference between the first and second sentences. As I see it, requests are handled by some connector (a web container like Jetty, or NIO), and if the connector is asynchronous, we're done. And finally, I don't understand "non-blocking NIO mode": is NIO the same thing as the NIO connector?
Maybe somebody can give an example of an asynchronous server based on Restlet (the official site contains little information about this topic)?
Thanks.

Does synchronous servlet processing make sense for a distributed server-side application

The scope/context of this question:
I am to develop a Java/Java EE based distributed server-side application that is scalable (scale-up, rather than scale-out).
My application comprises servlets utilizing multiple instances of distributed back-end services for processing client requests. If I need to achieve more throughput, I want to be able to just add more instances of these distributed services (JVMs on the same or another machine) and (expect to) see an increase in throughput.
To achieve this, I was thinking of a loosely-coupled asynchronous system.
I thought I would use Async Servlets (Servlet 3.0) and an application-managed thread pool that places client requests on JMS queues, which would be picked up by one of the distributed service instances and processed. The responses can be relayed back to the client using JMS, from the service instances to a response thread in the servlet container. Roughly, the handoff I have in mind is sketched below.
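(Only a sketch: the queue name, the pending-request map and the payload handling are placeholders rather than a worked design.)

    import java.util.UUID;
    import java.util.concurrent.ConcurrentHashMap;
    import javax.annotation.Resource;
    import javax.jms.*;
    import javax.servlet.AsyncContext;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.*;

    @WebServlet(urlPatterns = "/work", asyncSupported = true)
    public class WorkServlet extends HttpServlet {
        // a response thread would look pending requests up by correlation id
        static final ConcurrentHashMap<String, AsyncContext> pending =
                new ConcurrentHashMap<String, AsyncContext>();

        @Resource(mappedName = "jms/requestQueue") // placeholder JNDI name
        private Queue requestQueue;
        @Resource
        private ConnectionFactory connectionFactory;

        protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws java.io.IOException {
            AsyncContext ctx = req.startAsync();
            String correlationId = UUID.randomUUID().toString();
            pending.put(correlationId, ctx);
            try {
                Connection c = connectionFactory.createConnection();
                try {
                    Session s = c.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    TextMessage m = s.createTextMessage(req.getParameter("payload"));
                    m.setJMSCorrelationID(correlationId);
                    s.createProducer(requestQueue).send(m);
                } finally {
                    c.close();
                }
            } catch (JMSException e) {
                pending.remove(correlationId);
                ctx.complete();
            }
        }
    }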
However, an asynchronous system seems to be (obviously) more complex than a synchronous one (e.g. error handling and relaying errors to the client, request tracking, etc.). I am also worried about the future maintainability of the design/code.
So a question arises: does it make sense to do this synchronously, while still remaining distributed, scalable and loosely coupled?
If the answer is yes, then please also share possible ways of achieving this (while remaining 'constructive').
If I can do this well in a synchronous way, then it will simplify the entire system.
I don't want to add complexity to the system unnecessarily.
(Assuming it makes sense) One possible implementation I could think of is using RMI.
For example: a service registry for the distributed service instances to register with, and a load balancer distributing the RMI calls across all available instances. But it feels like an old-generation solution. Are there any better options available?
Edit:
Other details about the scope of this question:
The client side is browser-based and does not demand an asynchronous server side.
I don't need server push.
At any time, I won't have more outstanding requests than the max worker threads of the popular web servers (even Apache).
For the above reasons, the use cases mentioned in a related question don't seem to apply to my scenario.
Loose coupling and distribution are independent of whether processing is synchronous or asynchronous.
With scalability, the matter is more complex. In a synchronous model, you will need one thread per pending request. If you need to scale to really high load (say, thousands of concurrent requests per server), an asynchronous model may scale better. To reap the benefit of that, however, the entire processing, starting from the handling of incoming connections, needs to be done in an asynchronous way. There is little point in having a synchronous request-processing thread delegate to an asynchronous thread pool and block until that thread pool has computed the result; after all, the request thread could just as well have done the work itself.
If you need to return a response, I'd therefore go for synchronous request processing whenever scalability permits (which it usually does).
Edit:
There are numerous ways to talk to the distributed backend servers. You might simply use EJB (which, if I recall correctly, uses RMI under the hood). Or you might use web services behind a load balancer.
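For illustration, the EJB route boils down to something like this (a sketch; the interface and bean are made up, and would live in separate source files):

    import javax.ejb.Remote;
    import javax.ejb.Stateless;

    // Hypothetical remote business interface; the container handles the
    // RMI-ish plumbing and, in a cluster, the load balancing.
    @Remote
    public interface OrderProcessor {
        String process(String order);
    }

    // The bean, in its own source file.
    @Stateless
    public class OrderProcessorBean implements OrderProcessor {
        public String process(String order) {
            return "processed:" + order; // placeholder for real work
        }
    }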

Duplex streaming in Java EE

I'm looking for a full duplex streaming solution with Java EE.
The situation: client applications (JavaFX) read data from a peripheral device. This data needs to be transferred in near real-time to a server for processing, and the client also needs to get the response back asynchronously, all while it keeps sending new data for processing.
Communication with the server needs to have as low an overhead as possible. The data coming in is basically sensor data, and after processing it is turned into what can be described as a set of commands.
What I've looked into:
A TCP/IP server (this is a non-Java EE approach). This would be the obvious solution. Two connections opened in parallel from each client app: one for upstream data and one for downstream data.
Remote & stateless EJBs. This would mean that there's no streaming involved and that I pack sensor data into smaller windows (1-2 seconds worth of sensor data) which I then send to the server for processing, getting the processing result as a response. While this approach is scalable, I am not sure how fast it will be, considering I have to make a request every 1-2 seconds. I still need to test this but I have my doubts.
RMI. Is this technically any different from EJBs?
Two servlets (up/down) with long polling. I've not done this before, so it's something to be tested.
For now I would like to test the performance of approach #2. The first solution will work for sure, but I'm not too fond of having a separate server (next to Tomcat, where I already have something running).
However, meanwhile, it would be worth knowing if there are any other Java specific (EE or not) technologies that could easily solve this. If anyone has an idea, then please share it.
This looks like a good place for using JMS. Instead of stateless EJBs, you will probably be using Message-Driven Beans.
This gives you an approach similar to your first solution, using two message queues instead of TCP/IP connections. JMS makes your communications fully asynchronous and is low-overhead in the sense that your clients can send messages as fast as they can regardless of how fast your server can consume them. You also get delivery guarantees and other JMS goodness.
Tomcat does not come with JMS, however. You might try TomEE or integrate your existing Tomcat with a JMS implementation like ActiveMQ.
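A sketch of what the consuming side could look like as a Message-Driven Bean (the destination name and payload handling are placeholders):

    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.MessageDriven;
    import javax.jms.*;

    @MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType",
                                  propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "destination",
                                  propertyValue = "jms/sensorData") // placeholder
    })
    public class SensorDataBean implements MessageListener {
        public void onMessage(Message message) {
            try {
                BytesMessage window = (BytesMessage) message;
                Destination replyTo = message.getJMSReplyTo();
                // process the 1-2 second sensor window here, then send the
                // resulting commands back on replyTo
            } catch (JMSException e) {
                throw new RuntimeException(e);
            }
        }
    }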
There are numerous options you could try. The appropriate solution depends on the nature of your application, the communication protocol, the data transfer type, the control you have over the client and server, and firewall restrictions on client-server routes.
There's not much info on this in your question, but given what you have provided, you may like to look at Netty as it is quite general-purpose and flexible and seems to fit your requirements. Netty also includes a duplex websocket implementation. Note that a Netty-based solution may be more complex to implement and require more background study than some other solutions (such as JMS).
Yet another possible solution is GraniteDS, which advertises a JavaFX client integration and multiple server integrations for full duplex client/server communication, though I have not used it. GraniteDS uses Comet (your two asynchronous servlets with long polling model) with the Action Message Format (AMF) for data, which you may be familiar with from Flex/Flash.
Have you looked at websockets as a solution? They keep a persistent connection, so asynchronous responses will be quick.

RMI alternatives for bidirectional asynchronous calls and callbacks through firewalls or NAT

I'm writing a server-client architecture based game in Java.
For design reasons, I would like to use asynchronous calls for passing client actions to the server, and also asynchronous callbacks for passing the result(s) of said actions back to the client. Asynchronous calls allow buffering of client actions. Queued buffering allows simple, basically single-threaded processing of client actions.
At the moment, my server and client code is pretty symmetric. They create a registry, then export and bind themselves.
Asynchronicity is achieved by buffering the incoming actions or results in a ConcurrentLinkedQueue. Actual processing is done by a thread running at regular intervals.
However, this current architecture does not work when clients are firewalled or behind a NAT. In this case the server simply can not reach clients to push results to them.
Furthermore, in this current architecture the server does not know which client sent a given action, unless a redundant layer of authentication or session handling is introduced. This allows forged actions and cheating.
I've been thinking about possible solutions but haven't found a proper one:
Client pull instead of server push. There could be a method on the server that the clients call periodically to fetch their results. However, this approach seems very ugly: it introduces additional delays, bandwidth overhead and timing issues. It does not solve action forgery either. Direct notifications are also very much preferred.
TCP connections by themselves allow bidirectional communication, and can definitely identify clients, so RMI or JRemoting might be hacked to support it, but I don't know how, and I'm not aware of any existing solution.
Message passing. I'm not sure whether message passing frameworks support authentication / sessions or client identification. I'd definitely lose remote methods though.
I believe the correct solution would be to find a remote method invocation framework that supports all of the above.
So in a nutshell, I'm searching for a way to:
call the server asynchronously or pass a message to it
call the client asynchronously or pass a message to it, even behind firewall or NAT
identify the client sending the action
preferably be able to call methods, not just pass messages
keep the ability to easily test it with JUnit and Mockito (multiple clients per machine)
Are there any remote method invocation frameworks with support for these? Which is the best?
I don't know why you would insist on using RMI or anything similar, as it is by definition unidirectional. But I had to learn a similar lesson... For one of my client-server systems, I implemented something similar to what you have now, using RMI and long polls. That turned out to be a horrible mess that just got worse and worse.
Then I found out about the wonderful world of publish-subscribe frameworks. These are a natural way to build a client-server application without the need to implement a lot of your own plumbing. Moreover, these frameworks support things like auto keepalives, time syncing, session authentication and permissions, and tons of other stuff that you wouldn't want to implement yourself.
For my project, I ripped out all of my own work and replaced it with CometD, which supports both Java and browser (Javascript) clients, and couldn't be happier. It would certainly support all your needs - asynchronous communication initiated from either side, client identification (and many other features), and clients behind NAT would not be a problem once a connection is established. Easy to write tests too, and the whole framework has been scaled up to be able to handle 100k clients, which would be impossible for RMI.
I would strongly suggest that you consider dropping the requirement to be able to call methods remotely. Methods are inherently one-sided, but they still require a call and return. It's much better to design your system with event-driven programming.
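For a flavour of the event-driven style, here is what a CometD Java client could look like (written from memory of the CometD 2.x client API, so treat the exact signatures as approximate; the channel names are made up):

    import java.util.HashMap;
    import java.util.Map;
    import org.cometd.bayeux.Message;
    import org.cometd.bayeux.client.ClientSessionChannel;
    import org.cometd.client.BayeuxClient;
    import org.cometd.client.transport.LongPollingTransport;

    public class GameClient {
        public static void main(String[] args) {
            BayeuxClient client = new BayeuxClient("http://localhost:8080/cometd",
                    LongPollingTransport.create(new HashMap<String, Object>()));
            client.handshake();
            client.waitFor(1000, BayeuxClient.State.CONNECTED);

            // results are pushed to us: no polling, and NAT is not a problem
            // because the client initiated the connection
            client.getChannel("/game/results").subscribe(new ClientSessionChannel.MessageListener() {
                public void onMessage(ClientSessionChannel channel, Message message) {
                    System.out.println("result: " + message.getData());
                }
            });

            // actions go up as events rather than method calls; the server
            // knows which authenticated session sent them
            Map<String, Object> action = new HashMap<String, Object>();
            action.put("type", "move");
            client.getChannel("/service/actions").publish(action);
        }
    }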
Update: I've since moved to the world of web apps, specifically using Meteor.

Non-blocking queue of HTTP POST requests with persistence

Before we develop our custom solution, I'm looking for some kind of library, which provides:
Non-blocking queue of HTTP requests
with these attributes:
Persisting requests to avoid their loss in case of:
network connectivity interruption
application quit, forced GC on a backgrounded app
etc.
Possibility of setting all of these fields:
Address
Headers
POST data
So please, is there anything usable right now that could save us a whole day of developing this?
Right now we don't need any callbacks on completed requests, nor saving of result data, as there won't be any.
In my humble opinion, a good and straightforward solution would be to develop your own layer (which shouldn't be so complicated) using a sophisticated framework for connection handling, such as Netty (https://netty.io/), together with a sophisticated framework for asynchronous processing, such as Akka (http://akka.io/).
Let's first look at Netty's HTTP support, from http://static.netty.io/3.5/guide/#architecture.8:
4.3. HTTP Implementation
HTTP is definitely the most popular protocol in the Internet. There are already a number of HTTP implementations such as a Servlet container. Then why does Netty have HTTP on top of its core?
Netty's HTTP support is very different from the existing HTTP libraries. It gives you complete control over how HTTP messages are exchanged at a low level. Because it is basically the combination of an HTTP codec and HTTP message classes, there is no restriction such as an enforced thread model. That is, you can write your own HTTP client or server that works exactly the way you want. You have full control over everything that's in the HTTP specification, including the thread model, connection life cycle, and chunked encoding.
And now let's dig inside Akka. Akka is a framework that provides an excellent abstraction on top of the Java concurrency API, and it comes with APIs in both Java and Scala.
It provides you a clear way to structure your application as a hierarchy of actors:
Actors communicate through message passing, using immutable messages, so you don't have to worry about thread safety
Actor messages are stored in mailboxes, which can be durable
Actors are responsible for supervising their children
Actors can run on one or more JVMs and can communicate using a wide range of protocols
It provides a lightweight abstraction for asynchronous processing, Future, which is easier to use than Java's Futures.
It provides other fancy stuff such as an Event Bus, a ZeroMQ adapter, remoting support, dataflow concurrency and a scheduler
Once you become familiar with the two frameworks, it turns out that what you need can easily be coded through them.
In fact, what you need is an HTTP proxy coded in Netty that, upon receiving a request, immediately sends a message to an Akka FSM actor (http://doc.akka.io/docs/akka/2.0.2/java/fsm.html) that uses a durable mailbox (http://doc.akka.io/docs/akka/2.0.2/modules/durable-mailbox.html).
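To make the Akka side a bit more concrete, the receiving actor could start as simply as this (a sketch against the Akka 2.0 Java API; QueuedRequest and the replay logic are hypothetical, and the FSM/durable-mailbox parts from the links above are mostly configuration):

    import akka.actor.ActorRef;
    import akka.actor.ActorSystem;
    import akka.actor.Props;
    import akka.actor.UntypedActor;

    public class RequestForwarder extends UntypedActor {
        // hypothetical immutable message carrying address, headers, POST body
        public static class QueuedRequest implements java.io.Serializable {
            public final String address, headers, body;
            public QueuedRequest(String address, String headers, String body) {
                this.address = address; this.headers = headers; this.body = body;
            }
        }

        @Override
        public void onReceive(Object msg) {
            if (msg instanceof QueuedRequest) {
                // replay the stored request against its target address here;
                // with a durable mailbox, messages survive an application quit
            } else {
                unhandled(msg);
            }
        }

        public static void main(String[] args) {
            ActorSystem system = ActorSystem.create("httpQueue");
            ActorRef forwarder = system.actorOf(new Props(RequestForwarder.class), "forwarder");
            forwarder.tell(new QueuedRequest("http://example.com", "", "data=1"));
        }
    }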
Here is a link to an open-source library that was the Master's thesis of a student at the Czech Technical University in Prague. It is a very large and powerful library and mainly focuses on location. The good thing about it, though, is that it omits the headers and other overhead that REST has.
It is the latest fork, and hopefully it will at least give you inspiration for your own solution.
How about these concurrent collections:
http://mcg.cs.tau.ac.il/projects/concurrent-data-structures
I hope the license is OK.
You'll want to have a look at these two posts (linked at the end of this answer).
Very basically, an approach that has worked well for me is to separate the requests from the queue and the executor.
Requests are executed as Runnables or Callables. Subclass them to create different kinds of requests to your API or service, setting headers and/or body before executing them.
Enqueue those requests in a queue (choose whichever fits you best; I'd say LinkedBlockingQueue will do the job) linked to an executor from within a bound service, and call them from your activity or any other scope. If you don't need responses or callbacks, you can skip using Guava for listening to futures or create your own callbacks.
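A minimal sketch of that wiring (the request class and its fields are placeholders; on Android this would live inside the bound service):

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.concurrent.*;

    public class RequestPipeline {
        // the LinkedBlockingQueue holds pending requests for the pool
        private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<Runnable>();
        private final ExecutorService executor =
                new ThreadPoolExecutor(4, 4, 0, TimeUnit.SECONDS, queue);

        // placeholder request type: address, headers and body set up front
        public static class PostRequest implements Runnable {
            private final String address, body;
            public PostRequest(String address, String body) {
                this.address = address; this.body = body;
            }
            public void run() {
                try {
                    HttpURLConnection conn =
                            (HttpURLConnection) new URL(address).openConnection();
                    conn.setRequestMethod("POST");
                    conn.setDoOutput(true);
                    OutputStream out = conn.getOutputStream();
                    out.write(body.getBytes("UTF-8"));
                    out.close();
                    conn.getResponseCode(); // fire and forget; no result kept
                } catch (Exception e) {
                    // a failed request could go to a retry queue (see update below)
                }
            }
        }

        public void submit(String address, String body) {
            executor.execute(new PostRequest(address, body));
        }
    }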
I'll stay tuned. If you need more depth I can post some specific pieces of code. There's the source of a basic example in the first link though.
http://ugiagonzalez.com/2012/08/03/using-runnables-queues-and-executors-to-perform-tasks-in-background-threads-in-android/
http://ugiagonzalez.com/2012/07/02/theres-life-after-asynctasks-in-android/
Update:
You can create another queue for those requests that could not be executed.
One approach that comes to mind would be to add all your failed requests to a retry queue. The retry queue would try to re-run these tasks while the phone still thinks there's some kind of internet connection available. In the request object you can set a maximum number of retries and compare it to a currentRetry counter, incrementing it on every retry.
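The bookkeeping could be as simple as this sketch (field names hypothetical):

    // Sketch: requests re-enqueue themselves until maxRetries is exhausted.
    public abstract class RetryableRequest implements Runnable {
        private final java.util.Queue<Runnable> retryQueue;
        private final int maxRetries = 3;
        private int currentRetry = 0;

        protected RetryableRequest(java.util.Queue<Runnable> retryQueue) {
            this.retryQueue = retryQueue;
        }

        protected abstract void execute() throws Exception;

        public void run() {
            try {
                execute();
            } catch (Exception e) {
                if (++currentRetry <= maxRetries) {
                    retryQueue.offer(this); // try again later, while online
                }
            }
        }
    }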
Mmm this might be interesting. I'll definitely think about including that in my library.
