What do I need to worry about when doing callbacks in RMI? I just need a simple client notification mechanism to avoid excessive polling.
I found an online example and it looks pretty straightforward, the client just implements an interface that extends Remote (like the server does) and passes it to the server, which can then call back its methods. I'm guessing the remote callback can occur on any thread, so I have to assume it will be asynchronous to my client application's normal threads. What else is there?
Two things.
RMI callbacks almost certainly won't work through firewalls.
RMI callbacks execute on a different thread from the original client call to the server. You can get unexpected synchronization deadlocks if you don't take that into account.
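For reference, a minimal sketch of the callback wiring (the ServerApi interface and its register method are just placeholders for whatever your server actually exposes):

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.server.UnicastRemoteObject;

// Interface the client exports so the server can call back into it.
interface ClientCallback extends Remote {
    void notifyEvent(String event) throws RemoteException;
}

// Placeholder for the server's remote interface.
interface ServerApi extends Remote {
    void register(ClientCallback callback) throws RemoteException;
}

class ClientCallbackImpl implements ClientCallback {
    // This runs on an RMI dispatch thread, not on the client's own threads,
    // so guard any state it shares with the rest of the client.
    @Override
    public synchronized void notifyEvent(String event) throws RemoteException {
        System.out.println("Server says: " + event);
    }
}

class CallbackSetup {
    // 'server' is the stub you already looked up; register(...) stands in
    // for whatever registration method your server's remote interface offers.
    static void hookUp(ServerApi server) throws RemoteException {
        ClientCallback stub =
                (ClientCallback) UnicastRemoteObject.exportObject(new ClientCallbackImpl(), 0);
        server.register(stub);
    }
}
```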
I'm currently creating a Java web application that will use WebSockets. The server would be a Java class called Server.java, annotated with @ServerEndpoint, and the client would be a web browser, so I'll most definitely access the WebSocket endpoint using JavaScript.
I need a Websocket because I want to notify the client(s) whenever something in the server changes. We have a utility class called the EventManager that manages all the events that happen in a subsystem. I plan to register Server.java as a dependent of EventManager so that whenever EventManager has something new, it will notify all of its dependents that this particular event happened.
Is this good practice? I thought about using AJAX/long polling, but I believe server-to-client push is exactly the behavior that needs to be supported here. And besides, there's no way for me to get the events from the database; I have to rely on the EventManager to notify my WebSocket endpoint.
Example scenario that I want:
Client A connects to Server.java
Client B connects to Server.java (by now, there will be two sessions active)
EventManager detects an event and notifies all instances of Server.java.
Server.java sends a message to all active Websocket sessions.
Browser retrieves data sent through Websocket using Javascript and displays it.
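Roughly, what I have in mind for Server.java looks like this sketch (the way EventManager reaches the broadcast method is just a placeholder for our own registration API):

```java
import java.io.IOException;
import java.util.Set;
import java.util.concurrent.CopyOnWriteArraySet;
import javax.websocket.OnClose;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/events")
public class Server {

    // The container creates one Server instance per connection by default,
    // so the set of sessions is shared statically across all instances.
    private static final Set<Session> sessions = new CopyOnWriteArraySet<>();

    @OnOpen
    public void onOpen(Session session) {
        sessions.add(session);
    }

    @OnClose
    public void onClose(Session session) {
        sessions.remove(session);
    }

    // Called by the EventManager when it detects an event.
    public static void broadcast(String event) {
        for (Session s : sessions) {
            if (s.isOpen()) {
                try {
                    s.getBasicRemote().sendText(event);
                } catch (IOException e) {
                    sessions.remove(s); // drop sessions we can no longer write to
                }
            }
        }
    }
}
```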
I have been told to use Node.js for this, but I am still pushing for a Java implementation since:
I have no experience at all with Node.js
Our EventManager class would be a real pain to convert into JavaScript for Node.js
It will work the way you propose. A few pointers:
Remove zombie Servers (Server.java) from your event manager.
This will work fine with a single server machine, but what happens with more? Client A and Client B may connect to different HTTP processes.
When a client refreshes the page, you lose the connection.
You can use the fact that the @ServerEndpoint can receive query parameters to pass state (see the sketch after this list).
Connections drop. Remember to implement a keep-alive.
You can only have one message encoder/decoder per websocket. Makes sense, but it requires a few ifs in the @OnMessage method.
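For the query-parameter pointer, a small sketch (the endpoint path and parameter name are just examples):

```java
import java.util.List;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/events")
public class Server {

    @OnOpen
    public void onOpen(Session session) {
        // The client connects with e.g. ws://host/app/events?clientId=42
        List<String> values = session.getRequestParameterMap().get("clientId");
        String clientId = (values == null || values.isEmpty()) ? "anonymous" : values.get(0);
        // Use clientId to re-associate a client that reconnects after a page refresh.
    }
}
```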
Here is a sample implementation of a chat server with multiple servers using websockets.
We currently have a server that creates a new thread for each request it gets; basically, the server receives data that it needs to save later.
Now we have been asked to add RMI so that we can observe what kind of data is currently being saved.
How can I best handle this? Should I make an RMI server for each thread? Can I have multiple instances of the same service at the same address and let my observer register with all of them?
I'm using the Google example for the RMI access:
https://sites.google.com/site/jamespandavan/Home/java/sample-remote-observer-based-on-rmi#TOC-Running-the-server-client
You don't need a remote object per thread, because you won't even have visible threads. A remote object is already multi-threaded and already takes care of its own incoming connections. You will be throwing stuff away rather than adding.
You might need a remote object per client, if you want them to behave like sessions, but that's a different story.
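A sketch of that shape: one exported remote object, which the existing request-handling threads report into and remote observers register with (all names here are made up):

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Callback interface implemented and exported by the observing client.
interface SaveObserver extends Remote {
    void dataSaved(String description) throws RemoteException;
}

// The single remote service; RMI dispatches concurrent calls to it on its own threads.
interface SaveMonitor extends Remote {
    void register(SaveObserver observer) throws RemoteException;
}

class SaveMonitorImpl implements SaveMonitor {
    private final List<SaveObserver> observers = new CopyOnWriteArrayList<>();

    @Override
    public void register(SaveObserver observer) throws RemoteException {
        observers.add(observer);
    }

    // The existing request-handling threads call this local method whenever they save data.
    void saved(String description) {
        for (SaveObserver o : observers) {
            try {
                o.dataSaved(description);
            } catch (RemoteException e) {
                observers.remove(o); // observer went away, drop it
            }
        }
    }
}

class MonitorBootstrap {
    static SaveMonitorImpl start() throws RemoteException {
        SaveMonitorImpl monitor = new SaveMonitorImpl();
        SaveMonitor stub = (SaveMonitor) UnicastRemoteObject.exportObject(monitor, 0);
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("saveMonitor", stub);
        return monitor; // hand this instance to the request threads so they can call saved(...)
    }
}
```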
I have implemented a server in Java; upon receiving data from a client, it simply forwards the data to all other clients (including the sender). I'm happy with my OO design: I wrap all sockets in classes that provide 'callbacks'. These are called when data is ready (or when the socket closes) -- using this design I could easily implement a simple TLV protocol to send packets atomically: the callback is not called until a full packet is received.
Now, I use the blocking I/O calls of the java.io package on the socket streams (and make them appear 'asynchronous' through those callbacks). So I use threads inside my socket wrapper classes: when a socket is opened, the wrapper returns a Runnable implementation that, when run, makes the blocking calls on the InputStream, buffers data and eventually calls the callback.
=> In a client application, I simply launch this Runnable in a Thread instance, because it's just one thread.
=> In my server, I submit all Runnable implementations I get upon creating new sockets (i.e. when accepting new clients) into a ThreadPoolExecutor. (FYI: the callbacks of the sockets simply put the received packets in a BlockingQueue. A single, separate (non-pooled) "dispatcher" Thread instance constantly takes the packets from this queue and writes them to all sockets currently connected to the server.)
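The dispatcher boils down to something like this (simplified; SocketWrapper stands for my own wrapper class):

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;

interface SocketWrapper {          // stands in for my real socket wrapper class
    void write(byte[] packet);
}

// Simplified dispatcher: the socket callbacks put received packets into the queue,
// and this single non-pooled thread drains it and fans each packet out to every client.
class Dispatcher implements Runnable {
    final BlockingQueue<byte[]> packets = new LinkedBlockingQueue<>();
    final List<SocketWrapper> clients = new CopyOnWriteArrayList<>();

    @Override
    public void run() {
        try {
            while (true) {
                byte[] packet = packets.take();     // blocks until a packet arrives
                for (SocketWrapper client : clients) {
                    client.write(packet);           // forward to every connected client
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();     // stop when interrupted
        }
    }
}
```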
QUESTION: This all works great, however I'm unsure about my use of the ThreadPoolExecutor, because the submitted Runnable instances are almost always blocking. Will the ThreadPoolExecutor react to this? Or will the pooled threads simply block? Because, if all pooled threads are blocking while executing their Runnables and then a new Runnable is submitted, what happens? Suspend the new Runnable? That's not good, because then the newly connected client will have zero responsiveness until some older client disconnects. If by contrast the thread pool chooses to spawn a new thread to handle the Runnable, then I actually get a thread-per-client scenario.
I want the thread pool to 'preempt' the blocking threads and use them to handle other sockets, like an operating system that suspends I/O bound processes and doesn't schedule them again until their I/O is complete. Is that at all possible, or will I have to rewrite everything using nio in order to do this? (if nio is required, could you point out where I should start reading?)
Thanks in advance!
About the ThreadPoolExecutor: it depends. An Executors.newCachedThreadPool() will just create new threads for new Runnables. See also this question and the accepted answer. But you will end up with a thread-per-client scenario.
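To make the difference concrete (just a sketch):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class PoolChoices {
    // Unbounded: each blocking Runnable gets its own thread, i.e. thread-per-client.
    ExecutorService cached = Executors.newCachedThreadPool();

    // Bounded: once all 10 threads are blocked in socket reads, newly submitted
    // Runnables wait in the queue, so a new client gets no service until a slot frees up.
    ExecutorService fixed = Executors.newFixedThreadPool(10);
}
```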
Nio prevents the thread-per-client scenario (if there are many clients sending relatively small messages with pauses in between; see also (the summary of) this article), but I advise against trying to build your own nio clone.
Implementing nio from the ground up is not easy; a tutorial can be found here. It might be easier to use a nio server like Netty.
Another alternative is to use a technology designed to handle many clients that send and receive small messages. It takes some time to learn and setup, but I managed to get a Tomcat WebSockets server talking with a Jetty WebSocket client pretty quickly. A rewrite to use this technology could be less work.
Suppose I have an RMI Client-Server application. Clients connect to the Server and at some point the Server starts a task. During the task Clients are doing some work, but at some other moment the Server must interrupt this work without letting the Clients finish it. Clients are implemented as Threads and the simplest solution looks like calling thread.interrupt(), but this does not work in RMI. Is there any other method or some workaround to resolve this problem? Thanks in advance.
You can implement a two-way remoting scheme in which, when a client performs the lookup for the server remote object and creates the local instance, it calls a method by which it passes a remote object of its own to the server. Then, when the server has finished its task (or needs the clients to stop), it can notify each client by calling a method on the remote object it received from that client.
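In outline (interface and method names are only examples):

```java
import java.rmi.Remote;
import java.rmi.RemoteException;

// Remote object exported by the client and handed to the server at registration time.
interface ClientControl extends Remote {
    void stopWork() throws RemoteException;
}

// Server-side remote interface the clients register with.
interface TaskServer extends Remote {
    void register(ClientControl client) throws RemoteException;
}

// Client-side implementation: instead of expecting Thread.interrupt() to cross RMI,
// the remote call just flips a flag that the client's worker thread checks regularly.
class ClientControlImpl implements ClientControl {
    private volatile boolean stopped = false;

    @Override
    public void stopWork() throws RemoteException {
        stopped = true;
    }

    boolean isStopped() {
        return stopped;
    }
}
```

The server keeps the ClientControl stubs it received and calls stopWork() on each of them when it wants the clients to abandon the task.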
I'm writing a multiplayer/multiroom game (Hearts) in Java, using RMI and a centralized server.
But there's a problem: RMI callbacks will not work because the clients are NATted and firewalled. I basically need the server to push data updates to clients, preferably without polling and without dropping down to raw sockets (I would like to code at a higher level).
In your opinion, what's the best solution for realizing this kind of architecture? Is an ajax application the only solution?
You say that you don't want polling, but AJAX is exactly that. You can look at Comet but it's hard to escape polling anyway (e.g. Comet itself uses polling underneath).
You could use a peer to peer framework such as JXTA.
I can suggest two main techniques:
The server has a method getUpdates, callable by clients. The method returns control to the client only when there is an update to show.
When clients register, they pass the server a remote callback object of their own.
Since this object is not registered in any RMI registry, there should not be any issue with NATted clients.
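For the first technique, roughly (a sketch; the single shared queue just keeps it short, a real server would keep one per client):

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

interface GameServer extends Remote {
    // Blocks server-side until there is something to report, then returns it;
    // the client just calls it again in a loop. The client initiates every
    // connection, so NAT and firewalls are not a problem.
    String getUpdates(String playerId) throws RemoteException;
}

class GameServerImpl implements GameServer {
    private final BlockingQueue<String> updates = new LinkedBlockingQueue<>();

    // Game logic calls this whenever there is something for the clients.
    void publish(String update) {
        updates.add(update);
    }

    @Override
    public String getUpdates(String playerId) throws RemoteException {
        try {
            return updates.take(); // the remote call simply does not return until an update exists
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RemoteException("interrupted while waiting for updates", e);
        }
    }
}
```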
I'm not sure how (or if) AJAX works for a non-browser-based app. You could just maintain your own pool of socket connections, open for the duration of the application, with a thread per connection.
If you need to scale to a lot of concurrent connections, look to a non-blocking I/O framework like Apache Mina or Netty (related SO post: Netty vs Apache MINA).