Shutting down an RMI server cleanly - Java

I am running into some trouble when shutting down the server component and was hoping to get some help.
My server code looks as follows; it has a method to shut down the server.
Server
private final String address = "127.0.0.1";
private Registry registry;
private int port = 6789;

public RmiServer() throws RemoteException {
    try {
        registry = LocateRegistry.createRegistry(port);
        registry.rebind("rmiServer", this);
    } catch (RemoteException e) {
        logger.error("Unable to start the server. Exiting the application.", e);
        System.exit(-1);
    }
}

public void shutDownServer() throws RemoteException {
    int succesful = 0;
    try {
        registry.unbind("rmiServer");
        UnicastRemoteObject.unexportObject(this, true);
        Thread.sleep(1000);
    } catch (NotBoundException e) {
        logger.error("Error shutting down the server - could not unbind the registry", e);
        succesful = -1;
    } catch (InterruptedException e) {
        logger.info("Unable to sleep when shutting down the server", e);
        succesful = -1;
    } catch (AccessException e) {
        logger.info("Access Exception", e);
        succesful = -1;
    } catch (UnmarshalException e) {
        System.out.println(e.detail.getMessage());
        logger.info("UnMarshall Exception", e);
        succesful = -1;
    } catch (RemoteException e) {
        System.out.println(e.detail.getMessage());
        logger.info("Remote Exception", e);
        succesful = -1;
    }
    logger.info("server shut down gracefully");
    System.exit(succesful);
}
My client connects fine with no issues, so to shut the server down I created a new application, copied the client code to connect, and then called the shutdown method on the server.
Shutdown
public class Shutdown {
    private String serverAddress = "127.0.0.1";
    private String serverPort = "6789";
    private ReceiveMessageInterface rmiServer;
    private Registry registry;

    public Shutdown() {
        try {
            registry = LocateRegistry.getRegistry(serverAddress, (new Integer(serverPort)).intValue());
            rmiServer = (ReceiveMessageInterface) (registry.lookup("rmiServer"));
            logger.info("Client started correctly");
            rmiServer.shutDownServer();
            System.exit(0);
        } catch (UnmarshalException e) {
            logger.error("Unmarshall exception. Exiting application", e);
            System.exit(-1);
        } catch (RemoteException e) {
            logger.error("Remote object exception occured when connecting to server. Exiting application", e);
            System.exit(-1);
        } catch (NotBoundException e) {
            logger.error("Not Bound Exception occured when connecting to server. Exiting application", e);
            System.exit(-1);
        }
    }
}
No matter what I try, I keep getting the following exception:
ERROR com.rmi.client.RMIClient - Unmarshall exception. Exiting application
java.rmi.UnmarshalException: Error unmarshaling return header; nested exception is:
java.net.SocketException: Connection reset
at sun.rmi.transport.StreamRemoteCall.executeCall(Unknown Source)
at sun.rmi.server.UnicastRef.invoke(Unknown Source)
at java.rmi.server.RemoteObjectInvocationHandler.invokeRemoteMethod(Unknown Source)
at java.rmi.server.RemoteObjectInvocationHandler.invoke(Unknown Source)
at $Proxy0.shutDownServer(Unknown Source)
at com.rmi.shutdown.Shutdown.<init>(Shutdown.java:31)
at com.rmi.shutdown.Shutdown.main(Shutdown.java:52)
Caused by: java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(Unknown Source)
at java.io.BufferedInputStream.fill(Unknown Source)
at java.io.BufferedInputStream.read(Unknown Source)
at java.io.DataInputStream.readByte(Unknown Source)
... 7 more
I believe this might be because the client is not properly disconnected and just gets "cut off", but I am unsure how else to shut down the server side.
Can someone please advise?
Thanks.

Unexport with force = true doesn't abort calls in progress. In general it will let in-progress calls run to completion. Your shutDownServer method is almost correct in that it unregisters the remote reference and unexports it. What it does next doesn't work, though. First, it sleeps for one second. This keeps the call in progress and keeps the client waiting for a reply. Then the shutdown code exits the server JVM without returning from the remote call. This closes the client's connection while it's still awaiting a reply. That's why the client gets the connection reset exception.
To shut down cleanly, unregister the remote object, unexport it with force = true (as you've done) and then simply return. This will send a reply to the client, letting its remote call complete, and it will then exit. Back on the server, after the last in-progress call has completed, if there are no other objects exported, and if there's nothing else keeping the JVM around (such as non-daemon threads) the JVM will exit. You need to let RMI finish up its server-side processing instead of calling System.exit().
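A minimal sketch of shutDownServer along those lines, reusing the registry field and logger from the question:
public void shutDownServer() throws RemoteException {
    try {
        registry.unbind("rmiServer");
        UnicastRemoteObject.unexportObject(this, true);
        logger.info("server shut down gracefully");
        // No Thread.sleep() and no System.exit(): simply returning lets RMI send
        // the reply to the client, and the JVM exits once nothing else keeps it alive.
    } catch (NotBoundException e) {
        logger.error("Error shutting down the server - could not unbind the registry", e);
    }
}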

The system is doing exactly what you told it to do. You told it to unexport itself, and you set the 'force' parameter to true, which aborts calls in progress, so it unexported itself and aborted the call in progress. Just ignore it, or if you insist on a clean response to the shutdown client, have the server start a new thread for the unexport operation, with a short delay so the shutdown call can return to the client.
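If you take that approach, here is a rough sketch of the deferred unexport; the 500 ms delay and the plain Thread are illustrative choices, not requirements:
public void shutDownServer() throws RemoteException {
    try {
        registry.unbind("rmiServer");
    } catch (NotBoundException e) {
        logger.error("Could not unbind the registry", e);
    }
    final RmiServer server = this;
    new Thread(new Runnable() {
        public void run() {
            try {
                // Give the shutDownServer call time to return a reply to the client.
                Thread.sleep(500);
                UnicastRemoteObject.unexportObject(server, true);
            } catch (InterruptedException | NoSuchObjectException e) {
                logger.error("Deferred unexport failed", e);
            }
        }
    }).start();
}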


Java NIO client causes file descriptor leakage only when remote TCP server is down

The program below acts as a TCP client and uses NIO to open a socket to a remote server:
private Selector itsSelector;
private SocketChannel itsChannel;

public boolean getConnection(Selector selector, String host, int port)
{
    try
    {
        itsSelector = selector;
        itsChannel = SocketChannel.open();
        itsChannel.configureBlocking(false);
        itsChannel.register(itsSelector, SelectionKey.OP_CONNECT);
        itsChannel.connect(new InetSocketAddress(host, port));
        if (itsChannel.isConnectionPending())
        {
            while (!itsChannel.finishConnect())
            {
                // waiting until connection is finished
            }
        }
        itsChannel.register(itsSelector, SelectionKey.OP_WRITE);
        return (itsChannel != null);
    }
    catch (IOException ex)
    {
        close();
        if (ex instanceof ConnectException)
        {
            LOGGER.log(Level.WARNING, "The remote server cannot be reached");
        }
        return false;
    }
}

public void close()
{
    try
    {
        if (itsChannel != null)
        {
            itsChannel.close();
            itsChannel.socket().close();
            itsSelector.selectNow();
        }
    }
    catch (IOException e)
    {
        LOGGER.log(Level.WARNING, "Connection cannot be closed");
    }
}
This program runs on Red Hat Enterprise Linux Server release 6.2 (Santiago)
When a number of concurrent sockets are in the establishment phase, the file descriptor limit reaches its maximum value and I see the exception below while trying to establish more socket connections.
java.net.SocketException: Too many open files
at java.net.PlainSocketImpl.socketAccept(Native Method)
at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:408)
This happens only when the remote node is down; while it is up, all is fine.
When the remote TCP server is down, the exception below is thrown and is handled as an IOException in the code above:
java.net.ConnectException: Connection refused: no further information
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(Unknown Source)
Is there any way to forcefully close the underlying file descriptor in this case?
Thanks in advance for all the help.
private Selector itsSelector;
I cannot see the point of this declaration. You can always get the selector the channel is registered with, if you need it, which you never do. Possibly you are leaking Selectors?
itsChannel.configureBlocking(false);
itsChannel.register(itsSelector, SelectionKey.OP_CONNECT);
Here you are registering for OP_CONNECT but never making the slightest use of the facility.
itsChannel.connect(new InetSocketAddress(host, port));
Here you are starting a pending connection.
if (itsChannel.isConnectionPending())
It is. You just started it. The test is pointless.
{
while (!itsChannel.finishConnect())
{
// waiting until connection is finished
}
}
This is just a complete waste of time and space. If you don't want to use the selector to detect when OP_CONNECT fires, you should call connect() before setting the channel to non-blocking, and get rid of this pointless test and loop.
itsChannel.register(itsSelector, SelectionKey.OP_WRITE);
return (itsChannel != null);
itsChannel cannot possibly be null at this point. The test is pointless. You would be better off allowing the IOExceptions that can arise to propagate out of this method, so that the caller can get some idea of the failure mode. That also places the onus on the caller to close on any exception, not just the ones you're catching here.
catch (IOException ex)
{
close();
if(ex instanceof ConnectException)
{
LOGGER.log(Level.WARNING, "The remoteserver cannot be reached");
}
}
See above. Remove all this. If you want to distinguish ConnectException from the other IOExceptions, catch it separately. And you are forgetting to log anything that isn't a ConnectException.
public void close()
{
try
{
if (itsChannel != null)
{
itsChannel.close();
itsChannel.socket().close();
itsSelector.selectNow();
The second close() call is pointless, as the channel is already closed.
catch (IOException e)
{
LOGGER.log(Level.WARNING, "Connection cannot be closed");
}
I'm glad to see you finally logged an IOException, but you're not likely to get any here.
Don't write code like this.
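Putting that advice together, a sketch of a leaner getConnection using the field names from the question; it connects in blocking mode first, then switches to non-blocking, and lets IOException propagate so the caller decides whether to close:
// Sketch only: blocking connect, then non-blocking registration for OP_WRITE.
public void getConnection(Selector selector, String host, int port) throws IOException
{
    itsSelector = selector;
    itsChannel = SocketChannel.open();
    itsChannel.connect(new InetSocketAddress(host, port)); // blocking connect
    itsChannel.configureBlocking(false);
    itsChannel.register(itsSelector, SelectionKey.OP_WRITE);
}
The caller is then responsible for calling close() if this throws, which also ensures the file descriptor is released on any failure, not just ConnectException.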

IllegalStateException when HTTP Streaming using ResponseBodyEmitter in Spring-MVC

I'm using the newly added HTTP Streaming feature with ResponseBodyEmitter in Spring 4.2.0.BUILD-SNAPSHOT.
I would like to implement a long-running persistent TCP connection carrying an unending stream of data between a (possibly Java) client and server until the client breaks the connection. I would like to avoid using the WebSocket protocol.
If a client breaks the connection while streaming, a runtime IllegalStateException is thrown. I would like to handle this gracefully and clean up the emitter. Short of catching a runtime exception, is there any way to gracefully handle this?
I have to specify an artificially high timeout value on the emitter for a "persistent" connection. Can I set no timeout?
The webapp is deployed on apache-tomcat-7.0.62.
Relevant code as follows:
@RequestMapping(value = "stream", method = RequestMethod.GET)
public ResponseBodyEmitter handleStreaming() {
    ResponseBodyEmitter emitter = new ResponseBodyEmitter(timeout);
    emitters.add(emitter);
    emitter.onCompletion(new Runnable() {
        @Override
        public void run() {
            emitters.remove(emitter);
        }
    });
    emitter.onTimeout(new Runnable() {
        @Override
        public void run() {
            emitters.remove(emitter);
        }
    });
    return emitter;
}
Elsewhere, the loop that pushes data to the registered emitters:
while (true) {
    for (Iterator<ResponseBodyEmitter> iterator = emitters.iterator(); iterator.hasNext();) {
        ResponseBodyEmitter emitter = iterator.next();
        try {
            emitter.send("data...", MediaType.TEXT_PLAIN);
        } catch (IOException | IllegalStateException e) {
            LOGGER.error(e);
            iterator.remove();
        }
    }
    try {
        Thread.sleep(1000);
    } catch (InterruptedException e) {
        LOGGER.error(e);
    }
}
Logs:
INFO: An error occurred in processing while on a non-container thread. The connection will be closed immediately
java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109)
at java.net.SocketOutputStream.write(SocketOutputStream.java:153)
at org.apache.coyote.http11.InternalOutputBuffer.realWriteBytes(InternalOutputBuffer.java:215)
at org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:480)
at org.apache.coyote.http11.InternalOutputBuffer.flush(InternalOutputBuffer.java:119)
at org.apache.coyote.http11.AbstractHttp11Processor.action(AbstractHttp11Processor.java:801)
at org.apache.coyote.Response.action(Response.java:172)
at org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:363)
at org.apache.catalina.connector.OutputBuffer.flush(OutputBuffer.java:331)
at org.apache.catalina.connector.CoyoteOutputStream.flush(CoyoteOutputStream.java:101)
at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:297)
at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:141)
at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229)
at org.springframework.util.StreamUtils.copy(StreamUtils.java:106)
at org.springframework.http.converter.StringHttpMessageConverter.writeInternal(StringHttpMessageConverter.java:109)
at org.springframework.http.converter.StringHttpMessageConverter.writeInternal(StringHttpMessageConverter.java:40)
at org.springframework.http.converter.AbstractHttpMessageConverter.write(AbstractHttpMessageConverter.java:193)
at org.springframework.web.servlet.mvc.method.annotation.ResponseBodyEmitterReturnValueHandler$HttpMessageConvertingHandler.sendInternal(ResponseBodyEmitterReturnValueHandler.java:157)
at org.springframework.web.servlet.mvc.method.annotation.ResponseBodyEmitterReturnValueHandler$HttpMessageConvertingHandler.send(ResponseBodyEmitterReturnValueHandler.java:150)
at org.springframework.web.servlet.mvc.method.annotation.ResponseBodyEmitter.sendInternal(ResponseBodyEmitter.java:180)
at org.springframework.web.servlet.mvc.method.annotation.ResponseBodyEmitter.send(ResponseBodyEmitter.java:164)
....
[ERROR] [02/07/15 18:11 PM] [Controller$TestResponseBodyEmitter:74] - java.lang.IllegalStateException: The request associated with the AsyncContext has already completed processing.
Command:
curl http://localhost:8080/myapp/stream -v -N
data...data...
Ctrl-C
According to the Javadoc of the ResponseBodyEmitter's constructor (found here).
Create a ResponseBodyEmitter with a custom timeout value. By default
not set in which case the default configured in the MVC Java Config or
the MVC namespace is used, or if that's not set, then the timeout
depends on the default of the underlying server.
Therefore, do give a timeout value when you create the ResponseBodyEmitter instance.
PS: In my environment ResponseBodyEmitter#getTimeout() returned null; this does not mean that there is an infinite timeout. On the contrary, after 5-10 seconds the connection timed out.
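For illustration, a variant of the controller method that passes an explicit timeout to the constructor; the 24-hour value is arbitrary and java.util.concurrent.TimeUnit is assumed to be imported:
@RequestMapping(value = "stream", method = RequestMethod.GET)
public ResponseBodyEmitter handleStreaming() {
    // Pass an explicit timeout (in milliseconds) instead of relying on the container default.
    ResponseBodyEmitter emitter = new ResponseBodyEmitter(TimeUnit.HOURS.toMillis(24));
    emitters.add(emitter);
    emitter.onCompletion(() -> emitters.remove(emitter));
    emitter.onTimeout(() -> emitters.remove(emitter));
    return emitter;
}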

Java serversocket not detecting lost connection

I have a socket client (on android phone) and server (on PC) both on a wifi network and the server successfully reads data from the client.
However, when I turn off the wifi on the phone the server read just hangs, whereas I was hoping some error would be thrown.
I do have setSoTimeout set on the server, but the read is not timing out.
On the PC netstat still shows an established connection
netstat -na | grep 6668
TCP 192.168.43.202:6668 192.168.43.26:43076 ESTABLISHED
Is there a way to tell if the client host has disappeared, or to get the read to time out?
Here is the server read:
if (ss.isConnected()) {
    try {
        readData();
    } catch (java.net.SocketTimeoutException ex) {
        logger.warning(ex.toString());
    } catch (InterruptedIOException ex) {
        logger.warning(ex.toString());
    } catch (IOException ex) {
        logger.log(Level.WARNING, "Data communication lost will close streams - IOEx - socket status {0}", ss.socketStatus());
        closeStreams();
    } catch (Exception ex) {
        logger.log(Level.WARNING, "Data communication lost will close streams - Ex - socket status {0}", ss.socketStatus());
        closeStreams();
    }
}
Where readData is,
public void readData() throws IOException {
    for (int i = 0; i < data.length; i++) {
        data[i] = ss.readDouble();
    }
}
ss.readDouble() is,
public double readDouble() throws IOException {
    return in.readDouble();
}
And the server connection,
public void connect() throws IOException {
    if (serverSocket == null || serverSocket.isClosed()) {
        init();
    }
    logger.log(Level.INFO, "Wait on " + serverSocket.getLocalPort());
    server = serverSocket.accept();
    serverSocket.close();
    logger.log(Level.INFO, "Connected to {0}", server.getRemoteSocketAddress());
    out = new DataOutputStream(server.getOutputStream());
    in = new DataInputStream(server.getInputStream());
}
Use a timeout: say, if no data has been sent for 10 minutes, close the connection.
Setting a timeout for socket operations
The answer to that question may help you.
This is the nature of a TCP connection, not Java sockets per se. If the remote peer disconnects with a broken connection, how should your server know whether the peer simply has no data to send?
Writing on a closed socket will cause an exception; a read will simply block if the client doesn't end the TCP connection properly, for the reason above.
If you go through the Socket API, you will find an option to set a timeout (before proceeding with a blocking operation).
You could also consider TCP keep-alive, which is also exposed by the Socket API.
// Edit: additional information as per the OP comment
When your client connects to the server, you create a client socket to communicate with the peer. Your server socket is the one on which you are listening for new client connections. It is the client socket on which you specify keep-alive or a read timeout, because this is the socket from which you read/write.
// your server is actually reference to ClientSocket
server = serverSocket.accept();
// keep alive duh
server.setKeepAlive(true);
serverSocket.close();
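A sketch combining both suggestions on the accepted socket; the 30-second value is illustrative (note that setSoTimeout on the ServerSocket itself only affects accept(), not reads on the accepted socket):
server = serverSocket.accept();
// Read timeout: a blocked read on this socket throws SocketTimeoutException
// after 30 seconds with no data instead of hanging forever.
server.setSoTimeout(30000);
// TCP keep-alive: lets the OS probe the peer so a dead connection is
// eventually reported as an IOException on read/write.
server.setKeepAlive(true);
serverSocket.close();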

RabbitMQ Java client - How to sensibly handle exceptions and shutdowns?

Here's what I know so far (please correct me):
In the RabbitMQ Java client, operations on a channel throw IOException when there is a general network failure (malformed data from broker, authentication failures, missed heartbeats).
Operations on a channel can also throw the ShutdownSignalException unchecked exception, typically an AlreadyClosedException when we tried to perform an action on the channel/connection after it has been shut down.
The shutting down process happens in the event of "network failure, internal failure or explicit local shutdown" (e.g. via channel.close() or connection.close()). The shutdown event propagates down the "topology", from Connection -> Channel -> Consumer, and when it reaches the Channel, the Consumer's handleShutdown() method gets called.
A user can also add a shutdown listener which is called after the shutdown process completes.
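For reference, a minimal sketch of such a listener using the client's ShutdownListener callback; the logging is illustrative:
connection.addShutdownListener(new ShutdownListener() {
    @Override
    public void shutdownCompleted(ShutdownSignalException cause) {
        // cause.isHardError() distinguishes connection-level from channel-level shutdowns.
        log.info("Shutdown completed, initiated by application: " + cause.isInitiatedByApplication());
    }
});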
Here is what I'm missing:
Since an IOException indicates a network failure, does it also initiate a shutdown request?
How does using auto-recovery mode affect shutdown requests? Does it cause channel operations to block while it tries to reconnect to the channel, or will the ShutdownSignalException still be thrown?
Here is how I'm handling exceptions at the moment, is this a sensible approach?
My setup is that I'm polling a QueueingConsumer and dispatching tasks to a worker pool. The rabbitmq client is encapsulated in MyRabbitMQWrapper here. When an exception occurs polling the queue I just gracefully shutdown everything and restart the client. When an exception occurs in the worker I also just log it and finish the worker.
My biggest worry (related to Question 1): Suppose an IOException occurs in the worker, then the task doesn't get acked. If the shutdown does not then occur, I now have an un-acked task that will be in limbo forever.
Pseudo-code:
class Main {
    public static void main(String[] args) {
        while (true) {
            run();
            //Easy way to restart the client, the connection has been
            //closed so RabbitMQ will re-queue any un-acked tasks.
            log.info("Shutdown occurred, restarting in 5 seconds");
            Thread.sleep(5000);
        }
    }

    public void run() {
        MyRabbitMQWrapper rw = new MyRabbitMQWrapper("localhost");
        try {
            rw.connect();
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    //Wait for a message on the QueueingConsumer
                    MyMessage t = rw.getNextMessage();
                    workerPool.submit(new MyTaskRunnable(rw, t));
                } catch (InterruptedException | IOException | ShutdownSignalException e) {
                    //Handle all AMQP library exceptions by cleaning up and returning
                    log.warn("Shutting down", e);
                    workerPool.shutdown();
                    break;
                }
            }
        } catch (IOException e) {
            log.error("Could not connect to broker", e);
        } finally {
            try {
                rw.close();
            } catch (IOException e) {
                log.info("Could not close connection");
            }
        }
    }
}

class MyTaskRunnable implements Runnable {
    ....

    public void run() {
        doStuff();
        try {
            rw.ack(...);
        } catch (IOException | ShutdownSignalException e) {
            log.warn("Could not ack task");
        }
    }
}

Netty 3.5.7 Channel.close() throws exception, causing CloseFuture not to be notified

I'm digging into a bug in my Netty program: I use a heartbeat handler between the server and client. When the client system is rebooting, the heartbeat handler on the server side becomes aware of the timeout and then closes the Channel, but sometimes the listener registered on the Channel's CloseFuture is never notified, which is weird.
After digging into the Netty 3.5.7 source code, I figured out that the only way a Channel's CloseFuture gets notified is through AbstractChannel.setClosed(); maybe this method is not executed when the Channel is closed. See below:
NioServerSocketPipelineSink:
private static void close(NioServerSocketChannel channel, ChannelFuture future) {
    boolean bound = channel.isBound();
    try {
        if (channel.socket.isOpen()) {
            channel.socket.close();
            Selector selector = channel.selector;
            if (selector != null) {
                selector.wakeup();
            }
        }

        // Make sure the boss thread is not running so that that the future
        // is notified after a new connection cannot be accepted anymore.
        // See NETTY-256 for more information.
        channel.shutdownLock.lock();
        try {
            if (channel.setClosed()) {
                future.setSuccess();
                if (bound) {
                    fireChannelUnbound(channel);
                }
                fireChannelClosed(channel);
            } else {
                future.setSuccess();
            }
        } finally {
            channel.shutdownLock.unlock();
        }
    } catch (Throwable t) {
        future.setFailure(t);
        fireExceptionCaught(channel, t);
    }
}
On some platforms channel.socket.close() may throw an IOException, which means channel.setClosed() may never be executed, so the listener registered on the CloseFuture may not be notified.
Here is my question: have you ever encountered this problem? Is the analysis right?
I figured out that it's my heartbeat handler causing the problem: it never times out, so it never closes the channel. The code below runs in a timer:
if ((now - lastReadTime > heartbeatTimeout)
        && (now - lastWriteTime > heartbeatTimeout)) {
    getChannel().close();
    stopHeartbeatTimer();
}
where lastReadTime and lastWriteTime are updated like below:
public void writeComplete(ChannelHandlerContext ctx, WriteCompletionEvent e)
        throws Exception {
    lastWriteTime = System.currentTimeMillis();
    super.writeComplete(ctx, e);
}

public void messageReceived(ChannelHandlerContext ctx, MessageEvent e)
        throws Exception {
    lastReadTime = System.currentTimeMillis();
    super.messageReceived(ctx, e);
}
The remote client is Windows XP and the server is Linux, both on JDK 1.6.
I think writeComplete is still invoked internally after the remote client's system starts rebooting, although messageReceived is not invoked and no IOException is thrown during this period.
I will redesign the heartbeat handler: attach a timestamp and a HEART_BEAT flag to each heartbeat packet; when the peer side receives the packet, it sends the packet back with the same timestamp and an ACK_HEART_BEAT flag; when the current side receives this ack packet, it uses the timestamp to update lastWriteTime.
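A rough sketch of that redesign; the HeartbeatPacket class, flag values, and handler wiring below are hypothetical illustrations of the idea, not the actual implementation:
// Hypothetical heartbeat packet: a flag plus the sender's timestamp.
class HeartbeatPacket {
    static final int HEART_BEAT = 1;
    static final int ACK_HEART_BEAT = 2;
    final int flag;
    final long timestamp;
    HeartbeatPacket(int flag, long timestamp) { this.flag = flag; this.timestamp = timestamp; }
}

// Inside the heartbeat handler (Netty 3.x style): echo heartbeats, and only
// treat the connection as alive when an ack actually comes back.
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
    Object msg = e.getMessage();
    if (msg instanceof HeartbeatPacket) {
        HeartbeatPacket p = (HeartbeatPacket) msg;
        if (p.flag == HeartbeatPacket.HEART_BEAT) {
            // Echo back with the same timestamp so the peer can confirm the round trip.
            ctx.getChannel().write(new HeartbeatPacket(HeartbeatPacket.ACK_HEART_BEAT, p.timestamp));
        } else if (p.flag == HeartbeatPacket.ACK_HEART_BEAT) {
            // Only a completed round trip counts as write activity.
            lastWriteTime = p.timestamp;
        }
        return;
    }
    lastReadTime = System.currentTimeMillis();
    super.messageReceived(ctx, e);
}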
