Member EJP commented on my comment here, saying that you cannot reuse a Socket that has had a failed connection. I have a tremendous amount of respect for EJP; however, I find this surprising... If it's true, it would seem to put a severe restriction on the lifespan of any Java app using Sockets - eventually you'd run out, right?
Can anyone clarify the situation, or point to workarounds?
I have figured this out: EJP, you are absolutely correct about this.
The issue is with Socket.close(): the Java Socket object cannot be reused after it has been called, and since closing either the InputStream or the OutputStream also closes the socket (as per the Javadocs), that is the end of the line for the object.
However, it is absolutely possible to create a new Socket object in place of the old one. The underlying native socket should have been released when the Java Socket was closed, and so be available for reuse.
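For illustration, here is a minimal sketch of that pattern (the host, port, retry count and timeout are placeholders, not from the original code): the failed Socket is discarded and a fresh one is constructed for each attempt.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class ReconnectSketch {
    // Sketch only: connect with retries, creating a new Socket object per attempt.
    static Socket connectWithRetry(String host, int port, int attempts) throws IOException {
        IOException last = null;
        for (int i = 0; i < attempts; i++) {
            Socket s = new Socket();                               // a brand-new object every attempt
            try {
                s.connect(new InetSocketAddress(host, port), 2000); // 2s connect timeout (arbitrary)
                return s;                                           // success: caller closes it later
            } catch (IOException e) {
                last = e;
                s.close();                                          // this Socket is now unusable; discard it
            }
        }
        throw last != null ? last : new IOException("no connection attempts made");
    }
}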
OK, thanks all for your consideration.
Recently I had an issue with the embedded-postgres library. I tried to manually pick a free port via
new ServerSocket(0).getLocalPort()
but I was hitting a race condition somewhere (my tests hung on the second test suite startup).
Then I learned that the library has this capability (of using a random port) itself, and it didn't have the problem I had. I analyzed the source code, and the only difference I found was that they do an additional check:
try (ServerSocket socket = new ServerSocket(0)) {
    while (!socket.isBound()) {
        Thread.sleep(50);
    }
    return socket.getLocalPort();
}
So after the port is randomized, they wait until the socket is bound. Code I'm referring to.
I'd like to know why this code is there. My understanding of "binding" was that it's equivalent to "listening" on a given port, yet that can't be the case here, as this code runs before the server starts. In fact, the server will later start on (and bind itself to) this exact port. This has me very confused.
The code in the library is a fix to this issue which gives the reason as
..it appears that the EmbeddedPostgres#detectPort method may cause the
issue as it does not wait for the socket to be bound before querying
the port, leading ServerSocket#getLocalPort to return -1 as
documented.
So the call to ServerSocket.getLocalPort() returns -1 if the socket is not bound, and apparently this may happen some time after a call to new ServerSocket(0), though it's unclear under what circumstances. So the code simply waits to make sure it's bound.
However the documentation doesn't say anywhere that new ServerSocket(0) doesn't return a bound socket. The javadoc for ServerSocket(int port) says "Creates a server socket, bound to the specified port.".
However, they did have an issue, this wait is the only fix, and I suppose it resolved the problem... so perhaps it is just very unclear documentation in the JDK?
As for binding vs. listening: binding is the part where the socket is assigned its local address and port; the socket still needs to actively start accepting connections afterwards.
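To make that distinction concrete, here is a small sketch (not from the library) using the no-argument ServerSocket constructor, where binding and accepting are separate, explicit steps:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class BindVsAcceptSketch {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket()) {       // created unbound
            System.out.println(server.isBound());              // false; getLocalPort() would return -1
            server.bind(new InetSocketAddress(0));             // bind: assign local address, 0 = any free port
            System.out.println(server.getLocalPort());         // now a real port number

            // Accepting connections is a separate step: accept() blocks until a client connects.
            try (Socket client = server.accept()) {
                // handle the connection
            }
        }
    }
}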
I am using the CloudBees Syslog Java Client in a simple application to send Syslog messages periodically to a server. All works fine, but I wondered: if the TcpSyslogMessageSender class is initialized on every loop iteration, it will stop sending new messages after 10 iterations, without any exception. I can easily change this and move the object initialization to the constructor of the calling class, but I want to understand why this happens. From my point of view I am initializing a clean new object on every iteration. Garbage collection should remove the older objects and free the used network resources. But maybe it is not that easy. :)
while (true) {
    TcpSyslogMessageSender messageSender = new TcpSyslogMessageSender();
    messageSender.setDefaultMessageHostname(...);
    ...
    messageSender.sendMessage(msg);
}
I would like to learn more about this!
Cheers,
cmax
I am answering my own question: I learned that TcpSyslogMessageSender has a close() method which actually closes the used socket. It seems that it was not possible to open more than 10 sockets in parallel from one Java instance. Unfortunately, the close() call was not part of the code example given above. With it in place, it is absolutely no problem to initialize and reinitialize the TcpSyslogMessageSender many times. I hope someone will find this helpful.
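For reference, a minimal sketch of the loop with the close() call added (the hostnames, message and interval are placeholders; the setter and sendMessage names follow the snippet above and the library's examples):

import com.cloudbees.syslog.sender.TcpSyslogMessageSender;

public class SyslogLoopSketch {
    public static void main(String[] args) throws Exception {
        while (true) {
            TcpSyslogMessageSender messageSender = new TcpSyslogMessageSender();
            messageSender.setDefaultMessageHostname("myhost");            // placeholder configuration
            messageSender.setSyslogServerHostname("syslog.example.com");  // placeholder configuration
            try {
                messageSender.sendMessage("test message");                // placeholder payload
            } finally {
                messageSender.close();   // releases the underlying socket before the next iteration
            }
            Thread.sleep(60_000);        // placeholder interval between messages
        }
    }
}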
I'm trying to build a Java BitTorrent client. From what I understand, after peers handshake with one another they may start sending messages to each other, often sporadically.
Using a DataInputStream over the connection I can read messages, but if I call read and nothing is on the stream, the call blocks. Is there a way I can tell if something is being sent over the stream? Or should I create a new thread per peer that reads the stream for messages continuously until the client shuts them down?
I think you need to do some major experimenting so that you can start to learn the basics of socket I/O. Trying to answer your question "as is" is difficult, because you don't yet understand enough to ask the question in a manner that it can be answered.
If you want to be able to tell if there is data to read, then you should not use the blocking I/O approach. Instead, you will need to use the APIs known as "NIO", which allow you to "select" a socket that has data to read (i.e. a socket that is associated with a buffer that already has data in it).
This will make much more sense after you write a lot of code and mess it up a few times. The underlying I/O primitives are actually quite primitive (pun intended). In this industry, we just made up lots of complicated terms and function names and API descriptions so that people would think that network communication is magic. It's not. It's generally no more complicated than memcpy().
There is a function in C called select(). In the scenario you've described, you need an equivalent of select in Java. And that is, as cpurdy mentioned, non-blocking socket I/O, or NIO (a minimal sketch follows the links below). Cursory googling returned the following links:
http://tutorials.jenkov.com/java-nio/socket-channel.html
http://www.owlmountain.com/tutorials/NonBlockingIo.htm
http://rox-xmlrpc.sourceforge.net/niotut/index.htm
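As a rough, untested sketch of that select-based approach (the peer address, port and buffer size are placeholders), the read loop might look like this:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class PeerReadSketch {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();

        // Connect to a peer in non-blocking mode (address/port are placeholders).
        SocketChannel peer = SocketChannel.open();
        peer.configureBlocking(false);
        peer.connect(new InetSocketAddress("peer.example.com", 6881));
        peer.register(selector, SelectionKey.OP_CONNECT);

        ByteBuffer buffer = ByteBuffer.allocate(16 * 1024);
        while (true) {
            selector.select(1000);                          // wait up to 1s for any channel to become ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                SocketChannel ch = (SocketChannel) key.channel();
                if (key.isConnectable() && ch.finishConnect()) {
                    key.interestOps(SelectionKey.OP_READ);  // connected: now watch for incoming data
                } else if (key.isReadable()) {
                    buffer.clear();
                    int n = ch.read(buffer);                // returns immediately; -1 means the peer closed
                    if (n > 0) {
                        buffer.flip();
                        // parse the BitTorrent message(s) in 'buffer'
                    } else if (n == -1) {
                        key.cancel();
                        ch.close();
                    }
                }
            }
        }
    }
}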
You might want to take a look at the Netty project: http://netty.io/
It is very easy with Netty to get started on network programming.
I am developing a peer-to-peer application.
Each peer has one server socket channel and one socket channel.
Now I need two selectors to handle the connections: one for the server socket channel and one for the socket channel.
SelectorProvider seems to be a singleton class and it fices only a single instance of Selector, which I can't use for both channels.
Is there a way to use two selectors in a single instance of a program?
private ServerSocketChannel svrScktChnl;
private SocketChannel socketChannel;

// two selectors
public Selector selector = null;
public Selector playerSelector = null;
I am trying to initialize these selectors separately, one for the server socket channel and another for the socket channel.
But I can't initialize the second one, because it throws an error.
Now I need two selectors to handle the connections: one for the server socket channel and one for the socket channel.
No you don't. You can use the same Selector for both, unless for some reason not stated here you want to handle them in separate threads, which is really a violation of everything that NIO stands for.
SelectorProvider seems to be a singleton class
False. SelectorProvider.provider() returns a singleton, but you don't need to use it: there are APIs everywhere that let you specify your own provider. Not that it's relevant, because:
and it fices only a single instance of Selector
False. I don't know what you mean by 'fices', but SelectorProvider.openSelector() returns a new Selector every time you call it, which you could have discovered for yourself without the luxurious technique of posting a question here and waiting possibly forever for a possibly incorrect answer, even if the Provider itself was a singleton, which it isn't.
.. which I can't use for both channels
No. Clearly you've never actually tried it. You need to understand that this is an empirical science where you are expected to conduct your own experiments. Posting questions on Internet sites and sitting back waiting for the answers is not an efficient use of your time or anybody else's, and it is not calculated to deliver the correct answer as quickly as doing your own work.
it throws an error
You can't seriously expect anyone to help you with as little information as that. Would you accept that as a bug report from a customer?
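Not part of the original answer, but a minimal sketch of what it describes: the ServerSocketChannel and the accepted SocketChannels all registered with one Selector (the port and buffer size are placeholders).

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SingleSelectorSketch {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();                 // one selector handles everything

        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9000));            // placeholder port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocate(8192);
        while (true) {
            selector.select();
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    // A peer connected on the server socket channel: register the new channel too.
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    // Data arrived on one of the accepted socket channels.
                    SocketChannel ch = (SocketChannel) key.channel();
                    buffer.clear();
                    if (ch.read(buffer) == -1) {             // peer closed the connection
                        key.cancel();
                        ch.close();
                    }
                }
            }
        }
    }
}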
We have a simple client server architecture between our mobile device and our server both written in Java. An extremely simple ServerSocket and Socket implementation. However one problem is that when the client terminates abruptly (without closing the socket properly) the server does not know that it is disconnected. Furthermore, the server can continue to write to this socket without getting any exceptions. Why?
According to the documentation, Java sockets should throw exceptions if you try to write to a socket that is not reachable on the other end!
The connection will eventually be timed out by the retransmission timeout (RTO). However, the RTO is calculated using a complicated algorithm based on the network round-trip time (RTT); see this RFC:
http://www.ietf.org/rfc/rfc2988.txt
So on a mobile network, this can be minutes. Wait 10 minutes to see if you can get a timeout.
The solution to this kind of problem is to add a heartbeat to your own application protocol and tear down the connection when you don't get an ACK for the heartbeat.
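Not part of the original answer, but a rough sketch of such a heartbeat (the PING/ACK bytes and the timeout value are invented for illustration; the other side would need to echo the ACK):

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;

public class HeartbeatSketch {
    private static final byte PING = 0x01;   // hypothetical heartbeat request
    private static final byte ACK  = 0x02;   // hypothetical heartbeat reply

    // Returns false if the peer did not answer in time; the caller should then close the socket.
    static boolean heartbeat(Socket socket, int timeoutMillis) {
        try {
            socket.setSoTimeout(timeoutMillis);                 // bound the wait for the ACK
            DataOutputStream out = new DataOutputStream(socket.getOutputStream());
            DataInputStream in = new DataInputStream(socket.getInputStream());
            out.writeByte(PING);
            out.flush();
            return in.readByte() == ACK;                        // SocketTimeoutException if no reply
        } catch (IOException e) {
            return false;                                       // unreachable or dead peer
        }
    }
}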
The key phrase here is "without closing the socket properly".
Sockets should always be acquired and disposed of in this way:
final Socket socket = ...; // connect code
try
{
    use( socket ); // use socket
}
finally
{
    socket.close( ); // dispose
}
Even with these precautions, you should specify application timeouts specific to your protocol.
My experience has shown that, unfortunately, you cannot rely on any of the Socket timeout functionality (e.g. there is no timeout for write operations, and even read operations may sometimes hang forever).
That's why you need a watchdog thread that enforces your application timeouts and disposes of sockets that have been unresponsive for a while.
One convenient way of doing this is by initializing Socket and ServerSocket through the corresponding channels in java.nio. The main advantage of such sockets is that they are interruptible: you can simply interrupt the thread that runs the socket protocol and be sure that the socket is properly disposed of.
Note that you should enforce application timeouts on both sides, as it is only a matter of time and bad luck before you experience unresponsive sockets.
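A minimal sketch of the interruptible-channel approach (the host, port, buffer size and watchdog timeout are placeholders): the watchdog interrupts the worker thread, which closes the channel and unblocks the read.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class InterruptibleSocketSketch {
    public static void main(String[] args) throws Exception {
        // Opening the connection through a channel makes the blocking read interruptible.
        SocketChannel channel = SocketChannel.open(new InetSocketAddress("server.example.com", 4000));

        Thread worker = new Thread(() -> {
            ByteBuffer buf = ByteBuffer.allocate(4096);
            try {
                while (channel.read(buf) != -1) {   // blocking read
                    buf.clear();
                    // handle the received data
                }
            } catch (IOException e) {
                // ClosedByInterruptException lands here; the channel is already closed at this point
            }
        });
        worker.start();

        // Watchdog: if the protocol has been unresponsive for too long, interrupt the worker.
        Thread.sleep(30_000);                       // placeholder application timeout
        worker.interrupt();
    }
}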
TCP/IP communications can be very strange. TCP will retry for quite a while at the bottom layers of the stack without ever letting the upper layers know that anything happened.
I would fully expect that after some time period (30 seconds to a few minutes) you should see an error, but I haven't tested this; I'm just going off how TCP apps tend to work.
You might be able to tighten the TCP settings (retry, timeout, etc.), but again, I haven't messed with that much.
Also, it may be that I'm totally wrong and the implementation of Java you are using is just flaky.
To answer the first part of the question (about not knowing that the client has disconnected abruptly), in TCP, you can't know whether a connection has ended until you try to use it.
The notion of guaranteed delivery in TCP is quite subtle: delivery isn't actually guaranteed to the application at the other end (it depends on what guaranteed means really). Section 2.6 of RFC 793 (TCP) gives more details on this topic. This thread on the Restlet-discuss list and this thread on the Linux kernel list might also be of interest.
For the second part (not detecting when you write to this socket), this is probably a question of buffer and timeout (as others have already suggested).
I am facing the same problem.
I think when you register the socket with a selector it doesn't throw any exception.
Are you using a selector with your socket?