What's the proper way to continuously read socket messages through DataInputStream? - java

I'm trying to build a Java BitTorrent client. From what I understand, after peers handshake with one another they may start sending messages to each other, often sporadically.
Using a DataInputStream I can read messages, but if I call a read and nothing is on the stream, the call blocks until the peer sends something. Is there a way I can tell whether something is waiting on the stream? Or should I create a new thread per peer that reads the stream continuously until the client shuts the connections down?

I think you need to do some major experimenting so that you can start to learn the basics of socket I/O. Trying to answer your question "as is" is difficult, because you don't yet understand enough to ask it in a way that can be answered.
If you want to be able to tell if there is data to read, then you should not use the blocking I/O approach. Instead, you will need to use the APIs known as "NIO", which allow you to "select" a socket that has data to read (i.e. a socket that is associated with a buffer that already has data in it).
This will make much more sense after you write a lot of code and mess it up a few times. The underlying I/O primitives are actually quite primitive (pun intended). In this industry, we just made up lots of complicated terms and function names and API descriptions so that people would think that network communication is magic. It's not. It's generally no more complicated than memcpy().
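For concreteness, a minimal sketch of that selector-based approach might look like this (the port and buffer size are arbitrary placeholders, and the actual message parsing is left out):
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SelectLoopSketch {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(6881));        // placeholder port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buf = ByteBuffer.allocate(16 * 1024);
        while (true) {
            selector.select();                           // blocks until some channel is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buf.clear();
                    int n = client.read(buf);            // never blocks; -1 means the peer closed
                    if (n == -1) {
                        key.cancel();
                        client.close();
                    } else if (n > 0) {
                        buf.flip();
                        // hand the bytes to your message parser here
                    }
                }
            }
        }
    }
}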

There is a function in C called select(). In the scenario you've described, you need an equivalent of select in Java. And that is, as cpurdy mentioned, non-blocking socket I/O, or NIO. Cursory googling returned the following links:
http://tutorials.jenkov.com/java-nio/socket-channel.html
http://www.owlmountain.com/tutorials/NonBlockingIo.htm
http://rox-xmlrpc.sourceforge.net/niotut/index.htm

You might want to take a look at the Netty project: http://netty.io/
Netty makes it very easy to get started with network programming.

Related

Java Socket, small string needlessly being split across two packets

I was attempting to communicate with a device server over Sockets. After my little Java program hung on readLine, I ended up attaching a packet sniffer to my target application and found out that os.writeBytes("notify\n"); was being split into two packets, the first containing n and the next otify, which the server did not like. I fixed this by adding another writeBytes beforehand:
os.writeBytes(" ");
os.writeBytes("notify\n");
os.flush();
This seems a bit hacky and potentially unstable to me. Could someone shed some light on why I'm having to do this and suggest a better solution?
Cheers
When working with raw socket connections, you can never assume that you will get your messages in discrete chunks. In a production environment, it's entirely possible you will receive partial messages, or multiple messages at a time.
If you don't want to deal with this, you should consider using a library like Netty which handles these concerns for the programmer.
Having said that, I agree with Thomas that your problem is probably related to your choice of writeBytes.
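One common way to deal with that (sketched here with DataOutputStream/DataInputStream; this is a hypothetical length-prefixed framing, not necessarily what your device server expects) is to prefix each message with its length so the reader can reassemble it regardless of how TCP splits the bytes:
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public final class Framing {
    // Write a 4-byte length header followed by the message bytes.
    public static void writeMessage(DataOutputStream out, String msg) throws IOException {
        byte[] bytes = msg.getBytes(StandardCharsets.UTF_8);
        out.writeInt(bytes.length);
        out.write(bytes);
        out.flush();
    }

    // Read exactly one message, no matter how many TCP segments it arrived in.
    public static String readMessage(DataInputStream in) throws IOException {
        int len = in.readInt();
        byte[] bytes = new byte[len];
        in.readFully(bytes);               // loops internally until len bytes have arrived
        return new String(bytes, StandardCharsets.UTF_8);
    }
}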

java socket / output stream writes : do they block?

If I am only WRITING to a socket on an output stream, will it ever block? Only reads can block, right? Someone told me writes can block but I only see a timeout feature for the read method of a socket - Socket.setSoTimeout().
It doesn't make sense to me that a write could block.
A write on a Socket can block too, especially if it is a TCP Socket. The OS will only buffer a certain amount of untransmitted (or transmitted but unacknowledged) data. If you write stuff faster than the remote app is able to read it, the socket will eventually back up and your write calls will block.
It doesn't make sense to me that a write could block.
An OS kernel is unable to provide an unlimited amount of memory for buffering unsent or unacknowledged data. Blocking in write is the simplest way to deal with that.
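If you want to see this for yourself, here is a throwaway local sketch (my own illustration of the point above, not code from the answer): the accepted side never reads, so write() eventually blocks once the OS send and receive buffers fill up. It deliberately never terminates.
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class BlockingWriteDemo {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0);
             Socket writer = new Socket("localhost", server.getLocalPort());
             Socket reader = server.accept()) {          // accepted peer never reads
            OutputStream out = writer.getOutputStream();
            byte[] chunk = new byte[64 * 1024];
            long written = 0;
            while (true) {
                out.write(chunk);                        // eventually blocks right here
                written += chunk.length;
                System.out.println("written so far: " + written);
            }
        }
    }
}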
Responding to these followup questions:
So is there a mechanism to set a timeout for this? I'm not sure what behavior it'd have... maybe throw away data if buffers are full? Or possibly delete older data in the buffer?
There is no mechanism to set a write timeout on a java.net.Socket. There is a Socket.setSoTimeout() method, but it affects accept() and read() calls ... and not write() calls. Apparently, you can get write timeouts if you use NIO, non-blocking mode, and a Selector, but this is not as useful as you might imagine.
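For completeness, this is roughly how the read-side timeout behaves (the endpoint and timeout value below are placeholders):
import java.io.InputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class ReadTimeoutSketch {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("example.com", 9999)) {   // placeholder endpoint
            socket.setSoTimeout(5_000);                           // 5 second read timeout
            InputStream in = socket.getInputStream();
            byte[] buf = new byte[1024];
            try {
                int n = in.read(buf);                             // blocks at most ~5 seconds
                System.out.println("read " + n + " bytes");
            } catch (SocketTimeoutException e) {
                // the connection is still usable; decide whether to retry or give up
                System.out.println("no data within the timeout");
            }
        }
    }
}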
A properly implemented TCP stack does not discard buffered data unless the connection is closed. However, when you get a write timeout, it is uncertain whether the data that is currently in the OS-level buffers has been received by the other end ... or not. The other problem is that you don't know how much of the data from your last write was actually transferred to OS-level TCP stack buffers. Absent some application level protocol for resyncing the stream*, the only safe thing to do after a timeout on write is to shut down the connection.
By contrast, if you use a UDP socket, write() calls won't block for any significant length of time. But the downside is that if there are network problems or the remote application is not keeping up, messages will be dropped on the floor with no notification to either end. In addition, you may find that messages are sometimes delivered to the remote application out of order. It will be up to you (the developer) to deal with these issues.
* It is theoretically possible to do this, but for most applications it makes no sense to implement an additional resyncing mechanism on top of an already reliable (to a point) TCP/IP stream. And if it did make sense, you would also need to deal with the possibility that the connection closed ... so it would be simpler to assume it closed.
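To illustrate the UDP alternative mentioned above, a fire-and-forget send looks roughly like this (host and port are placeholders):
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class UdpSendSketch {
    public static void main(String[] args) throws Exception {
        byte[] payload = "hello".getBytes(StandardCharsets.UTF_8);
        try (DatagramSocket socket = new DatagramSocket()) {
            DatagramPacket packet = new DatagramPacket(
                    payload, payload.length,
                    InetAddress.getByName("example.com"), 9999);  // placeholder endpoint
            // send() returns as soon as the datagram is handed to the OS;
            // there is no delivery guarantee and no notification of loss.
            socket.send(packet);
        }
    }
}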
The only way to do this is to use NIO and selectors.
See the writeup from the Sun/Oracle engineer in this bug report:
https://bugs.java.com/bugdatabase/view_bug.do?bug_id=4031100

Java NIO: Relationship between OP_ACCEPT and OP_READ?

I am re-writing the core NIO server networking code for my project, and I'm trying to figure out when I should "store" connection information for future use. For example, once a client connects in the usual manner, I store and associate the SocketChannel object for that connected client so that I can write data to that client at any time. Generally I use the client's IP address (including port) as the key in a HashMap that maps to the SocketChannel object. That way, I can easily do a lookup on their IP address and asynchronously send data to them via that SocketChannel.
This might not be the best approach, but it works, and the project is too large to change its fundamental networking code, though I would consider suggestions. My main question, however, is this:
At what point should I "store" the SocketChannel for future use? I have been storing a reference to the SocketChannel once the connection is accepted (via OP_ACCEPT). I feel that this is an efficient approach, because I can assume that the map entry already exists when the OP_READ event comes in. Otherwise, I would need to do a computationally expensive check on the HashMap every time OP_READ occurs, and it is obvious that MANY more of those will occur for a client than OP_ACCEPT. My fear, I guess, is that there may be some connections that become accepted (OP_ACCEPT) but never send any data (OP_READ). Perhaps this is possible due to a firewall issue or a malfunctioning client or network adaptor. I think this could lead to "zombie" connections that are not active but also never receive a close message.
Part of my reason for re-writing my network code is that on rare occasions, I get a client connection that has gotten into a strange state. I'm thinking the way I've handled OP_ACCEPT versus OP_READ, including the information I use to assume a connection is "valid" and can be stored, could be wrong.
I'm sorry my question isn't more specific, I'm just looking for the best, most efficient way to determine if a SocketChannel is truly valid so I can store a reference to it. Thanks very much for any help!
If you're using Selectors and non-blocking IO, then you might want to consider letting NIO itself keep track of the association between a channel and its stateful data. When you call register() on a channel, you can use the three-argument form to pass in an "attachment". At every point in the future, that SelectionKey will always return the attachment object that you provided. (This is pretty clearly inspired by the "void *user_data" type of argument in OS-level APIs.)
That attachment stays with the key, so it's a convenient place to keep state data. The nice thing is that all the mapping from channel to key to attachment will already be handled by NIO, so you do less bookkeeping. Bookkeeping--like Map lookups--can really hurt inside of an IO responder loop.
As an added feature, you can also change the attachment later, so if you needed different state objects for different phases of your protocol, you can keep track of that on the SelectionKey, too.
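As a rough sketch (ConnectionState here is a made-up class standing in for whatever per-connection data you currently keep in the HashMap):
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

// Hypothetical per-connection state object.
class ConnectionState {
    final StringBuilder pendingInput = new StringBuilder();
    int protocolPhase;
}

class AttachmentSketch {
    void onAccept(SocketChannel client, Selector selector) throws Exception {
        client.configureBlocking(false);
        // Three-argument register(): the attachment travels with the key.
        client.register(selector, SelectionKey.OP_READ, new ConnectionState());
    }

    void onRead(SelectionKey key) {
        ConnectionState state = (ConnectionState) key.attachment(); // no map lookup needed
        // ... read from the channel and update state ...
        // The attachment can also be swapped out later, e.g.:
        // key.attach(newStateForNextProtocolPhase);
    }
}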
Regarding the odd state you find your connections in, there are some subtleties in using NIO and selectors that might be biting you. For example, once a SelectionKey signals that it's ready for read, it will continue to be ready for read the next time some other thread calls select(). So, it's easy to end up with multiple threads attempting to read the socket. On the other hand, if you attempt to deregister the key for reading while you're doing the read, then you can end up with threading bugs because SelectionKeys and their interest ops can only be manipulated by the thread that actually calls select(). So, overall, this API has some sharp edges, and it's tricky to get all the state handling correct.
Oh, and one more possibility: depending on who closes the socket first, you may or may not notice a closed socket until you explicitly ask. I can't recall the exact details off the top of my head, but it's something like this: the client half-closes its end of the socket; this does not signal any ready op on the selection key, so the SocketChannel never gets read. This can leave a bunch of sockets in TIME_WAIT status on the client.
As a final recommendation, if you're doing async IO, then I definitely recommend a couple of books in the "Pattern Oriented Software Architecture" (POSA) series. Volume 2 deals with a lot of IO patterns. (For instance, NIO lends itself very well to the Reactor pattern from Volume 2. It addresses a bunch of those state handling problems I mention above.) Volume 4 includes those patterns and embeds them in the larger context of distributed systems in general. Both of these books are a very valuable resource.
An alternative may be to look at an existing NIO socket framework, possible candidates are:
Apache MINA
Sun Grizzly
JBoss Netty

Java NIO Threading issue with SocketChannel.write()

Sometimes, while sending a large amount of data via SocketChannel.write(), the underlying TCP buffer gets filled up, and I have to continually re-try the write() until the data is all sent.
So, I might have something like this:
public void send(ByteBuffer bb, SocketChannel sc) throws IOException, InterruptedException {
    sc.write(bb);
    while (bb.remaining() > 0) {
        Thread.sleep(10);   // back off and retry until the TCP send buffer drains
        sc.write(bb);
    }
}
The problem is that the occasional issue with a large ByteBuffer and an overflowing underlying TCP buffer means that this call to send() will block for an unexpected amount of time. In my project, there are hundreds of clients connected simultaneously, and one delay caused by one socket connection can bring the whole system to a crawl until this one delay with one SocketChannel is resolved. When a delay occurs, it can cause a chain reaction of slowing down in other areas of the project, and having low latency is important.
I need a solution that will take care of this TCP buffer overflow issue transparently and without causing everything to block when multiple calls to SocketChannel.write() are needed. I have considered putting send() into a separate class extending Thread so it runs as its own thread and does not block the calling code. However, I am concerned about the overhead necessary in creating a thread for EACH socket connection I am maintaining, especially when 99% of the time, SocketChannel.write() succeeds on the first try, meaning there's no need for the thread to be there. (In other words, putting send() in a separate thread is really only needed if the while() loop is used -- only in cases where there is a buffer issue, perhaps 1% of the time) If there is a buffer issue only 1% of the time, I don't need the overhead of a thread for the other 99% of calls to send().
I hope that makes sense... I could really use some suggestions. Thanks!
Prior to Java NIO, you had to use one Thread per socket to get good performance. This is a problem for all socket based applications, not just Java. Support for non-blocking IO was added to all operating systems to overcome this. The Java NIO implementation is based on Selectors.
See The definitive Java NIO book and this On Java article to get started. Note however, that this is a complex topic and it still brings some multithreading issues into your code. Google "non blocking NIO" for more information.
The more I read about Java NIO, the more it gives me the willies. Anyway, I think this article answers your problem...
http://weblogs.java.net/blog/2006/05/30/tricks-and-tips-nio-part-i-why-you-must-handle-opwrite
It sounds like this guy has a more elegant solution than the sleep loop.
Also I'm fast coming to the conclusion that using Java NIO by itself is too dangerous. Where I can, I think I'll probably use Apache MINA which provides a nice abstraction above Java NIO and its little 'surprises'.
You don't need the sleep() as the write will either return immediately or block.
You could have an executor which you pass the write to if it doesn't write the first time.
Another option is to have a small pool of thread to perform the writes.
However, the best option for you may be to use a Selector (as has been suggested) so you know when a socket is ready to perform another write.
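A sketch of the executor idea suggested above, assuming the channel is in blocking mode (with a non-blocking channel you would wait on OP_WRITE instead, as described below); one single-threaded executor per connection keeps writes to that socket ordered without blocking the caller:
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class AsyncWriter {
    private final ExecutorService writeExecutor = Executors.newSingleThreadExecutor();
    private final SocketChannel channel;

    AsyncWriter(SocketChannel channel) {
        this.channel = channel;
    }

    void send(ByteBuffer bb) {
        writeExecutor.execute(() -> {
            try {
                while (bb.hasRemaining()) {
                    channel.write(bb);   // may block here, but only this worker thread waits
                }
            } catch (Exception e) {
                // close the connection / report the failure
            }
        });
    }

    void shutdown() {
        writeExecutor.shutdown();
    }
}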
For hundreds of connections, you probably don't need to bother with NIO. Good old-fashioned blocking sockets and threads will do you fine.
With NIO, you can register interest in OP_WRITE for the selection key, and you will get notified when there is room to write more data.
There are a few things you need to do, assuming you already have a loop using Selector.select(); to determine which sockets are ready for I/O.
Set the socket channel to non-blocking after you've created it, sc.configureBlocking(false);
Write (possibly parts of) the buffer and check if there's anything left. The buffer itself takes care of current position and how much is left.
Something like
sc.write(bb);
if (bb.remaining() == 0) {
    // we're done with this buffer; remove OP_WRITE from the interest set if there's nothing else to send
} else {
    // do other stuff / return to the select loop and wait for the next OP_WRITE
}
Get rid of your while loop that sleeps
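Putting those pieces together, the write path might look roughly like this (just a sketch, assuming a single pending ByteBuffer per connection stored as the key's attachment):
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.SocketChannel;

class WriteHandler {
    // Called from the select loop when the key reports isWritable().
    void onWritable(SelectionKey key) throws Exception {
        SocketChannel sc = (SocketChannel) key.channel();
        ByteBuffer pending = (ByteBuffer) key.attachment();

        sc.write(pending);                       // writes as much as the TCP buffer allows
        if (!pending.hasRemaining()) {
            // Everything went out: stop asking for write readiness,
            // otherwise select() will spin because the socket is almost always writable.
            key.interestOps(key.interestOps() & ~SelectionKey.OP_WRITE);
        }
        // If bytes remain, keep OP_WRITE set and the selector will call us again.
    }

    // Called when application code wants to send data.
    void queueWrite(SelectionKey key, ByteBuffer bb) {
        key.attach(bb);
        key.interestOps(key.interestOps() | SelectionKey.OP_WRITE);
        key.selector().wakeup();                 // in case another thread is blocked in select()
    }
}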
I am facing some of the same issues right now:
- If you have a small number of connections but large transfers, I would just create a thread pool and let the writes block in the writer threads.
- If you have a lot of connections, then you could use full Java NIO, register OP_WRITE on your accept()ed sockets, and then wait for the selector to fire.
The O'Reilly Java NIO book covers all of this.
Also:
http://www.exampledepot.com/egs/java.nio/NbServer.html?l=rel
Some research online has led me to believe NIO is pretty much overkill unless you have a lot of incoming connections. Otherwise, if it's just a few large transfers, then just use a write thread; it will probably respond more quickly. A number of people have issues with NIO not responding as quickly as they want. Since your write thread does its blocking on its own, it won't hurt the rest of the system.

Separate threads for socket input and output

I got assigned to work on some performance and random crashing issues of a multi-threaded Java server. Even though threads and thread-safety are not really new topics for me, I found out that designing a new multi-threaded application is probably half as difficult as trying to tweak some legacy code. I skimmed through some well-known books in search of answers, but the weird thing is, as long as I read about it and analyze the examples provided, everything seems clear. However, the second I look at the code I'm supposed to work on, I'm no longer sure about anything! Must be too much theoretical knowledge and too little real-world experience, or something.
Anyway, getting back on topic, as I was doing some on-line research, I came across this piece of code. The question which keeps bothering me is: Is it really safe to invoke getInputStream() and getOutputStream() on the socket from two separate threads without synchronization? Or am I now getting a bit too paranoid about the whole thread-safety issue? Guess that's what happens when like the 5th book in a row tells you how many things can possibly go wrong with concurrency.
PS. Sorry if the question is a bit lengthy or maybe too 'noobie', please be easy on me - that's my first post here.
Edit: Just to be clear, I know sockets work in full-duplex mode and it's safe to concurrently use their input and output streams. Seems fine to me when you acquire those references in the main thread and then initialize thread objects with those, but is it also safe to get those streams in two different threads?
#rsp:
So I've checked Sun's code and PlainSocketImpl does synchronize on those two methods, just as you said. Socket, however, doesn't. getInputStream() and getOutputStream() are pretty much just wrappers for SocketImpl, so probably concurrency issues wouldn't cause the whole server to explode. Still, with a bit of unlucky timing, seems like things could go wrong (e.g. when some other thread closes the socket when the method already checked for error conditions).
As you pointed out, from a code structure standpoint, it would be a good idea to supply each thread with a stream reference instead of a whole socket. I would've probably already restructured the code I'm working on if not for the fact that each thread also uses the socket's close() method (e.g. when the socket receives a "shutdown" command). As far as I can tell, the main purpose of those threads is to queue messages for sending or for processing, so maybe it's a Single Responsibility Principle violation and those threads shouldn't be able to close the socket (compare with Separated Modem Interface)? But then if I keep analysing the code for too long, it appears the design is generally flawed and the whole thing requires rewriting. Even if management were willing to pay the price, seriously refactoring legacy code with no unit tests whatsoever, while dealing with hard-to-debug concurrency issues, would probably do more harm than good. Wouldn't it?
The input stream and output stream of the socket represent two separate data streams or channels. It is perfectly safe to use both streams from threads that are not synchronised with each other. The socket streams themselves will block reading and writing on empty or full buffers.
Edit: the socket implementation classes from Sun do synchronize the getInputStream() and getOutputStream() methods, so calling them from different threads should be OK. I agree with you, however, that passing the streams to the threads using them might make more sense from a code structure standpoint (dependency injection helps testing, for instance).
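A minimal sketch of that structure (stream references obtained once, each thread given only the stream it needs; the actual message handling is left out):
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class SocketHandler {
    private final BlockingQueue<byte[]> outbound = new LinkedBlockingQueue<>();

    void start(Socket socket) throws IOException {
        // Acquire both streams once, then hand each thread only the stream it needs.
        DataInputStream in = new DataInputStream(socket.getInputStream());
        DataOutputStream out = new DataOutputStream(socket.getOutputStream());

        Thread reader = new Thread(() -> {
            try {
                byte[] buf = new byte[4096];
                int n;
                while ((n = in.read(buf)) != -1) {
                    // process the n bytes that just arrived
                }
            } catch (IOException ignored) {
                // socket closed or broken; let the owner decide what to do
            }
        }, "socket-reader");

        Thread writer = new Thread(() -> {
            try {
                while (true) {
                    byte[] msg = outbound.take();   // blocks until something is queued
                    out.write(msg);
                    out.flush();
                }
            } catch (IOException | InterruptedException ignored) {
            }
        }, "socket-writer");

        reader.start();
        writer.start();
    }

    void send(byte[] msg) {
        outbound.add(msg);
    }
}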
