I am developing a file transfer application in Java, with an applet as the client and a standalone Java app as the server (on a dedicated machine hosted in a datacenter).
I use DataOutputStream/DataInputStream to transfer the data on both sides.
When I send large volumes of data, the bandwidth is very unstable: everything is fine at first, then the TCP stream freezes for 40-50 seconds while nothing is transferred, and then it starts again.
When I look at the TCP stream with Ethereal, I see duplicate ACKs, fast retransmits, and TCP retransmissions.
But I don't think the problem originates in Java: I have the same problem with FTP transfers in FileZilla.
Yet when I transfer data using netcat (netcat client + netcat server), everything is fine: the bandwidth is stable, and lost TCP packets seem to be retransmitted immediately without any pause, no matter how much data is transferred.
It's as if Java were not as good as netcat at handling TCP streams.
I tried playing with Socket.setSendBufferSize(), but I didn't see any difference.
Any ideas?
Thanks!
And sorry for my bad English.
Mr amischiefr is right!
It's the same problem as in the other thread.
My problem was solved by replacing DataOutputStream/DataInputStream with BufferedOutputStream/BufferedInputStream.
The write(byte[], off, len) methods have the same signature, and the docs don't mention such a difference in behavior. DataOutputStream itself does no buffering (it simply forwards writes to the underlying stream), whereas BufferedOutputStream coalesces small writes into larger ones, and that makes the difference on the wire.
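For reference, here is a minimal sketch of the change on the sending side (the host, port, buffer sizes, and data source are illustrative, not the values from my app):

    import java.io.*;
    import java.net.Socket;

    public class BufferedTransferClient {
        public static void main(String[] args) throws IOException {
            try (Socket socket = new Socket("example.com", 2121)) {     // illustrative host/port
                // Wrapping the socket stream in a BufferedOutputStream coalesces
                // small writes into larger ones before they reach the network.
                DataOutputStream out = new DataOutputStream(
                        new BufferedOutputStream(socket.getOutputStream(), 64 * 1024));

                byte[] chunk = new byte[8 * 1024];
                int n;
                while ((n = System.in.read(chunk)) != -1) {             // illustrative data source
                    out.write(chunk, 0, n);
                }
                out.flush();                                             // push any buffered tail
            }
        }
    }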
Thanks!
Sounds more like your network is bogged down and you are seeing TCP windowing (I believe that's the correct term) basically limiting your bandwidth.
I want to create a UDP output (upload) stream using Java. I will get my input (source) data from a named pipe or a contiguous file opened as a FileInputStream.
My problem is that all of the UDP examples I can find on the internet only demonstrate console sessions: echo servers and such.
I'm trying to create a way to stream continuous content such as audio/video. I don't care what gets lost; I'm leaving that for my users to worry about. However, my code does need to allow setting the buffer size and creating a UDP connection.
Ideally, a good example would show how to do both upload and download connections (client mode and server mode).
Can you provide some code to do this, or point me to a link? The fact that I cannot find a UDP stream client/server example is ridiculous. Using UDP for console sessions tests the limits of a person's sanity; that should never even be considered optional, let alone useful. The client/server code I need must be compatible with GNU netcat (to ensure correct performance).
I have tried this with a client:
    byte[] buffer = new byte[udpPacketSize]; // 4096
    int len;
    // read a chunk from the input source and forward it as one datagram
    while ((len = standardInput.read(buffer)) != -1) {
        udpSocket.send(new DatagramPacket(buffer, len, host, port));
    }
But when I stop sending data and disconnect, I cannot reconnect to send more data. I'm not sure if that is what is supposed to happen, because 1) I am completely out of my element here, and 2) when I disconnect after sending the data, the remote instance of GNU netcat does not exit the way it does in TCP mode.
Help: I need a real network systems engineer to show me how to implement UDP for practical applications!
[and somebody to remove all of that garbage from the internet, but let's keep it simple]
[Further: please do not respond with libraries, packages, or shell commands as a solution. I must be able to run this on any embedded device, which may not have those programs, and libraries don't teach me or anyone else how to do anything on their own.]
But when I stop sending data and disconnect, I cannot reconnect to send more data.
That's the way GNU Netcat UDP mode works, as far as netcat to netcat on a single machine goes...
Your client needs to read a response from the server before disconnecting. So, while acting as the "middle man", you should not be concerned with this, as long as you can connect your network client's local client to the server's response mechanism.
In other words, you need to provide a bidirectional, half-duplex connection (1:1 communication), since you are not managing the protocol.
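A rough sketch of what that means for your client loop (the host, port, and packet size are placeholders; the single receive at the end is the "read a response before disconnecting" step):

    import java.io.InputStream;
    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.net.SocketTimeoutException;

    public class UdpUploader {
        public static void main(String[] args) throws Exception {
            InetAddress host = InetAddress.getByName("192.168.1.10");  // placeholder target
            int port = 9999;                                           // placeholder port
            int udpPacketSize = 4096;

            try (DatagramSocket udpSocket = new DatagramSocket()) {
                InputStream source = System.in;                        // named pipe, file, or stdin
                byte[] buffer = new byte[udpPacketSize];
                int len;
                while ((len = source.read(buffer)) != -1) {
                    udpSocket.send(new DatagramPacket(buffer, len, host, port));
                }

                // Read one response from the server before closing, so a netcat-style
                // peer is not left hanging on a half-finished exchange.
                udpSocket.setSoTimeout(5000);
                try {
                    DatagramPacket response = new DatagramPacket(new byte[udpPacketSize], udpPacketSize);
                    udpSocket.receive(response);
                } catch (SocketTimeoutException e) {
                    // no reply within the timeout; nothing more we can do
                }
            }
        }
    }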
Alternatively, you can use a different UDP server than GNU Netcat. I have tested this one, and it works without the must-read-reply bug, which effectively means there is nothing wrong with my example code. The must-read-reply behavior has nothing to do with a correct UDP server implementation (unless you must connect to a GNU Netcat 0.7.1-compatible server).
It is worth noting that it isn't very useful to run GNU Netcat's UDP server mode without a driver (script/program) behind it, especially if you want a continuous process, as you could lock your remote client out until the process is respawned.
I'm writing a highly loaded client/server application. On some OSes there are cases where the connection is lost but Netty doesn't know about it (since TCP/IP has no built-in pinging), so I decided to implement connection pinging at the application level.
Then I faced the next problem: a ping from the server cannot reach the client and come back within a reasonable time when the server is sending too many messages to the client over a slow network connection (the write buffer high-water mark is rather large, several MB). In that case the server drops the connection even though it is alive and working.
So I decided to look at I/O progress while pinging as well, so that I could treat the following situation as normal: the ping has timed out, but bytes from the server are still being processed and written to the socket.
However, it looks like it's impossible in Netty to count the bytes actually written to the socket and to measure the time of the last socket write, because NioSocketChannel.doWrite(ChannelOutboundBuffer in) doesn't have any callbacks for that. And I don't want to hack the Netty code by somehow overriding the NioSocketChannel doWrite method.
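The closest approximation I can think of is recording when each write's ChannelPromise completes in an outbound handler (the promise succeeds once the message has left the outbound buffer), but that is per message rather than a true doWrite hook. A rough sketch, with a handler name of my own:

    import io.netty.buffer.ByteBuf;
    import io.netty.channel.ChannelFutureListener;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelOutboundHandlerAdapter;
    import io.netty.channel.ChannelPromise;

    import java.util.concurrent.atomic.AtomicLong;

    // Tracks how many bytes have been confirmed written and when the last write completed.
    public class WriteProgressHandler extends ChannelOutboundHandlerAdapter {

        private final AtomicLong bytesWritten = new AtomicLong();
        private volatile long lastWriteNanos = System.nanoTime();

        @Override
        public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) {
            final int size = (msg instanceof ByteBuf) ? ((ByteBuf) msg).readableBytes() : 0;
            promise.addListener((ChannelFutureListener) future -> {
                if (future.isSuccess()) {
                    bytesWritten.addAndGet(size);
                    lastWriteNanos = System.nanoTime();
                }
            });
            ctx.write(msg, promise);
        }

        public long bytesWritten()   { return bytesWritten.get(); }
        public long lastWriteNanos() { return lastWriteNanos; }
    }

Whether write-promise completion is close enough to "written to the socket" for my timeout logic is, of course, the open question.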
I'm using netty 4.0.42.
Any help is appreciated!
I am wondering if it is a good idea to have two separate ports, one for reading and one for writing. Can I expect better performance?
NOTE: the server runs CentOS, the client is Flash, and the message format is JSON.
There's no significant performance advantage, and it can require much more code to handle two sockets than one, particularly on the server side.
You'd also still have to open both sockets from the client side, as most systems wouldn't permit the server to open a connection back to the client.
AFAIK, TCP is optimised on the assumption that you will send a request and get a response on the same socket; in any case, the difference is likely to be trivial.
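For what it's worth, a single socket already gives you both directions. A minimal sketch (the host, port, and JSON line are placeholders):

    import java.io.*;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;

    public class SingleSocketClient {
        public static void main(String[] args) throws IOException {
            // One socket, used both to write a request and to read the reply.
            try (Socket socket = new Socket("example.com", 9000)) {     // placeholder host/port
                Writer out = new OutputStreamWriter(socket.getOutputStream(), StandardCharsets.UTF_8);
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(socket.getInputStream(), StandardCharsets.UTF_8));

                out.write("{\"type\":\"hello\"}\n");   // hypothetical newline-delimited JSON message
                out.flush();

                String reply = in.readLine();          // the response arrives on the same connection
                System.out.println("server said: " + reply);
            }
        }
    }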
Often the simplest solution is also the fastest.
What is the problem you are trying to solve?
It's best to keep it in TCP with a single port, though it also depends on whether you are using NIO or not.
In case you do want two ports, or you are not using TCP (e.g. UDP):
If you are on 32-bit CentOS, make sure your kernel is tuned to allow more open ports than the default.
This is to prevent port starvation, which would quickly cripple your server.
Do the math: if you need to support 100 users, that is 100 x 2 = 200 open ports, but in most cases only 65534 - 1024 = 64510 ports are available, so if you can afford it, it's fine.
Also remember that most ISPs block certain ports, so keep the right ports open for read/write.
regards
I need to connect to a hardware device on the LAN. We need to ping that device with some hex code, and the device sends back a response to that ping command. How do I read that response?
You are going to need to look into networking code, specifically sockets. Although it takes a bit of extra learning, sockets are best handled using a few threads, unless you are willing to have your application block while waiting for the response.
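As a starting point, here is a minimal sketch of sending some hex bytes over a socket and reading the reply (the address, port, and the bytes themselves are placeholders; your device's protocol documentation has the real values, and the device may use UDP rather than TCP):

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.Socket;

    public class DevicePing {
        public static void main(String[] args) throws Exception {
            byte[] ping = { (byte) 0xAA, (byte) 0x01, (byte) 0x55 };    // placeholder "ping" bytes

            try (Socket socket = new Socket("192.168.1.50", 5000)) {    // placeholder address/port
                socket.setSoTimeout(3000);                              // don't wait forever for a reply

                OutputStream out = socket.getOutputStream();
                out.write(ping);
                out.flush();

                InputStream in = socket.getInputStream();
                byte[] reply = new byte[256];
                int n = in.read(reply);                                 // blocks until data or timeout
                System.out.printf("received %d bytes:%n", n);
                for (int i = 0; i < n; i++) {
                    System.out.printf("%02X ", reply[i]);
                }
            }
        }
    }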
I would give more info, but I doubt you will even notice this; if you do, I can add some more for you.
Well, I am developing a single-server, multiple-client program in Java. My problem is: can I use a single stream for all the clients, or do I have to create a separate stream for each client?
Please help.
Thank you.
Typically you'd need a stream per client. In some cases you can get away with UDP and multicasting, but it doesn't sound like a great idea for a chat server.
Usually it's easy to get a stream per client with no extra work, because each client will connect to the server anyway, and a stream can easily be set up over that connection.
Yes, you can, but I think it would be harder.
If you're using java.net.ServerSocket then each client accepted through:
Socket client = server.accept();
will have its own stream, so you don't have to do anything else.
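A minimal sketch of that pattern, with one thread and one pair of streams per accepted client (the port number and the echo protocol are just examples):

    import java.io.*;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class MultiClientServer {
        public static void main(String[] args) throws IOException {
            try (ServerSocket server = new ServerSocket(12345)) {       // example port
                while (true) {
                    Socket client = server.accept();                    // one socket per client
                    new Thread(() -> handle(client)).start();           // one thread per client
                }
            }
        }

        private static void handle(Socket client) {
            // Each client gets its own input/output streams from its own socket.
            try (BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                String line;
                while ((line = in.readLine()) != null) {
                    out.println("echo: " + line);                       // trivial example protocol
                }
            } catch (IOException ignored) {
                // client disconnected
            }
        }
    }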
Is there a real need for a single stream for all clients, or is it just something you think would help?
If it's the latter, it could cause more problems than it solves.
Can you do it?
Yes, as Jon Skeet said, you can use multicasting.
Should you do it?
That depends on what you are using the streams for.
For most client server applications, you will need a stream per client to maintain independent communications. Of course, there are applications where using multicasting is the right approach, such as live video streaming. In such a case, you would not want to overwhelm your network while streaming the same data to multiple clients. Of course, even in this case there will typically be a single control channel of some sort between each client and server.
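For completeness, a minimal sketch of the multicast receive side (the group address and port are placeholders):

    import java.net.DatagramPacket;
    import java.net.InetAddress;
    import java.net.MulticastSocket;

    public class MulticastReceiver {
        public static void main(String[] args) throws Exception {
            InetAddress group = InetAddress.getByName("230.0.0.1");     // placeholder multicast group
            try (MulticastSocket socket = new MulticastSocket(4446)) {  // placeholder port
                socket.joinGroup(group);                                // start receiving the shared stream
                byte[] buf = new byte[1500];
                while (true) {
                    DatagramPacket packet = new DatagramPacket(buf, buf.length);
                    socket.receive(packet);                             // every group member sees the same datagrams
                    System.out.println("got " + packet.getLength() + " bytes");
                }
            }
        }
    }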